If you follow the news even a little, it is hard to miss coverage of autonomous cars. As one of the most hyped technologies of recent times, self-driving cars capture the imagination of many and attract great investments from some of the biggest companies in the world. Great as they are, at least in theory, I don’t believe they will be widely used on the streets in the foreseeable future. In this piece, I’ll explain the two chief reasons for my pessimism about the technology’s outlook. As I am not a developer, it wouldn’t be credible for me to discuss its feasibility from a technical standpoint; I’ll leave that to others who are more knowledgeable. My pessimism is largely based on logical deduction and what I have read on the topic.
Ethics, I believe, will be the biggest hindrance to the ubiquitous adoption of self-driving cars. Let’s first talk about a classic ethical dilemma called the trolley problem. Imagine you were the driver of a train that, on its current course, would kill five unfortunate track workers who were unaware that the train was coming. To avoid the collision, you could divert the train off the track, but doing so would kill one person nearby. What would you do?
Now let’s add a little twist to the scenario. Imagine that while you were strolling, you spotted the train coming and about to kill five workers on the track. In front of you stood a stranger. If you pushed that person onto the track, the train would kill the person and go off track, sparing the five workers. What would you do?
As self-driving cars are expected to replace us humans in operating vehicles and responding to every situation, they would have to deal with highly controversial, difficult situations like the two above. To do so, the cars would have to be programmed in advance by developers. The question, then, is whether developers can turn ethics into code. The task is monumentally challenging because there are countless scenarios that could occur in real life, and each scenario has many moving parts, any one of which, if changed, could warrant a different response. I don’t see how developers can account for such a gigantic number of possibilities.
Even if such a capability were feasible, the question turns to which standards should govern the ethics translated into code. We differ from one another in how we view an ethical situation and how we would handle it. Some would say that the collective good matters more, and hence killing one person would be, for lack of a better word, a better outcome than killing five. Others would disagree with that judgement, citing the indisputable value of human life and refusing to kill anyone on purpose. Which standards would developers use to translate ethics into code?
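To make the difficulty concrete, here is a deliberately naive sketch of what “ethics as code” could look like. Everything in it is hypothetical: no real autonomous-vehicle software works this way, and the function name and inputs are mine for illustration only.

```python
# A deliberately naive, purely utilitarian decision rule.
# Hypothetical illustration -- not how any real AV system is built.

def choose_action(outcomes: dict) -> str:
    """Pick the maneuver whose projected casualty count is lowest.

    `outcomes` maps each possible maneuver to the number of people
    that maneuver is projected to harm.
    """
    return min(outcomes, key=outcomes.get)

# The classic trolley setup: staying on course harms five people,
# swerving harms one. This rule always swerves.
print(choose_action({"stay": 5, "swerve": 1}))  # -> swerve
```

Note what the single line `min(...)` quietly does: it hard-codes the utilitarian standard from the paragraph above. Anyone who holds the opposing view, that no one may be killed on purpose regardless of the arithmetic, would reject this program outright. The controversy is not in writing the code; it is in deciding whose ethics the code should encode.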
Picture some ethically challenging scenarios of your own. Say a family member is badly injured and you are driving them to the hospital. Would you run a red light? Would you change lanes illegally to go faster? If the car were autonomous, how would the computer know when it is appropriate to break the rules? When would the ends justify the means?
To make matters worse, if an accident were caused by an autonomous car with a human inside, who would be liable? The car manufacturer or the car owner? If liability is not clearly defined, it is inconceivable that insurers would agree to cover the cars.
Even if autonomous cars are successfully built, would it benefit society as a whole to simply switch from human driving to self-driving? If 100 car owners traded their ordinary vehicles for autonomous counterparts, would that transition ease traffic and, as a result, save us time? Or would we still be stuck in traffic jams, the only difference being that our hands would be free to check our phones?
The only scenario that could truly benefit society is one in which most of us use autonomous public transportation. That way, there would be far fewer cars on the streets. Almost nobody would need to own a car, struggle to find a parking spot, or pay for fuel. Imagine a world in which you step out of the office and the public transportation system is smart enough to have a metro or bus take you home by itself along the most optimized route possible. Imagine how much public space would be reclaimed from car parking.
However, autonomous public transportation would require a complete overhaul of our current infrastructure, which seems highly unlikely to me. Implementation in a small area may happen, but it would take far more effort and time to scale.
Of course, the technology powering autonomous vehicles can well be applied in other industries, as has happened with space technology for years. But when it comes to autonomous vehicles themselves, I don’t see the mass adoption that many expect happening any time soon. While I remain excited to see what unfolds, unless a major breakthrough is unveiled, I am not convinced. Yet.