Self-driving cars may not be a positive advancement

Dennis Rutter

In an article titled “Why Self-Driving Cars Must Be Programmed to Kill” from technologyreview.com, the author explores the ethical dilemmas surrounding how a driverless car should be programmed to react in scenarios where a fatal accident is unavoidable. For instance, consider a situation where a car, perhaps because of an unexpected crash directly ahead, is on course to hit 10 innocent bystanders. The driverless car can swerve out of the way of the 10 people and crash into a light post, guaranteeing the death of the car’s lone occupant but saving the 10 bystanders. On the other hand, the car can stay its course and hit the 10 bystanders, leading to the deaths of 10 people but saving the occupant of the car.

The article cites a study in which researchers interviewed people about which programming scenario they would prefer. People tended to prefer the utilitarian programming in which the car would take the route which would minimize the loss of life. However, when the scenario was posed such that the interviewee was in the car, people flip-flopped; they would prefer the course that protects them but results in a higher net loss of life.
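The two programs the study contrasts amount to two different decision rules. A toy sketch of that contrast, where every function name and number is a hypothetical illustration rather than anything from the article:

```python
# Toy model of the two accident-management policies described above.
# "swerve" means the car sacrifices its occupant; "stay" means it hits the bystanders.

def utilitarian_choice(occupant_deaths: int, bystander_deaths: int) -> str:
    """Minimize total loss of life, giving the occupant no extra weight."""
    if occupant_deaths <= bystander_deaths:
        return "swerve"  # fewer deaths result from sacrificing the occupant(s)
    return "stay"

def self_interested_choice(occupant_deaths: int, bystander_deaths: int) -> str:
    """Always protect the occupant, regardless of the total death count."""
    return "stay" if occupant_deaths > 0 else "swerve"

# The scenario above: swerving kills 1 occupant, staying kills 10 bystanders.
print(utilitarian_choice(1, 10))      # swerve
print(self_interested_choice(1, 10))  # stay
```

The flip-flop the researchers observed is simply people preferring that other cars run the first function while their own car runs the second.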

While the results of this study may not be entirely shocking, they are intriguing, because they give a glimpse into how cognitive biases can stifle the introduction of inherently better options for society. Every year, an estimated 1.3 million people are killed in car accidents. Driverless cars, however, essentially guarantee the lowering of that figure in the long term by eliminating accidents caused by human error. Overall societal utility (i.e., fewer deaths) compounds as more driverless cars hit the roads through network effects; more driverless cars operating on the same platform and communicating with one another results in an inherently safer driving environment. Whether a car is running the utilitarian program or the self-interested one, the driving environment is safer regardless, because these cars are so much better equipped for navigating and adapting to uncertainty.

The dilemma of what kind of program to choose, however, is a scenario that has caused a lot of headaches among producers of driverless cars. The choice to implement a program in which loss of life is guaranteed clearly requires very deliberate and close attention. Specifically, it forces one to ask what exactly the point is of automating the act of driving. While answers to this question may vary, the true reason is that machines are smarter than we are. They can react to stimuli faster, more efficiently, and from many more sources than we are equipped to. These capabilities could lower that 1.3 million figure dramatically if driverless cars gain widespread adoption.

So essentially, we want to automate driving because it saves lives. But whose lives do we want to save? The utilitarian programming would not overweight the lives of its occupants relative to the bystanders. In the aggregate, this would make the world safer. But who wants to buy an item that makes his or her personal safety less than ideal? Sure, there are some benevolent consumers out there who may be motivated to buy a driverless car to benefit those around them. However, a utilitarian programming feature would most likely detract from the appeal of switching to driverless.

Paradoxically, though, the full benefits of automated driving can only be realized if there are a lot of people using these vehicles. If demand for driverless cars were held low because the utilitarian programming for unavoidable accidents led consumers to keep their old cars, the benefits of driverless cars would fall short of their optimal levels.

And yet, isn’t it unconscionable to allow the sale of cars that don’t make the world as safe as they possibly can? While driverless cars running on a utilitarian accident-management program may be a less profitable endeavor, it would be completely irresponsible if controls were not in place to bar less than ideal outcomes in the aggregate.

The best strategy, then, would be to begin selling the cars with the self-interested programming. Because cars with this type of programming would be more desirable initially, they would fuel early growth of the industry. Because a driverless car with either of these two types of program provides higher net utility than non-automated driving on a per-car basis, simply getting automated cars on the roads would benefit society. Then, at the point where enough cars are on the road that the network benefits would reach a predetermined level if utilitarian programming were in use, the programming could switch from the self-interested accident-management system to the utilitarian one. This strategy works most effectively because it stimulates demand fastest in the infancy of the driverless car market and then switches over to the type of programming that leads to the highest positive externalities in the long term.
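The staged rollout above can be sketched as a simple threshold rule. The threshold value and the names here are hypothetical placeholders for whatever "predetermined level" regulators or manufacturers might actually settle on:

```python
# Toy sketch of the staged rollout strategy: ship cars with self-interested
# programming, then switch the fleet to utilitarian programming once adoption
# crosses a predetermined threshold. The 30% figure is purely illustrative.

ADOPTION_THRESHOLD = 0.30  # hypothetical fleet share at which network benefits justify switching

def active_policy(adoption_rate: float) -> str:
    """Pick the accident-management policy based on market penetration."""
    if adoption_rate >= ADOPTION_THRESHOLD:
        return "utilitarian"     # maximize aggregate safety once the network is large
    return "self-interested"     # stimulate early demand while the market is young

print(active_policy(0.05))  # early market: self-interested
print(active_policy(0.45))  # mature market: utilitarian
```

The design choice worth noting is that the switch is one-way and fleet-wide: letting individual buyers opt out would recreate the very demand problem the staged approach is meant to solve.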

Whatever strategy is used to roll the cars out, just remember that you’ll always be safer when your car drives itself.