Self-Driving Cars Face Psychological Speed Bumps

The technology to put autonomous vehicles on the road in large numbers is available today. Manufacturers are pushing the accelerator: General Motors announced its readiness to mass-produce autonomous cars in mid-June. The US government is trying to keep pace, with the House Energy and Commerce Committee unanimously approving the SELF DRIVE Act earlier this month. The bill allows the National Highway Traffic Safety Administration (NHTSA) to create some nationwide autonomous vehicle standards and to grant road-testing permits to autonomous vehicle manufacturers.

Industry and regulators seem ready to embrace autonomous cars, but are consumers? Even after manufacturers demonstrate that these cars drive safely, what mental barriers will the public have to clear before accepting self-driving vehicles on a large scale? Researchers Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan have examined three mental roadblocks to adopting autonomous vehicles and offer solutions for each.

Ethical Dilemmas

Autonomous vehicles will bring safety improvements, but some vehicle collisions are inevitable. According to public sentiment research, many people are uncomfortable with a car figuring out whom to harm in a situation where a collision is unavoidable.

Despite our (over)confidence in our driving ability (Zell & Krizan, 2014) and our desire for control in life-or-death situations, human reflexes are slower than sensors and computers, which means that autonomous cars can brake earlier and steer more accurately than humans can. Car manufacturers also have the advantage of considering ethical dilemmas well ahead of time: a team of people can carefully think through each scenario, instead of leaving it to a driver's knee-jerk discretion.

Shariff, Bonnefon, and Rahwan suggest two pathways for addressing these concerns. First, they advise highlighting the absolute benefits of autonomous vehicles, such as how many lives they will save overall by replacing human-driven cars.

In addition, they suggest using the benefits of autonomous cars to harness people’s desire to signal virtue. Driving an autonomous car can stand as a symbol of one’s commitment to safety and efficiency. The authors offer the Toyota Prius as an example of this type of virtue signaling, pointing out that the car’s distinctive shape works as a selling point: Prius cars stand out on the road, and they symbolize the owner’s commitment to the environment. Framing autonomous cars as vehicles of reduced human error and fewer accidents may not specifically address people’s concerns about cars being in control in crash scenarios, but it could redirect attention to the benefits of the new technology.

Reactions to Accidents

The first fatality involving Tesla’s semi-autonomous ‘Autopilot’ mode happened in the spring of 2016, and news outlets gave it more than a little attention (e.g., Thompson, 2017). A traffic death involving an autonomous vehicle is news because it is new, not because it is common. There will, no doubt, be more accidents and even more news coverage. Human fears often track news coverage rather than statistics, and that fear may make it difficult to hand a computer the wheel even when doing so is safer.

Shariff and co-authors suggest anticipating the problem with frank communication from car manufacturers and policymakers alike. These advocates can prepare the public for the inevitable accidents involving autonomous cars and remind people of the overall safety improvement these cars bring. Developers and manufacturers can communicate openly about algorithmic improvements to give the public a sense of constant improvement:

“Autonomous vehicles are better portrayed as being perfected, not as being perfect,” Shariff, Bonnefon, and Rahwan write.

Politicians also have a responsibility to address the public’s concerns, so they may need to take steps to reassure their voters of vehicle safety. This can include communications reminding the public about autonomous car safety, as well as relevant policy action. Politicians may opt for what the researchers call ‘fear placebos’: high-visibility, low-cost measures that reassure the public by showing concern while not hindering the overall adoption and growth of autonomous vehicles.

Opacity of Algorithms

Drivers make dozens of decisions each minute they are on the road. So do autonomous vehicles. Some autonomous vehicle ‘training’ happens through machine learning, a process by which computers develop increasingly sophisticated behavior as data comes in, often guided by human feedback on whether the machine is demonstrating the desired behavior. The actual decision process is multivariate and complex, and human passengers may find it hard to trust an autonomous vehicle that makes life-or-death decisions when they have no insight into how those decisions are made. And if manufacturers actually provided a detailed report of all the calculations that go into driving down a residential street, it would likely overwhelm the average user to the point of anxiety.
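To make the opacity concrete, here is a minimal, hypothetical sketch of the kind of feedback-driven learning loop described above: a toy ‘brake or continue’ classifier whose weights are nudged by human labels. The feature names, data, and update rule are illustrative assumptions, not any manufacturer’s actual system.

```python
import random

# Each observation: (sensor features, human label: 1 = brake, 0 = continue).
# Feature order (hypothetical): [obstacle_distance_m, closing_speed_mps, pedestrian_likelihood]
training_data = [
    ([5.0, 12.0, 0.9], 1),
    ([40.0, 3.0, 0.1], 0),
    ([12.0, 8.0, 0.6], 1),
    ([60.0, 1.0, 0.0], 0),
]

weights = [0.0, 0.0, 0.0]  # one learned weight per sensor feature
bias = 0.0
learning_rate = 0.01

def predict(features):
    """The 'decision' is an opaque weighted sum -- no single human-readable rule."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# Training loop: nudge the weights whenever the model disagrees with the
# human feedback (the perceptron update rule in its simplest form).
for _ in range(1000):
    features, label = random.choice(training_data)
    error = label - predict(features)  # -1, 0, or +1
    if error:
        bias += learning_rate * error
        for i, x in enumerate(features):
            weights[i] += learning_rate * error * x

# After training, the model's 'reasoning' is just these numbers.
print("learned weights:", weights, "bias:", bias)
print("brake at 10 m, closing 10 m/s, 50% pedestrian?", predict([10.0, 10.0, 0.5]))
```

Even in this toy model, explaining ‘why’ the car braked means narrating three learned coefficients; production systems learn millions, which is the gap the communication strategies discussed below would need to bridge.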

This particular issue may be thornier to fix than the others. Shariff and colleagues note that researchers will need to study the best communication techniques surrounding autonomous cars. Certain metaphors and mental models will no doubt ‘click’ with the public better than others, and finding those will be in the best interest of drivers, car companies, engineers, and policymakers alike.

A New Direction

While our roads and our customs for driving on them evolved over nearly a century, the standards and customs surrounding autonomous vehicles will likely develop on a compressed time scale as technology advances. Luckily, the authors point out, we have an opportunity to go about this transition deliberately, and doing so can ease the growing pains of this new technology.

References

SELF DRIVE Act, H.R. 3388, 115th Cong. (2017).

Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour. doi:10.1038/s41562-017-0202-6

Thompson, C. (2017, June 20). New details about the fatal Tesla Autopilot crash reveal the driver’s last minutes. Business Insider. http://www.businessinsider.com/details-about-the-fatal-tesla-autopilot-accident-released-2017-6

Zell, E., & Krizan, Z. (2014). Do people have insight into their abilities? A metasynthesis. Perspectives on Psychological Science, 9, 111-125. doi:10.1177/1745691613518075


Comments

A couple of years ago, while I was driving, I heard a Senior Vice President of GM discussing self-driving cars and his expectations and visions. His main premise was that it is coming and we, the people, will have no choice. He said that the automakers will come down on federal and state legislatures and “sell” them on the idea by pointing to drastic reductions in human deaths. To him it was a no-brainer. Then he predicted that almost no one will own a vehicle driven on public roads. One would simply call for a car, which would be directed to your location. The car would be programmed to proceed to the nearest freeway or major artery and would join up with people and cars going in the same direction. The car would exit at the nearest point and drive you to your destination. People who still wanted to drive their own cars would only be able to do so at private tracks in rural areas. I was impressed by his logic and rather matter-of-fact presentation.
– John from http://www.insurancepanda.com

Who is the author of this article?

