The most recent iteration of Google's self-driving car has no gas pedal, no brake, and not even a steering wheel. All that's left for the so-called driver to control are two buttons: one to start the car and one for emergency stops. Autonomous vehicles — cars that can control their own steering and speed — are expected by some engineering groups to account for up to 75% of vehicles on the road by 2040. But do people trust robot cars enough to let them take over at the wheel?
Psychological scientists Adam Waytz (Northwestern University), Joy Heafner (University of Connecticut), and Nicholas Epley (University of Chicago) found that one way to potentially improve the self-driving car experience is to make autonomous cars seem more human.
Simply giving self-driving cars in a driving simulator human-like qualities, as simple as a name and a gender, significantly increased people's trust in the capabilities of the vehicle.
“Technological advances blur the line between human and nonhuman, and this experiment suggests that blurring this line even further could increase users’ willingness to trust technology in place of humans,” Waytz and colleagues write in the Journal of Experimental Social Psychology.
For the study, 100 participants were divided into three driving conditions: normal, agentic, and anthropomorphic. In the normal condition, participants operated a normal vehicle in the driving simulator without any automatic driving features. In both the agentic and anthropomorphic conditions, participants “drove” a vehicle that autonomously controlled its own speed and steering. However, in the anthropomorphic condition, the vehicle had several humanizing features, including a name (Iris), a gender (female), and a human voice that provided instructions.
During the experiment, participants were hooked up to heart rate monitors to measure their physiological arousal and their behavioral responses were videotaped for later analysis.
After taking their respective vehicles for a spin in the driving simulator, participants answered a questionnaire asking them to rate their feelings of trust and affinity for the car they'd driven: how much they liked the vehicle, how enjoyable the drive was, and whether they trusted the vehicle to drive the next course safely. Participants also rated the human-like qualities of their car, such as how smart the car was and how well it could "feel" what was happening around it.
About 6 minutes into the second driving course, another car suddenly swerved in front of the drivers, causing an unavoidable crash. In the scenario, it was clear that the other vehicle was at fault.
The drivers then completed a final questionnaire in which they assessed how responsible they, their car, the people who designed the car, and the company that developed the car were for the accident.
As expected, those who drove vehicles with enhanced human-like features reported significantly better interactions with their vehicle compared to those in the other two conditions. Those in the anthropomorphic condition reported more trust in their vehicle, a more relaxed heart rate response during the accident scenario, and blamed their vehicle and related entities less for the accident.
The results indicate that people attribute more human-like mental capacities to machines that possess even basic anthropomorphic qualities. Essentially, making cars appear more human helps people believe that the car is capable of performing at human standards.
“The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently,” the researchers write.
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. doi:10.1016/j.jesp.2014.01.005