Every year more car manufacturers are including automated features in their vehicles, ranging from adaptive cruise control to automatic parallel parking. Several companies, perhaps most notably Google, are already well on the road to developing fully self-driving cars, claiming that autonomous vehicles are safer than human drivers, or at least will be very soon.
While some have touted the benefits of self-driving cars, including reductions in traffic, pollution, and traffic injuries, the average person will have to be convinced that these smart machines are trustworthy before handing over the keys.
In a new study, psychological scientists Frank Verberne, Jaap Ham, and Cees Midden of Eindhoven University of Technology evaluated whether giving this complex technology a more human face, in the form of a virtual driving agent, would help increase people’s trust of the smart driving system.
Similarity increases trust between individuals, and previous research has shown that signals of similarity can even increase people’s feelings of trust towards objects. The researchers suspected that increasing the similarity between a person and the virtual agent of a self-driving car would increase perceptions of the virtual agent as trustworthy, likeable, and competent.
“Just as similarity between humans increases trust in another human, similarity also increases trust in a virtual agent. When such an agent is presented as a virtual driver in a self-driving car, it could possibly enhance the trust people have in such a car,” the researchers write in Human Factors: The Journal of the Human Factors and Ergonomics Society.
For the study, 111 participants interacted with a virtual driving agent called Bob. Half of the participants interacted with a version of Bob whose face, head movements, and driving goals (e.g., comfort, speed) were customized to match their own: a photo of each participant's face was digitally morphed with a default digital male face, producing a face that contained 50% of the shape and texture of the participant's face and 50% of the default face. The other half interacted with a version of Bob that was not manipulated to resemble them.
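The study used dedicated face-morphing software; as a rough illustration of the 50/50 mix, a naive cross-dissolve of two aligned grayscale images can be computed by averaging pixel intensities. (Real face morphing also interpolates facial landmark positions, so shape blends along with texture; the function and data below are illustrative, not the study's materials.)

```python
def blend_faces(participant_px, default_px, alpha=0.5):
    """Naive cross-dissolve of two aligned grayscale images, given as
    nested lists of pixel intensities (0-255).

    alpha=0.5 gives an equal 50/50 mix, loosely mirroring the study's
    half-participant, half-default morph. True morphing software also
    warps facial landmarks (shape), not just pixel texture.
    """
    return [
        [round(alpha * p + (1 - alpha) * d) for p, d in zip(prow, drow)]
        for prow, drow in zip(participant_px, default_px)
    ]

# Tiny synthetic "images" to show the arithmetic:
participant = [[200, 200], [200, 200]]   # stand-in for the participant photo
default_face = [[100, 100], [100, 100]]  # stand-in for the default male face
print(blend_faces(participant, default_face))  # [[150, 150], [150, 150]]
```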
Trust in Bob was measured indirectly with an investment game in which participants had to choose how many credits to give to Bob. Every credit given was tripled, and Bob would then decide how many credits to give back to the participant. The researchers’ assumption was that the more the participants trusted Bob, the more credits they would be willing to give him in the game.
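The mechanic described above is a standard economic trust game, and one round of it can be sketched as follows. The 10-credit endowment and the payoff bookkeeping are illustrative assumptions, not figures from the study; only the tripling rule comes from the article.

```python
def trust_game_round(invested, returned_by_bob, endowment=10, multiplier=3):
    """One round of the investment (trust) game.

    The participant sends `invested` credits to Bob; the amount is
    tripled in transit (per the study's rule), and Bob then decides
    how many credits to send back. The 10-credit endowment and the
    payoff accounting here are illustrative assumptions.
    """
    assert 0 <= invested <= endowment
    pot = invested * multiplier            # every credit given is tripled
    assert 0 <= returned_by_bob <= pot
    participant_payoff = endowment - invested + returned_by_bob
    bob_payoff = pot - returned_by_bob
    return participant_payoff, bob_payoff

# A participant who trusts Bob with 8 of 10 credits, facing a Bob who
# returns half of the tripled pot:
print(trust_game_round(invested=8, returned_by_bob=12))  # (14, 12)
```

The more a participant trusts Bob, the more it pays to invest: trusting fully only beats keeping the endowment if Bob returns more than what was sent.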
As another measure of trust, participants planned routes in a driving simulator with Bob's help, allowed him to take the wheel, and then indicated whether they trusted him to take over driving the car.
In the investment game, there was no significant difference between how much people were willing to invest with a similar Bob compared with the default version; but when it came to trust behind the wheel, greater similarity to Bob resulted in more trust.
Results indicated that drivers who perceived Bob to look, act, and think like they did were much more likely to trust his abilities behind the wheel and expressed less concern over their physical safety. This effect was mediated by perceived similarity, suggesting that trust in the similar Bob was increased because participants perceived him to be similar to them.
“Smart cars have the potential to decrease congestion, save fuel, and, most important, save lives. However, this potential can only be fully realized when human drivers trust smart cars enough to hand over driving control,” Verberne and colleagues conclude.
Verberne, F. M., Ham, J., & Midden, C. J. (2015). Trusting a virtual driver that looks, acts, and thinks like you. Human Factors: The Journal of the Human Factors and Ergonomics Society. doi: 10.1177/0018720815580749