Believing That Others Understand Helps Us Feel That We Do, Too
Our sense of what we know about something is increased when we learn that others around us understand it, according to new research published in Psychological Science, a journal of the Association for Psychological Science. The findings are consistent with the idea of a “community of knowledge” in which people implicitly rely on others to harbor needed expertise. Otherwise everyone would have to be omniscient to get by.
“We think collaboratively,” said lead author Steven Sloman, professor of cognitive, linguistic and psychological sciences at Brown University. “It implies that people have to live in communities in order to succeed, in order to really make use of our mental capabilities. We just can’t do it all as individuals.”
In four web-based experiments involving a total of nearly 700 volunteers, Sloman and corresponding author Nathaniel Rabb of Boston College presented several fake but plausible scientific phenomena with only cursory descriptions and no explanation.
Across several different experimental conditions, volunteers proved more likely to give a higher rating of their understanding of how the phenomena worked if they were told that "scientists" understood them. To be clear, with no actual explanation to go on, most volunteers did not feel they fully understood the phenomena, which included a newly discovered kind of glowing rock and a rare weather system with helium rain. But what the study showed across its iterations is that assurance that others understood elevated participants' sense of understanding to a measurable degree.
“Understanding judgments were generally low, but consistently higher when the only individuals who could conceivably understand the phenomena do understand them,” wrote Sloman and Rabb, a Brown alumnus, in the journal. “The results suggest that the existence of a community of knowledge creates the impression of understanding in oneself.”
The first experiment captured that fundamental idea. Sixty-nine participants read about phenomena and were told either that scientists understood them or that they did not. On a scale of 1 to 7 rating degree of understanding, those told that scientists understood the phenomena averaged 2.42, while those told that scientists didn't understand averaged 1.79.
In the second experiment, Sloman and Rabb added a twist for 106 new participants, introducing a novel condition: Sometimes scientists couldn't share their knowledge because it was a government secret. The goal was to determine whether the knowledge of others has to be accessible to bolster one's own sense of understanding, or whether merely knowing that someone else understands is enough.
Access mattered. Reported understanding was highest when scientists understood and weren't restricted (1.93), but notably lower when scientists didn't understand (1.63) or when they understood but couldn't share (1.77). It's not quite a community of knowledge, the results imply, if knowledge can't be communicated.
In experiment three, Sloman and Rabb investigated alternative explanations for their results. Could it be, for instance, that people simply feel implicit social pressure to say they understand when they perceive that others do? When asked to describe how well they understand the phenomenon, might they instead be confusedly rating how understandable the phenomenon is?
This time, 244 people participated. Some were cued that phenomena of the kind they were seeing were easy to understand, while others were cued that they were hard to understand. When it came time to rate their understanding, they were also asked to rate how understandable the phenomena were. When the phenomena were described as complex, the researchers reasoned, people should feel less social pressure to claim they understood than if they were described as simple. And if people were actually judging understandability, their judgments of understanding should reflect this complexity.
But that's not what happened. Whether a phenomenon was billed as easy or hard to understand had no effect on whether people tethered their stated degree of understanding to the perceived understanding of others. In both conditions, the data showed, people remained just as likely to feel they understood a phenomenon better when they were told that scientists did.
Finally, in experiment four, another 257 volunteers provided further insight. To ensure that people weren't simply mistaking the cursory descriptions for explanations, the researchers both pared the descriptions down further and explicitly reminded readers that a description is not an explanation.
Nevertheless, even with the most overt clue yet that they weren’t actually being informed, people still conveyed a significantly greater degree of understanding (2.10 vs. 1.90) when they were assured that scientists were on top of things.
To Sloman, the real scientific phenomenon on display — that people depend in part on others' expertise for their own sense of understanding — is a corollary of a division of cognitive labor already evident across society: Some people are lawyers, while others are carpenters and still others can troubleshoot a car transmission. The new discovery in the paper is that we are so dependent on the community of knowledge that it can lead us to believe we understand something a little better even when we don't understand it at all.
Sloman explores this and other surprising aspects of cognition in the upcoming book, “The Knowledge Illusion: Why We Never Think Alone,” co-authored with his colleague and former student, Brown alumnus Philip Fernbach, now at the University of Colorado.
Sloman and Rabb’s research was supported by the Thrive Center for Human Development, the Varieties of Understanding Project at Fordham University and the John Templeton Foundation.
All data and materials have been made publicly available via the Open Science Framework and can be accessed at https://osf.io/24kvu/. The complete Open Practices Disclosure for this article can be found at http://pss.sagepub.com/content/by/supplemental-data. This article has received the badges for Open Data and Open Materials. More information about the Open Practices badges can be found at https://osf.io/tvyxz/wiki/1.%20View%20the%20Badges/ and http://pss.sagepub.com/content/25/1/3.full.