This month’s column is written by two astute observers of the quirks, ironies, and inconsistencies of human behavior in the wild, who bring those insights into the laboratory in inventive ways. One is from medicine and one from social psychology. The fact that both are Canadians may or may not be incidental. Each has a knack for finding captivating problems that have both practical significance and theoretical importance. Among other appointments, Don Redelmeier is Professor in the Department of Medicine, Canada Research Chair in Medical Decision Sciences, and Senior Scientist in the Institute for Clinical Evaluative Sciences at the University of Toronto and Sunnybrook Health Sciences Centre. APS William James Fellow Lee Ross is the Stanford Federal Credit Union Professor of Humanities and Behavioral Sciences and a founding member of the Stanford Center on International Conflict and Negotiation. Both have collaborated widely, including with each other. Here, they introduce the Objectivity Illusion, a bias that offers insight into human behavior in general and medical practice in particular.
-Barbara Tversky, APS President
Insights into pitfalls in judgment and decision-making are essential for the practice of medicine. However, only the most exceptional physicians recognize their own personal biases and blind spots. More typically, they are like most humans in believing that they see objects, events, or issues “as they really are” and, accordingly, that others who see things differently are mistaken.1,2 This illusion of personal objectivity3 reflects the implicit conviction of a one-to-one correspondence between the perceived properties and the real nature of an object or event. For patients, such naïve realism means a world of red apples, loud sounds, and solid chairs.4 For practitioners, it means a world of red rashes, loud murmurs, and solid lymph nodes. However, a lymph node that feels normal to one physician may seem suspiciously enlarged and hard to another physician, with a resulting disagreement about the indications for a lymph node biopsy. A research study supporting a new drug or procedure may seem similarly convincing to one physician but flawed to another.
Convictions about whose perceptions are more closely attuned to reality can be a source of endless interpersonal friction. Spouses, for example, may disagree about appropriate thermostat settings, with one perceiving the room as too cold while the other finds the temperature just right. Moreover, each attributes the other’s perceptions to some pathology or idiosyncrasy.
Medical experts encounter similar conflicts in discussions about alcohol consumption, diet, exercise, weight, sleep, or advanced-care directives. They may disagree about appropriate bedside manner (such as the balance between being honest with patients and giving them hope), to the point that they question each other’s competence or behavior.
Further Consequences of the Objectivity Illusion
A well-documented consequence of the objectivity illusion is the false consensus effect.5,6 People rarely undertake formal surveys to assess the extent to which their judgments reflect a current community consensus. At most, they access the views of a few friendly peers or simply presume that reasonable people generally agree. Physicians, we suggest, succumb to this pitfall when they overestimate the extent to which colleagues share their views, especially colleagues who have different backgrounds, clinical training, or professional affiliations. As a result, a physician may too readily assume that a medical consensus exists for his or her own practices and too quickly dismiss alternative practices as atypical or uninformed.
Psychological science has demonstrated how individuals fail to give due weight to assessments different from their own. Such underweighting of peer input has been shown in the case of educated adults estimating economic facts, lawyers estimating awards in tort cases, and ballroom dancers estimating their marks from judges.9,10,11,12,13 In each case, the participants achieved less accuracy than they could have by simply averaging their own and their partner’s estimates. The same underweighting of collegial views and the same potential benefit from assigning those views more weight, we suggest, may apply in physician assessments such as in estimating the likelihood of an individual patient’s recovery.
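The arithmetic behind this benefit is easy to demonstrate. The following Python sketch uses invented numbers, not data from the cited studies: it simulates two estimators whose errors are independent (here given opposite biases for illustration) and compares the accuracy of one estimator alone with the accuracy of a two-person average.

```python
import random

def dyad_simulation(n=10_000, truth=100.0, bias=5.0, noise=15.0, seed=0):
    """Compare the mean absolute error of a single estimate with the
    error of a two-person average, assuming each partner's estimate
    carries independent random noise plus an individual bias."""
    rng = random.Random(seed)
    own_err = avg_err = 0.0
    for _ in range(n):
        a = truth + bias + rng.gauss(0, noise)   # own estimate
        b = truth - bias + rng.gauss(0, noise)   # colleague's estimate
        own_err += abs(a - truth)
        avg_err += abs((a + b) / 2 - truth)
    return own_err / n, avg_err / n

own, averaged = dyad_simulation()
print(f"mean error alone: {own:.1f}, averaged with a colleague: {averaged:.1f}")
```

Because the two errors are independent, averaging cancels some noise (and here some bias as well), so the averaged estimate is reliably closer to the truth than either estimate alone, just as the dyad studies found.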
Perhaps the most noteworthy manifestation of the objectivity illusion occurs in the attributions made following a disagreement. The more discrepant one’s own views are from those of a peer, the more the discrepancy tends to be attributed to cognitive or motivational biases rather than sound reasoning.7 We believe the same tendency may occur in the attributions physicians make about each other’s judgments regarding contentious issues such as the degree of blood-sugar control appropriate for diabetic patients, the advisability of frequent mammography for older women, or the likelihood that a particular intern will become an outstanding physician. The objectivity illusion may be particularly rampant in the absence of objective data.
Perceptions of partisan bias are yet another regular manifestation of the objectivity illusion. People on opposing political sides routinely complain that the mainstream media is biased in favor of the other side.8 Similarly, medical professionals commonly allege bias in negotiations over fees for specific services, in debates about the evidence linking skin diseases to environmental toxins, and in disputes over the merits of various nontraditional treatments. Physicians on opposite sides of these debates feel that the other side’s flawed arguments are given undue recognition while evidence supporting their own position receives unduly harsh scrutiny. Third-party mediation is often a thankless task and provides no simple solution to the objectivity illusion.
Implications for Better Practice
The illusion whereby a stick in the water appears to be bent due to refraction can be eliminated by removing the stick from the water; however, there is no analogous strategy for overcoming the objectivity illusion in medical judgments because clinical practice is an immersive experience. Although technology can sometimes provide useful objective data, physicians cannot fully avoid confirmation biases, overweighting of vivid personal experiences, or the other biases that distort all human decision-making. Moreover, physicians cannot avoid the conviction that their own assessments reflect sound judgment and experience.
The first piece of advice we would offer physicians is to at least pause and reconsider a quick intuition. In the words of psychological scientist and Nobel laureate Daniel Kahneman, “think slow.”32 Specifically, consider alternative assessments, including those of colleagues who disagree. When delays in treatment could be lethal, physicians must rely on immediate impressions. More typically, however, there is time for consultation, and it is a good idea to ask a colleague for feedback. Thinking slow may also involve reflecting more mindfully on the bases for one’s own assessments. In this regard, we would urge physicians to learn more about classic pitfalls in reasoning and stay updated on research that challenges conventional wisdom.33, 34
A separate collegial conversation may also provide a second opportunity to consider situational influences on undesirable behavior that is too often attributed to dispositional flaws. Examples include gaps in medication adherence by patients or hand-washing practices by physicians. Consideration of nudges (e.g., checklists, reminders, appropriate defaults) that might help both patients and physicians translate good intentions into good actions may be another activity in which two or more heads are likely to be better than one.35 The objectivity illusion can be a particularly beguiling pitfall because many patient cases have some objective features, yet the complete presentation leaves a great deal more margin for interpretation.
Disagreements, whether on single cases or general issues, are unavoidable, but we can suggest a third tactic used by dispute resolution professionals to reduce friction and disparaging attributions. This tactic, as employed in negotiations in Northern Ireland and the Middle East, obliges opposing partisans to present the position of the other side until each party is satisfied that the other has faithfully captured its position. This procedure initially proves difficult. Yet when the two sides finally are satisfied with the efforts of their counterpart, greater trust ensues and common ground may materialize. Discussions of medical issues are rarely as hostile as exchanges in other social conflict situations, but some variant of this tactic is worth trying in fights over operating-room space or other heated disagreements in medicine.
Insights from psychological science, together with collaboration between physicians and researchers, may help advance both psychological science and medical practice. We believe the objectivity illusion and other pitfalls identified by social psychology are directly relevant to physicians. Ultimately, the gains could improve professional collaboration for better patient outcomes.
1Pronin, E., Lin, D., & Ross, L. (2002). Perceptions of bias in self and others. Personality and Social Psychology Bulletin, 28, 369–381.
2Ehrlinger, J., Gilovich, T., & Ross, L. (2005). Peering into the bias blind spot: People’s assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31, 680–692.
3Gilovich, T., & Ross, L. (2015). The wisest one in the room. New York, NY: Free Press.
4Ross, L., & Ward, A. (1996). Naive realism in everyday life: Implications for social conflict and misunderstanding. In T. Brown, E. Reed, & E. Turiel (Eds.), The Jean Piaget Symposium Series: Values and knowledge (pp. 103–135). Hillsdale, NJ: Erlbaum.
5Ross, L., Greene, D., & House, P. (1977). The false consensus effect: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279–301.
6Marks, G., & Miller, N. (1987). Ten years of research on the false-consensus effect: An empirical and theoretical review. Psychological Bulletin, 102, 72–90.
7Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111, 781–799.
8Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile media phenomenon: Biased perception and perceptions of media bias in coverage of the Beirut Massacre. Journal of Personality and Social Psychology, 49, 577–585.
9Liberman, V., Minson, J. A., Bryan, C. J., & Ross, L. (2012). Naïve realism and capturing the “wisdom of dyads.” Journal of Experimental Social Psychology, 48, 507–512.
10Soll, J. B., & Larrick, R. P. (2009). Strategies for revising judgment: How (and how well) people use others’ opinions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 780–805.
11Minson, J., Liberman, V., & Ross, L. (2011). Two to tango: The effects of collaborative experience and disagreement on dyadic judgment. Personality and Social Psychology Bulletin, 37, 1325–1338.
12Harvey, N., & Fischer, I. (1997). Taking advice: Accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes, 70, 117–133.
13Jacobson, J., Dobbs-Marsh, J., Liberman, V., & Minson, J. A. (2011). Predicting civil jury verdicts: How attorneys use (and misuse) a second opinion. Journal of Empirical Legal Studies, 8, 99–119.
14Jones, E. E., & Nisbett, R. E. (1972). The actor and the observer: Divergent perceptions of the causes of behavior. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior. Morristown, NJ: General Learning Press.
15Nisbett, R. E., Caputo, C., Legant, P., & Marecek, J. (1973). Behavior as seen by the actor and as seen by the observer. Journal of Personality and Social Psychology, 27, 154–164.
16Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
17Ross, L. (1977). The intuitive psychologist and his shortcomings. In L. Berkowitz (Ed.), Advances in experimental social psychology, vol. 10 (pp. 173–220). New York, NY: Academic Press.
18Einhorn, H., & Hogarth, R. (1978). Confidence in judgment: Persistence of the illusion of validity. Psychological Review, 85, 395–416.
19Ross, L., & Lepper, M. (1980). The perseverance of beliefs: Empirical and normative considerations. In R. A. Shweder & D. Fiske (Eds.), Fallible judgment in behavioral research (pp. 17–36). San Francisco, CA: Jossey-Bass.
20Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
21Bacon, F. (1960). The new organon and related writings. New York, NY: Liberal Arts Press. (Originally published 1620)
22Lord, C., Ross, L., & Lepper, M. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098–2109.
23Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
24Miller, D., & Ross, M. (1975). Self-serving biases in the attribution of causality: Fact or fiction? Psychological Bulletin, 82, 213–225.
25Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: “Seizing” and “freezing.” Psychological Review, 103, 263–283.
26Tavris, C., & Aronson, E. (2007). Mistakes were made (but not by me): Why we justify foolish beliefs, bad decisions, and hurtful acts. Orlando, FL: Harcourt.
27Banja, J. D. (2004). Medical errors and medical narcissism. Sudbury, MA: Bartlett & Jones.
28Taylor, S. E. (1983). Adjustment to threatening events: A theory of cognitive adaptation. American Psychologist, 38, 1161–1173.
29Taylor, S. E., Wood, J. V., & Lichtman, R. R. (1983). It could be worse: Selective evaluation as a response to victimization. Journal of Social Issues, 39, 19–40.
30Taylor, S. E., Lichtman, R. R., & Wood, J.V. (1984). Attribution, beliefs about control, and adjustment to breast cancer. Journal of Personality and Social Psychology, 46, 489–502.
31Taylor, S. E., & Lobel, M. (1989). Social comparison activity under threat: Downward evaluation and upward contacts. Psychological Review, 96, 569–575.
32Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.
33Redelmeier, D. A., & Dickinson, V. M. (2011). Determining whether a patient is feeling better: Pitfalls from the science of human perception. Journal of General Internal Medicine, 26, 900–906.
34Redelmeier, D. A., & Dickinson, V. M. (2012). Judging whether a patient is actually improving: More pitfalls from the science of human perception. Journal of General Internal Medicine, 27, 1195–1199.
35Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.