People Are Overly Confident in Their Own Knowledge, Despite Errors

Overprecision, excessive confidence in the accuracy of our beliefs, can have profound consequences: it inflates investors' valuations of their investments, leads physicians to settle on a diagnosis too quickly, and can even make people intolerant of dissenting views. New research confirms that overprecision is a common and robust form of overconfidence: people hold their estimates with more certainty than their actual accuracy warrants.

The research, conducted by Albert Mannes of The Wharton School of the University of Pennsylvania and Don Moore of the Haas School of Business at the University of California, Berkeley, revealed that the more confident participants were in their estimates of an uncertain quantity, the less they adjusted those estimates in response to feedback about their accuracy and to the costs of being wrong.

“The findings suggest that people are too confident in what they know and underestimate what they don’t know,” says Mannes.

The new findings are published in Psychological Science, a journal of the Association for Psychological Science.

Research investigating overprecision typically involves asking people to come up with a 90% confidence interval around a numerical estimate — such as the length of the Nile River — but this doesn’t always faithfully reflect the judgments we have to make in everyday life. We know, for example, that arriving 15 minutes late for a business meeting is not the same as arriving 15 minutes early, and that we ought to err on the side of arriving early.

Mannes and Moore designed three studies to account for the asymmetric nature of many everyday judgments. Participants estimated the local high temperature on randomly selected days, and accurate estimates were rewarded with lottery tickets toward a prize. On some trials, participants earned tickets if their estimates were correct or close to the actual temperature, whether above or below it; on others, they earned tickets only for estimates that were correct or too high; and on still others, only for estimates that were correct or too low.

The results showed that participants adjusted their estimates in the direction of the anticipated payoff after receiving feedback about their accuracy, just as Mannes and Moore expected.

But they didn’t adjust their estimates as much as they should have given their actual knowledge of local temperatures, suggesting that they were overly confident in their own powers of estimation.
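One way to see why overprecision blunts adjustment is a standard asymmetric-loss model; this is a stand-in for illustration, not the payoff schedule Mannes and Moore actually used. If each degree of overestimation costs more than each degree of underestimation, the best estimate is a low quantile of your believed error distribution, so how far you shift depends on how uncertain you think you are. In the Python sketch below, the costs C_OVER and C_UNDER, the error spreads SIGMA_TRUE and SIGMA_BELIEF, and the Normal error model are all assumptions; it shows that a judge who understates her own error shifts too little and earns less as a result.

import numpy as np
from scipy.stats import norm

# Hypothetical illustration only; the study's real payoff schedule differed.
# Pinball loss: each degree of overestimation costs C_OVER, each degree of
# underestimation costs C_UNDER.
C_OVER, C_UNDER = 4.0, 1.0    # assumed costs per degree of error
SIGMA_TRUE = 5.0              # judge's actual forecast error SD (assumed)
SIGMA_BELIEF = 2.0            # an overprecise judge's believed error SD

def optimal_shift(sigma):
    """Optimal shift from one's best guess: the q-quantile of a
    Normal(0, sigma) error belief, q = C_UNDER / (C_UNDER + C_OVER)."""
    return sigma * norm.ppf(C_UNDER / (C_UNDER + C_OVER))

def mean_cost(shift, n=200_000, seed=0):
    """Average realized cost when true errors have SD SIGMA_TRUE."""
    err = np.random.default_rng(seed).normal(0.0, SIGMA_TRUE, n)
    over = np.maximum(shift - err, 0.0)   # degrees guessed too high
    under = np.maximum(err - shift, 0.0)  # degrees guessed too low
    return np.mean(C_OVER * over + C_UNDER * under)

calibrated = optimal_shift(SIGMA_TRUE)     # about -4.2 degrees
overprecise = optimal_shift(SIGMA_BELIEF)  # about -1.7: too small a hedge
print(f"calibrated:  shift {calibrated:+.1f}, cost {mean_cost(calibrated):.2f}")
print(f"overprecise: shift {overprecise:+.1f}, cost {mean_cost(overprecise):.2f}")

Both judges shift in the payoff-favored direction, but the overprecise judge's narrower belief produces a smaller shift and a higher average cost, mirroring the under-adjustment the studies observed.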

Only when the researchers provided exaggerated feedback, in which errors were inflated by a factor of 2.5, were they able to counteract participants' tendency toward overprecision.
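The arithmetic behind inflated feedback can be seen by continuing the sketch above with an assumed updating rule (the paper reports the 2.5-fold inflation, not how participants internalize it): if a judge infers her error spread from the average absolute error shown in feedback, multiplying the shown errors by 2.5 multiplies the inferred spread by 2.5, widening a belief that was too narrow to begin with.

# Assumed updating rule, not taken from the paper: a judge estimates her
# error SD from the absolute errors shown in feedback.
def inferred_sigma(true_errors, inflation=1.0):
    """SD implied by shown |errors|, using E|Normal(0, s)| = s * sqrt(2/pi)."""
    shown = inflation * np.abs(np.asarray(true_errors))
    return shown.mean() * np.sqrt(np.pi / 2.0)

errors = np.random.default_rng(1).normal(0.0, SIGMA_TRUE, 1_000)
print(f"plain feedback:    inferred SD {inferred_sigma(errors):.1f}")
print(f"inflated feedback: inferred SD {inferred_sigma(errors, 2.5):.1f}")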

The new findings, which show that overprecision is a common and robust phenomenon, led the researchers to urge caution:

“People frequently cut things too close — arriving late, missing planes, bouncing checks, or falling off one of the many ‘cliffs’ that present themselves in daily life,” observe Mannes and Moore.

“These studies tell us that you shouldn’t be too certain about what’s going to happen, especially when being wrong could be dangerous. You should plan to protect yourself in case you aren’t as right as you think you are.”
