Are We Bad at Forecasting Our Emotions? It Depends on How You Measure Accuracy

How will you feel if you fail that test? Awful, really awful, you say. Then you fail the test and, yes, you feel bad—but not as bad as you thought you would. This pattern holds for most people, research shows. The takeaway message: People are lousy at predicting their emotions. “Psychology has focused on how we mess up and how stupid we are,” says University of Texas at Austin psychologist Samuel D. Gosling. But Gosling and colleague Michael Tyler Mathieu suspected that researchers were missing part of the story. So the two reanalyzed the raw data from 11 published articles on “affective forecasting” and arrived at a less damning conclusion: “We’re not as hopeless as an initial reading of the literature might lead you to think,” says Gosling. The study is published in Psychological Science, a journal of the Association for Psychological Science.

Looked at in absolute terms, says Gosling, the charge is true: take a group of people, ask them to make an emotional prediction, and on average they will get it wrong. “But there’s also a relative way of looking at it,” he explains. You thought you were going to feel really, really awful when you saw that red F at the top of the paper—and you ended up feeling only awful. I guessed I’d feel moderately bummed and, after flunking, felt only mildly so. You forecast you’d feel worse than I forecast I would—and relative to each other, we were both right.
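To put numbers on that example, here is a minimal Python sketch (the distress ratings are hypothetical, invented purely for illustration): both forecasts overshoot, so each person is wrong in absolute terms, yet the rank order of who felt worse is exactly right.

```python
# Hypothetical distress ratings on a 0-10 scale (higher = worse);
# these numbers are invented for illustration only.
forecasts   = {"you": 9.0, "me": 5.0}   # predicted distress after failing
experiences = {"you": 7.0, "me": 3.0}   # actual distress after failing

# Absolute accuracy: each individual forecast misses its outcome.
for person in forecasts:
    error = forecasts[person] - experiences[person]
    print(f"{person}: forecast off by {error:+.1f}")  # both overshoot by +2.0

# Relative accuracy: the ordering of people is preserved. Whoever
# forecast more distress also experienced more distress.
print(forecasts["you"] > forecasts["me"])      # True
print(experiences["you"] > experiences["me"])  # True
```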

The authors combed through the literature with two criteria in mind: the study had to be “within-subject,” meaning the same person did the forecasting and reported the later feeling; and the two reports had to be about the same event. They ended up analyzing the raw data of 11 articles, comprising 16 studies and 1,074 participants. The result: when accuracy is indexed in relative terms—that is, by where each individual’s forecast stands within the group—people turn out to be better predictors than the average absolute accuracy suggests.
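As a rough sketch of how the two indices can diverge in practice, the following computes both on made-up numbers (not the study’s data): mean absolute error for the absolute index, and the Pearson correlation between forecasts and later reports across participants for the relative one. Note that statistics.correlation requires Python 3.10 or later.

```python
import statistics

# Made-up forecast/experience pairs (not the study's data). Each entry
# pairs one person's prediction with that same person's later report
# about the same event, per the within-subject criterion above.
forecasts   = [9, 7, 8, 4, 6, 3]  # predicted distress, 0-10 scale
experiences = [7, 5, 6, 2, 5, 2]  # reported distress after the event

# Absolute index: on average, how far off is each person's forecast?
mae = statistics.mean(abs(f - e) for f, e in zip(forecasts, experiences))
print(f"mean absolute error: {mae:.2f}")  # ~1.67 -- everyone overshoots

# Relative index: do people who forecast the most distress also report
# the most? Pearson correlation across participants (Python 3.10+).
r = statistics.correlation(forecasts, experiences)
print(f"forecast-experience correlation: {r:.2f}")  # ~0.98 here
```

On these invented numbers the group is biased (every forecast overshoots), yet individuals' forecasts track their later feelings almost perfectly, which is the pattern the relative index is designed to capture.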

Neither way of thinking about accuracy is objectively better than the other, says Gosling, but relative accuracy might be more useful in real life. His example: An HIV clinic has learned that its clients are generally less upset than they thought they’d be at receiving a positive HIV test. Rather than assign counselors to clients at random, the clinic might serve people better if it knows in advance who is going to have the worst time of it, and prepares those people for possible bad news.

“The story here is not, ‘are we bad forecasters or aren’t we?’ For me, the story is that past literature says we’re bad at this. And in truth we are bad at it in some ways, but not in others.” The central finding: “It’s complicated.”

