In Reading Facial Emotion, Context Is Everything

In a close-up headshot, Serena Williams’ eyes are pressed tensely closed; her mouth is wide open, teeth bared. Her face looks enraged. Now zoom out: The tennis star is on the court, racket in hand, fist clenched in victory. She’s not angry. She’s ecstatic, having just beaten her sister Venus at the 2008 U.S. Open.

“Humans are exquisitely sensitive to context, and that can very dramatically shape what is seen in a face,” says psychologist Lisa Feldman Barrett of Northeastern University and Massachusetts General Hospital/Harvard Medical School. “Strip away the context, and it is difficult to accurately perceive emotion in a face.” That is the argument of a new paper by Barrett, her graduate student Maria Gendron, and Batja Mesquita of the University of Leuven in Belgium. It appears in October’s Current Directions in Psychological Science, a journal published by the Association for Psychological Science.

The paper—reviewing a handful of the hundreds of studies supporting the authors’ position, says Barrett—refutes the contention that there are six to 10 biologically basic emotions, each encoded in a particular facial arrangement, which can be read easily in an image of a disembodied face by anyone, anywhere.

Perception of emotion in faces is influenced by many kinds of context, says the paper, including conceptual knowledge and sensory stimuli. A scowl can be read as fear if a dangerous situation is described, or as disgust if the body’s posture indicates a reaction to a soiled object. Eye-tracking experiments show that, depending on the meaning derived from the context, people focus on different salient facial features. Language aids facial perception as well: study participants routinely did better at naming the emotions in pouting, sneering, or smiling faces when the experimenter supplied words to choose from than when they had to come up with the words themselves.

Equally important is the cultural context of an expressive face. People from cultures that are psychologically similar can read each other’s emotions with relative ease, an effect that similar language or even facial structure does not produce. Culture even influences where a person seeks information to interpret a face. Westerners, who see feelings as residing inside the individual, focus their attention on the face itself. Japanese perceivers, meanwhile, focus relatively more on the surroundings, believing emotions arise in relationships.

The real-world implications of such research are “substantial,” says Barrett. For instance, it offers needed nuance to the understanding of changes in emotion perception in people with dementia or certain psychopathologies, and even in healthy older people, all of whom “may have difficulty accurately perceiving emotion in static caricature faces, but might do fine in everyday life,” where context is available. In law enforcement, “the Transportation Security Administration and other government agencies are training agents to detect threat or deception using methods based on the idea that a person’s internal intentions are broadcast on the face.” If agents are learning to decipher faces out of context, “millions of training dollars might be misspent,” says Barrett, and a misguided psychological notion could be putting public safety at risk.
