New Research From Psychological Science

Read about the latest research published in Psychological Science:

Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition
Stephan de la Rosa, Laura Fademrecht, Heinrich H. Bülthoff, Martin A. Giese, and Cristóbal Curio

People are usually good at using facial expressions to infer other people’s emotions. Motor-based theories propose that viewing a facial expression activates a sensorimotor response that causes the viewer to simulate the expression and thus recognize the associated emotion. These theories predict that sensorimotor and visual processes should lead to the same effects in facial expression recognition. The authors tested this prediction by manipulating whether participants viewed faces with happy or fearful expressions (visual adaptation), executed happy or fearful expressions (motor adaptation), or imagined happy or fearful situations (emotion induction); the participants then judged whether faces generated by morphing happy and fearful expressions to different degrees were happy or fearful. When participants had visually adapted to fearful facial expressions, they were more likely to judge the morphed face as happy (i.e., the opposite emotion), and vice versa. By contrast, participants in the motor-adaptation and emotion-induction conditions were more likely to judge the morphed face as showing the same emotion they had executed or imagined. These results suggest that, in addition to a motor-based route to facial expression recognition, there is a vision-based route that does not rely on sensorimotor simulation.

Spatial Congruency Effects Exist, Just Not for Words: Looking Into Estes, Verges, and Barsalou (2008)
Anna Petrova, Eduardo Navarrete, Caterina Suitner, Simone Sulpizio, Michael Reynolds, Remo Job, and Francesca Peressotti

In 2008, Estes, Verges, and Barsalou showed that when participants read a word with a spatial connotation (e.g., sky connotes up) and then had to identify an unrelated target (e.g., the letter X) presented at the location the word implied (e.g., at the top of the screen), they performed worse than when the target appeared at the opposite location. This interference effect became known as the location-cue congruency (LCC) effect. In this commentary, the authors report nine experiments in which they made small modifications to Estes and colleagues’ procedure. They failed to replicate the LCC effect in eight of the nine experiments. Instead, they obtained feature-integration effects: Targets were identified more quickly when they matched the target in the previous trial in both type and position, or differed from it in both, than when only one of the two features matched. The authors suggest that spatial effects follow a complex pattern and that the LCC effect may not be empirically reliable.
