Your kitchen is full of objects, but it’s safe to say that if you walked in to get a glass of water and found a fire hydrant staring back at you, that particular object would have your full attention.
“The theory of the ‘Bayesian brain,’ the idea that our brain is a hypothesis-testing machine, has become influential over the past decades,” wrote Eelke Spaak (Radboud University) and colleagues in Psychological Science. “According to predictive-processing theories, it is precisely incongruent objects that warrant closest inspection, not congruent ones.”
Per predictive-processing theory, we expect congruent objects to appear in certain settings based on our previous experiences in similar contexts, the researchers explained. Our brains have been found to process these expected objects—such as a tea kettle or a refrigerator in the kitchen—more quickly and more accurately because they align with our predictions. Incongruent objects like the fire hydrant, on the other hand, violate our expectations, generating a prediction error that directs our brain to process the item more deeply before it can be recognized and integrated into our perception of a scene.
This results in a “congruency cost”: Although congruent objects may be recognized more rapidly, we tend to pay greater attention to incongruent ones. As a result, we remember fewer details about congruent objects and are slower to notice changes to them.
Spaak and colleagues found just that in the first of two experiments on how congruency costs influence the visual perception of scenes. In this experiment, 100 participants were tasked with detecting changes between two consecutive photos of the same scene. Participants were consistently faster to identify incongruent changes, like a coffee cup appearing in a toilet paper holder, than congruent changes, like a coffee cup appearing in a dishwasher.
Incongruent objects may monopolize our attention because they can give us more “information content” than expected objects, granting us additional insight into our surroundings once the initial prediction error is resolved, Spaak and colleagues wrote.
The possibility remains, however, that findings related to congruency costs could simply be a result of participants approaching a laboratory task strategically, rather than a reflection of how people actually process information in their day-to-day lives.
In a lab setting, the researchers explained, after identifying multiple incongruent objects, participants might start looking specifically for objects that don’t belong. This could lead them to ignore congruent objects in ways they might not under natural conditions. In other words, participants’ heightened awareness of incongruent objects could be the result of a conscious strategy rather than a genuine perceptual enhancement of incongruent objects.
Spaak and colleagues addressed this concern through a second experiment with 200 participants. Here, participants were asked to observe a single scene before identifying which of two similar congruent or incongruent objects had appeared in the photo. The photos included a mix of scenes with and without relevant incongruent objects, so even if a toilet paper roll appeared in the dishwasher, for example, that didn’t necessarily mean the participant would be asked to identify it later.
Nonetheless, participants were still faster and slightly more accurate at identifying incongruent objects than congruent ones, suggesting that unexpected objects made a stronger impression.
“Congruency costs are not due to participant strategy but, rather, reflect an automatic perceptual phenomenon,” Spaak and colleagues concluded. “Because perception in the real world is never of isolated objects but always of entire scenes, these findings are important not just for the Bayesian brain hypothesis but for our understanding of real-world visual perception in general.”
Spaak, E., Peelen, M. V., & de Lange, F. P. (2022). Scene context impairs perception of semantically congruent objects. Psychological Science, 33(2), 299–313. https://doi.org/10.1177/09567976211032676