When Making Meaning of the World, the Brain is a Multi-tasker

How does the brain confer meaning on the things we perceive in the world? “Many of us favor the theory that, whether it comes in through the eyes or ears, through reading [or other stimuli], it’s all eventually arriving at a common place where the meaning of things is represented,” says Massachusetts Institute of Technology psychologist Mary C. Potter. “If that were so,” she continues, “you’d expect there to be a problem in extracting meanings simultaneously from different sources.”

That is why Potter and her MIT colleague Ansgar D. Endress were surprised by the findings of their new study, to be published in Psychological Science, a journal of the Association for Psychological Science. When the researchers asked participants to perform two kinds of tasks at the same time, one visual and one linguistic, the participants handled both without a hiccup. But handling two visual tasks at once slowed them down.

The study involved 96 participants in total, each taking part in one of several experiments. Participants were first shown images of unrelated scenes at a rate of about four per second, fast enough to take in the gist of each scene, but not its details: men climbing stairs, not four men in sweaters climbing the stairs on the Great Wall of China. Afterward, participants were shown 10 scenes in random order, half familiar and half new, and asked which they had seen before. To show that they had extracted the gist of the scenes, they were also tested with brief verbal descriptions ("men climbing stairs").

In a second experiment, each scene had a word at its center. Across scenes, the words formed a coherent, if nonsensical, sentence (e.g., "Miners duly locate truly tired ladies"). Participants were then tested on memory for either the scenes or the words; because they didn't know ahead of time which test was coming, they had to attend to both. In a third experiment, the word was replaced with a second visual task: a box containing grid lines appeared at the center of each image, and participants had to press a button whenever the density of the lines changed. In a final experiment, participants performed the tasks in isolation.
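For readers who want to see the pacing concretely, here is a minimal sketch of one presentation stream from the scene-plus-word condition, written with the PsychoPy library. The 250-millisecond timing follows the roughly four-scenes-per-second rate described above; the window settings and the colored rectangles standing in for scene photographs are illustrative assumptions, not the authors' actual stimuli or code.

```python
# A minimal sketch of one rapid serial visual presentation (RSVP)
# stream in the dual-task (scene-plus-word) condition, using PsychoPy.
from psychopy import visual, core

win = visual.Window(fullscr=False, color="grey", units="pix")

# Stand-ins for six unrelated scene photographs. Real stimuli would
# be loaded with visual.ImageStim(win, image="scene.jpg") instead.
colors = ["red", "blue", "green", "orange", "purple", "brown"]
words = "Miners duly locate truly tired ladies".split()

SOA = 0.25  # seconds per item: ~4 scenes per second, as in the study

for color, word in zip(colors, words):
    scene = visual.Rect(win, width=400, height=300, fillColor=color)
    label = visual.TextStim(win, text=word, color="white", height=24)
    scene.draw()
    label.draw()      # word overlaid on the scene
    win.flip()
    core.wait(SOA)    # hold each scene-plus-word frame for 250 ms

win.flip()            # blank screen before the surprise memory test
core.wait(1.0)
win.close()
```

A real session would follow this stream with the recognition test (old versus new scenes, or the overlaid words), but the sketch above is enough to show how brief each exposure is.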

The results: participants doing the linguistic task performed equally well whether or not the words were shown atop the images. But the grid-line participants had a harder time doing two things at once, even though they could perform the density test almost perfectly on its own.

“We think it’s because the language system that’s operating when it’s picking up and understanding words is somehow insulated from the scene-recognition system,” says Potter. By contrast, “looking for density is the kind of visual analysis that must overlap with the visual process of meaning-extraction from a scene.”

The findings aren’t inconsistent with the unified-system theory—though Potter thinks that system probably operates later in the process. In the earliest stages of translating stimuli into meaning, the authors conclude, linguistic and conceptual processing take place in “remarkably independent channels.”

