What You Know Changes What and How You See 

Can what we know about an object change the way we see it? Or the way we feel about it? If so, could that be because different brain areas process different features of any given object, such as what we know about its uses?  

In this episode of Under the Cortex, APS’s Ludmila Nunes speaks with Dick Dubbelde, a recent postdoc and adjunct professor of psychology and neuroscience at George Washington University, about how quickly and how well we process different objects.  “In an environment such as surgery, where small spatial details are super important, or in an environment like driving, where reaction time is super important, those little differences can add up, especially at the societal scale,” Dubbelde explains. He explores this research more fully in an article he coauthored with Sarah Shomstein in Psychological Science: “Mugs and Plants: Object Semantic Knowledge Alters Perceptual Processing With Behavioral Ramifications.”   

Unedited transcript:

[00:00:12.290] – Ludmila Nunes

Can what we know about an object change the way we see it or the way we feel about it? If so, could that be because different brain areas process different features of an object, such as its use? This is Under the Cortex. I am Ludmila Nunes with the Association for Psychological Science. To speak about how quickly and how well we process different objects, I have with me Dick Dubbelde from George Washington University. With Sarah Shomstein, also from George Washington University, he coauthored an article published in Psychological Science examining how semantic knowledge about objects, specifically tools and non-tools, influences the way we process them. Dr. Dubbelde, thank you for joining me today. Welcome to Under the Cortex.

[00:01:06.290] – Dick Dubbelde

Thank you. It’s great to be here. I’m pretty excited about doing this. It sounds fun.

[00:01:10.410] – Ludmila Nunes

I hope so. As we start, I would like to ask you what you set out to study and why.

[00:01:18.810] – Dick Dubbelde

So I am actually a recent graduate of the grad program here, I just defended my dissertation recently, and this was an idea that I had in my first year of grad school. I was reading through a bunch of background literature about object semantics and the mechanisms of visual perception and attention, and I started to find a couple of things that seemed to connect. I found research saying that tools were generating activity in the parietal regions of the brain that things like non-tools weren’t getting. And then I found other research saying that you have different kinds of neurons in the parietal regions versus the temporal regions. And I started to think these things might be connected, that they might lead to you having different perceptions between semantic categories. In this case, tools, like a hammer or, in the paper, a mug: anything that you use with your hands. And then the other category is non-tools: anything like a potted plant that you typically don’t interact with when you see it.

[00:02:19.780] – Ludmila Nunes

So basically you started connecting different information from the neuroscience field. Like, these objects are processed by different areas in the brain, and these different areas have different types of cells, so they might be doing something differently in the way they’re processing these objects. And you connected that with information from cognitive psychology and semantic memory on how tools and non-tools have different properties.

[00:02:46.790] – Dick Dubbelde

Yes.

[00:02:49.110] – Ludmila Nunes

So what did you do? What was your procedure for this study?

[00:02:54.390] – Dick Dubbelde

So another tentpole of the study was a paper I found where they had people look at just circles on a computer screen. And those circles could either have a small gap in them or they could flicker on the screen. So they had two measurements: how easily people could detect that gap, or how easily they could detect that flicker. And then what they manipulated was how the participants responded. They could either respond on a keyboard, like we normally do in any kind of cognitive psychology experiment, or they would press buttons that were on the sides of the computer screen. So their manipulation was whether participants’ hands were far away from the stimuli or very close to the stimuli. I read that paper, and what I saw was tools and non-tools. Tools are things that are near your hands. Non-tools are things that are away from your hands. And so I adopted that paradigm. But now, instead of having buttons on or near the computer screen versus a keyboard, I had everybody just respond on the keyboard. What I manipulated instead was I took the circles out of that paradigm and put in images of objects.

[00:04:00.400] – Dick Dubbelde

So I had a group of tool objects that were all line drawings of different tools, like a mug, like a hammer. And then I had another group of line drawings that were all non-tools: like a potted plant, like a fan, like a lamp. A fire hydrant was another one of them. And it was the same procedure: there could be a gap, and you would have to say if there was a gap or not. The idea there being, if you are seeing a non-tool, then you’re primarily using the more temporal parts of your visual system, and those parts of the brain are very good at seeing details. So we hypothesized that if you have a lot of activity there because you see a non-tool, you should be a little bit better at seeing details; you should be better at detecting the spatial gap on those objects. The opposite, just like the research I was talking about earlier: if you see a tool, you get activity in the parietal parts of your brain, and those parts tend to be a little bit faster. There’s a different kind of neuron there. It has more myelin on the axons, so they end up being faster overall.

[00:04:59.970] – Dick Dubbelde

The prediction with that was that if you see a tool, you get activity in the parietal regions, so you should be better at detecting the flicker when the object flickers on the screen.

[00:05:09.490] – Ludmila Nunes

And what did you find?

[00:05:11.730] – Dick Dubbelde

We found support for both of those hypotheses, to varying degrees. So the non-tool objects, when people saw those, they were better at detecting the little gaps in the objects. It was as if they were seeing the non-tool objects with higher detail. We didn’t originally find the same thing with the flickering on the screen: when people saw tools, they weren’t better at detecting the flickering. Through a couple of experiments, though, what we ended up finding is that when we went from a detection task, does the object flicker or does it not, to a discrimination task, how fast is it flickering, people were faster at reporting whether it was a slow or a fast flicker when they saw a tool object. So just as they were a little bit better at details when they saw the non-tools, with the tools our participants seemed to be able to see those tools a little bit faster, so to speak.

[00:06:04.150] – Ludmila Nunes

So you found support for your hypothesis?

[00:06:07.270] – Dick Dubbelde

Yes, and we even extended it a little bit. So we replicated that original finding with a different set of objects, which were no longer line drawings but instead more realistic, full-color images. We used all the same objects, but better images for each one. And we found the same results, which was a great replication; it wasn’t something unique about the specific images. There were two other controls we wanted to do. We wanted to make sure that this was actually semantic, that it was because the participants were seeing a non-tool or a tool. And so what we did is we just turned all of the objects upside down to make them harder to recognize. There’s a lot of research, especially in face perception, showing that faces are a lot harder to recognize when they’re upside down. This was actually a suggestion from Dr. Mitroff here at George Washington; he gave me the idea for this experiment. So we turned all the objects upside down with the idea that it would make them harder to recognize. And when we did that, the effect went away. So it did seem to be that this difference between non-tools and tools was because of the semantic difference between those groups rather than anything else it could have been.

[00:07:19.760] – Dick Dubbelde

The last thing we did goes back to this idea that neurons in the temporal regions are a little bit better at details and the ones in the parietal regions are a little bit faster. I wanted to find some amount of evidence that it was actually those different kinds of cells. What I really wanted to do here, being the neuron nerd that I am, was to connect these semantic things down to an actual neuron. And so I found this weird finding where the neurons in the parietal, at least the visual neurons in the parietal regions, are inhibited by red light. There doesn’t really seem to be a reason for it; it’s just an evolutionary quirk that this happens. But we reasoned that if we did our temporal task with a green background or a red background, that should change the strength of our effect. And we found that it did.

[00:08:07.930] – Ludmila Nunes

That’s really cool.

[00:08:09.530] – Dick Dubbelde

Yeah. The results from that one were a little bit quizzical. We found that the red light actually increased the effect, but we were stunned to find that it modulated the effect at all.

[00:08:19.470] – Ludmila Nunes

Yeah, that’s really interesting. Do you have plans to follow up on this research?

[00:08:24.130] – Dick Dubbelde

Not currently, at least personally. As far as the lab, Dr. Shomstein’s lab, they’re always doing this kind of semantic research, so almost definitely we’ll get to it eventually, but we’re not currently doing anything about it.

[00:08:36.380] – Ludmila Nunes

Because I was wondering what would happen if you used non-objects, so basically, things that could be an object but don’t actually exist, so there’s no semantic knowledge about them. Or even using other categories, because here you used objects versus objects, tools versus non-tools; what about objects versus, for example, animals?

[00:09:01.320] – Dick Dubbelde

Yeah, so these were both ideas that we had in late 2019, early 2020. We were planning a virtual reality experiment where we would take unknown objects, like Greeble kinds of things, and we would train participants to either interact with them in VR or not, and then see if we could generate this effect. Then obviously we couldn’t do that study. And then the animal question is something that we haven’t considered looking into more deeply, but it’s always been a question in the back of our heads, because another thing that the lab studies, and I’ve done some research on this too, is the perception of object size. And there are papers from Dr. Konkle’s lab showing that you use different parts of the temporal cortex to perceive smaller objects versus larger objects. And then animal objects, animate objects, were always neither of those; they’re just kind of situated in the middle. And so that’s always been a question in the back of our heads. Like, animals aren’t really tools, but they’re not really non-tools either. You’re going to interact with them more than a potted plant, but probably less than a hammer.

[00:10:09.830] – Dick Dubbelde

So I’ve always been curious about that, but we haven’t looked at it.

[00:10:13.600] – Ludmila Nunes

Yeah, I’m really curious about that. I think you should look at that. Okay. I want to hear about how we might apply these findings to people’s lives. But first, we need to take a short break.

[00:10:26.970] – Speaker 4

It’s never been a more exciting time to join APS. APS membership gives you free access to a growing number of webinars and virtual events to help you advance your career, exclusive opportunities to contribute and share your science, reduced registration rates for two scientific conferences, and so much more. Ready to join a community dedicated to advancing scientific psychology? Visit member psychologicalscience.org to learn more.

[00:11:00.370] – Ludmila Nunes

And we are back. You were just telling us about the relationship between object categories and cognitive processing. Can you think of any practical implications of this research?

[00:11:14.710] – Dick Dubbelde

Yeah, this was a question I struggled with for a long time because I just like to talk about cells, and that’s kind of how I got into this. But what we’ve come to realize is that I think this has some real implications for enhanced displays. So the first thing that always comes to my mind, even though it’s not really a thing anymore, is Google Glass, where it would pull up things. But now we’re starting to use virtual reality for all sorts of different things and pulling up different kinds of information that you would need in whatever environment. Like Iron Man in his helmet. And so for these sorts of augmented displays in things like driving or in surgery, something like that, these sorts of perceptual differences are going to be fairly small. If you see a bunch of tools in that display, you might be a little bit faster. If you see a bunch of non-tools, you might be a little bit better at spatial details. But in an environment such as surgery, where small spatial details are super important, or in an environment like driving, where reaction time is super important, those little differences can add up, especially at the societal scale.

[00:12:20.130] – Dick Dubbelde

So I think this and other research like this, like if we start to do these studies on how we perceive animate versus inanimate objects or whether we can train an object to be a tool or not, is going to have real implications for how we present information as we further incorporate it into our devices and our fields of vision.

[00:12:43.230] – Ludmila Nunes

So implications not only for the user, but probably also for how these environments and virtual reality and artificial intelligence are being designed.

[00:12:54.970] – Dick Dubbelde

Absolutely. And also the artificial intelligence thing. As you get more and more into cog psych, you start to realize that we see things because they give us the opportunity to do different things in our environment. And so by understanding how these sorts of semantic categories fit into a human head and how we connect those to possibilities for action, I’m not a computer scientist, but I feel like that could have applications for how we’re programming things like AI and robots too.

[00:13:25.520] – Ludmila Nunes

That’s exactly what I thought of when I read your study. I’m also not a computer scientist, but it’s a real translation of how we use things into how they are mapped onto our brain. And it seems like a very small difference, but it matters, right?

[00:13:43.540] – Dick Dubbelde

And these are differences that exist for a reason. The parietal cortex is largely about connecting things like vision to action. And so it has evolved to be more myelinated. It’s evolved to be faster so that you can react as fast as possible. Temporal cortex, it evolved to be better at these sorts of detail perceptions so that we can tell different plants apart. We know which ones are poisonous and which ones aren’t. And it’s the kind of thing that fascinates me.

[00:14:08.580] – Ludmila Nunes

Yeah, that’s really interesting. This is Ludmila Nunes with APS, and I’ve been speaking to Dick Dubbelde from George Washington University, author of an article on how what we know changes what and how we see. Thank you so much for joining me today.

[00:14:26.790] – Dick Dubbelde

Absolutely. And thank you for inviting me. This was great. And I’m glad that people are interested in this article.

[00:14:32.570] – Ludmila Nunes

And if anyone is interested in reading this study or learning more, please visit our website, psychologicalscience.org.
