Cover Story

Researching the Extremes of Visual Perception

Staring at bag after bag on an x-ray monitor at an airport check-in line, a security officer can get used to the routine of not seeing anything suspicious. Nail file, book light, knitting needles, machete — all clear. Whoops.

APS Fellow Jeremy Wolfe of Brigham & Women’s Hospital and Harvard Medical School aims to understand and reduce this sort of mistake, the mental blunders we can make at the limits of our visual search abilities. Airport security screening has attracted his attention because it pushes our visual perception to these limits: Screeners need to scan through enormous amounts of visual data to find rare target objects that might not even appear. Wolfe suggests that the scarcity of these target objects can make them harder to find even when they are present.

In a 2005 study published in Nature, Wolfe had subjects act like security personnel and scan mock x-ray images of baggage for targets that appeared at different frequencies in different trials. When targets were less frequent, scanners spent less time searching for targets and were more likely to miss them. “If you don’t find it often,” summarizes Wolfe, “you often don’t find it.” Other results from Wolfe’s lab suggest that a brief period of re-training for security personnel might help to adjust search behavior in a way that could reduce mistakes.
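The logic behind that result can be sketched in a few lines of code. The toy simulation below is purely illustrative (the quitting threshold, display size, and prevalence rates are invented, and it is not a reproduction of the Nature study); it simply assumes, in line with Wolfe’s account, that rare targets lead a searcher to give up earlier, so any target that happens to lie beyond the quitting point goes unseen.

# Illustrative toy only: a searcher inspects items one at a time and gives up
# after a fixed "quitting threshold" number of items. The threshold values,
# display size, and prevalence rates below are invented for this sketch.
import random

def miss_rate(prevalence, quitting_threshold, n_trials=100_000, items_per_trial=20):
    """Fraction of target-present trials on which the searcher quits too early."""
    misses = present = 0
    for _ in range(n_trials):
        if random.random() >= prevalence:
            continue  # target-absent trial; nothing to miss
        present += 1
        target_position = random.randint(1, items_per_trial)
        if target_position > quitting_threshold:
            misses += 1  # gave up before reaching the target
    return misses / max(present, 1)

random.seed(0)
print("50% prevalence, patient search:", round(miss_rate(0.50, quitting_threshold=18), 2))
print(" 1% prevalence, early quitting:", round(miss_rate(0.01, quitting_threshold=10), 2))

Run it and the rare-target condition misses roughly half of the targets that are actually there, while the common-target condition misses about one in ten; the difference comes entirely from how soon the simulated searcher stops looking.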

Wolfe’s airport security work is practical and visible, but it’s just one part of an enormously influential body of work on visual attention that helps to explain just how we find what we’re looking for, a problem that presents surprising complexities.

“It’s a bit like the search for Waldo,” explains APS Board Member Anne Treisman of Princeton University, referring to the popular children’s books that challenge the reader to pick out a uniquely dressed character from a large crowd of people. “How do we do this when an object is defined only by a conjunction of features that are shared by other objects in the scene?” Treisman’s work on visual attention — in particular, the notion that we bind features of objects together to form a unified concept of an object — laid the groundwork for Wolfe’s research.

Wolfe, who will be delivering the Keynote Address at the upcoming APS convention in May, has helped outline a model of visual processing that contains two pathways: selective and non-selective. The selective pathway can pay detailed attention to a particular object and bind its features together into a cohesive image, but it is limited by its processing power and can recognize only one object, or perhaps a very small number of objects, at a time. The non-selective pathway, on the other hand, can recognize basic attributes of the visual world very quickly, without being able to bind together the features of any of the objects in the visual field. Together, the two pathways paint a picture that seems relatively rich and that can guide us through the visual world without our needing to fully process every piece of visual information we receive at once.

One of Wolfe’s key contributions to the two-pathway model is the idea of guided search. In the model, the non-selective pathway gives us access to disconnected pieces of information from all over the visual field: colors, shapes, sizes, and other features of objects. These features guide selective attention, pulling its searchlight toward whatever in the scene shares the features of the thing we’re looking for. “Guided search was a major contribution to the problem,” says Treisman.

The guided search model builds on Treisman’s Feature Integration Theory. In both models, initial “preattentive” processing cues us in to the patterns, colors, and shapes that constitute Waldo’s basic features. Guided search proposes that your attention is attracted to the red-and-white stripes of Waldo’s shirt, the shape of his circular spectacles, and his understated blue jeans. With that guidance, your attention can then settle on the real Waldo: the place where all of these features meet.
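That process can be roughed out in code. The sketch below is an illustrative toy, not Wolfe’s published Guided Search model; the scene, feature names, and scoring rule are all invented here. Preattentive feature maps score every item by how many of the target’s basic features it shares, and selective attention then inspects candidates one at a time in order of that guidance signal, checking the full conjunction only when it gets there.

# Illustrative toy, not Wolfe's published Guided Search model: the scene,
# feature names, and scoring rule are invented for this sketch.
import random

TARGET = {"color": "red-white", "eyewear": "round glasses", "pattern": "stripes"}

scene = [
    {"name": "beach umbrella", "color": "red-white", "eyewear": "none",          "pattern": "stripes"},
    {"name": "clown",          "color": "red-white", "eyewear": "none",          "pattern": "dots"},
    {"name": "dog",            "color": "brown",     "eyewear": "none",          "pattern": "plain"},
    {"name": "Waldo",          "color": "red-white", "eyewear": "round glasses", "pattern": "stripes"},
]

def guidance(item):
    # Non-selective pathway: count shared basic features (no binding yet),
    # plus a little noise so guidance is helpful but not perfect.
    matches = sum(item[feature] == value for feature, value in TARGET.items())
    return matches + random.gauss(0, 0.8)

random.seed(2)
# Selective pathway: attend to one candidate at a time, best-guided first,
# and only then verify the full conjunction of features ("binding").
for item in sorted(scene, key=guidance, reverse=True):
    if all(item[feature] == value for feature, value in TARGET.items()):
        print("Found:", item["name"])
        break
    print("Inspected and rejected:", item["name"])

Because of the noise term, attention will sometimes land on the striped beach umbrella first and reject it before settling on Waldo, which is roughly the experience of scanning a crowded page; without any guidance at all, every clown and dog would be an equally likely stop along the way.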

Waldo searching is interesting because it is difficult. “Under normal circumstances, we’re doing visual searching all day, every day, so brilliantly well that it doesn’t occur to you that you’re doing a search task!” exclaims Wolfe. “The cases that are useful to research — and fun to demonstrate — are the places where this breaks down. We’re doing psychological atom smashing here, in order to figure out how you search under normal circumstances.”

The stresses of airport security have provided one important laboratory for Wolfe’s work, but another line of his research has made use of the phenomenon of change blindness. Integral to a magician’s act, change blindness is what can cause us to miss a substantial change in an object that we’re not paying attention to, as long as the change doesn’t disrupt the gist of the scene. The change can be dramatic too: A 1999 study by Simons and Chabris showed that subjects paying close attention to a group of actors on a screen could be completely oblivious to an actor in a gorilla suit walking across the frame.

In Wolfe’s description of the phenomenon, our non-selective visual processing and our prior experience with the scene provide the gist of an image, while our selective visual attention provides current updates for just a few of the objects in the scene at any moment. This combination of information convinces us that we can fully see and understand the scene in front of us, so we’re surprised to find out that we weren’t actually sure what we were looking at.

Wolfe’s work with change blindness has helped to demonstrate the limits of our visual attention: A 2006 study showed that our ability to detect change may have an even lower capacity than the standard estimates of about four objects.

Wolfe thinks that basic research on attention might help to find the most effective ways for a human brain to work together with a computer to conduct a successful visual search. “Many, many years of research have failed to make machines smart enough to get the human out of the search task,” laments Wolfe. In routine breast cancer screening, for example, about three tenths of a percent of mammograms will show tumors. This low prevalence complicates what is already a difficult visual search task. Computer Aided Detection (CAD) systems can highlight tissue that looks potentially cancerous, but since the CAD system is imperfect, a person is still needed to search the highlighted tissues and to check if the computer missed something.

The same is true of airport security, where a computer can point out objects that are potentially dangerous, but security personnel need to make sure they’re looking at a real handgun and not a hairdryer.

Despite the visual limitations that Wolfe’s research has helped to discover, human vision has a lot going for it. “People are great at looking for things!” he reminds us. “If you sent your laptop into the kitchen and told it to find the coffee, then find the coffee mug, it’d take hours to process that visual data, even though you could do that ridiculously easily.”

“Jeremy Wolfe’s work tells you things about the senses that aren’t obvious,” lauds Linda Bartoshuk, APS President and a sensory researcher. “He shows how scholarly work in attention can be applied to real-world problems.”

“It’s easy to say you just see stuff,” says Wolfe. “But you don’t realize that the act of seeing involves this active attention mechanism bouncing around the world at a rate so fast you can’t introspectively appreciate it.” Wolfe’s work has made enormous strides towards picking apart the deceptively complex world of visual attention, and has shown how the theoretical structure of perception can impact our daily lives.

