About Face

The motorcycle accident happened in August of 1980, and life would never be the same for the 39-year-old driver. His right arm suffered significant damage, and he was right-handed. His judgment of construction design disappeared, and he was a city planner. Scenic landscapes and potential mates, once alluring sights, now aroused nothing in him, blending instead into indistinguishable images he could describe only as “dull.”

If after the collision the patient had inspected himself in a mirror, he might not have recognized the face looking back, but not because unfamiliar bandages or bruises distorted the view. Rather, this lack of recognition would have stemmed from the most prominent, and likely most painful, consequence of the accident: prosopagnosia, a disorder that renders a person unable to identify a face, be it the mailman’s, a spouse’s, or even one’s own.

“It’s an amazingly devastating disorder, to wake up and not be able to distinguish your family member,” says Russell Bauer of the University of Florida, who studied the victim for many years and is co-editing an upcoming special issue on face processing for the Journal of Neuropsychology.

Deficits in facial perception caused by brain damage have been described by psychologists since the late 19th century, but the roots of the modern understanding of prosopagnosia are traced to Joachim Bodamer, who coined the term some 60 years ago. The condition is simultaneously comical, horrifying, and fascinating. It serves as the subject of one of Oliver Sacks’ most beloved clinical stories, The Man Who Mistook His Wife for a Hat. Normally, people see another person “through his persona, his face,” writes Sacks. But for the patient with prosopagnosia, “there was no persona in this sense – no outward persona, and no person within.”

The disorder’s complexity and curiosity sparked a scientific interest in face processing that continues today. Research in the field, once primarily focused on clinical cases, has evolved to include functional imaging and genetic research.

For all its advancements, however, face perception remains a contentious area. Scientists agree that one region of the brain, dubbed the fusiform face area, plays a major role in facial recognition. They disagree, though, on whether that area plays other roles as well and on whether face processing occurs innately or through gradual expertise.

“Face processing has been something that has interested people for many, many years,” says Bauer of the decision to prepare a special issue of the journal. “We felt it was time to take stock of where we were.”

Face First
From the moment they’re born, infants begin gathering information on faces. Studies have shown that, within just a few exposures, newborns become so familiar with their mother’s face that they prefer it to a stranger’s. Almost as quickly, infants seem to gaze longer at faces that adults deem attractive than at those considered unattractive.

“We’ve come to know more in the last five to ten years about how infants respond to the social attributes of faces,” says APS Fellow Paul Quinn of the University of Delaware.

Some of these findings seem counter-intuitive. After all, shouldn’t a newborn seek out novelty – the stranger as well as the parent, the ugly as well as the beautiful – so as to grow familiar with a range of colors and characters? In 1964, Robert Fantz reported in Science that babies do indeed show more attention to new stimuli.

But that general rule goes out the window when human faces enter the picture, says Quinn. Many years ago, Quinn reported that infants just a few months old prefer silhouettes of human heads to those of animals. In a 2002 issue of Science, researchers reported that 6-month-old infants can individuate the faces of monkeys, but that this ability disappears by 9 months and remains generally absent in adults. More recently, Quinn found that by the time infants are 3 months old, those reared primarily by women prefer female faces – and vice versa for those cared for mostly by men.

“This suggests to me that infant-looking is directed by two systems of motivation,” Quinn says. “A social system which directs attachment relationships with familiar objects, and then a non-social system that directs infants to explore properties of novel objects in their environment.”

Recently, Quinn and a group of researchers led by David J. Kelly of the University of Sheffield in England wondered whether a preference for facial familiarity leads to a recognition bias toward certain races, as it does for genders and species. Do other-race faces, as the objectionable cliché goes, truly all look the same?

To find out, the researchers gathered nearly 200 Caucasian infants, split evenly among 3-, 6-, and 9-month-olds. Sitting on their mothers’ laps, the infants looked at images of faces from four ethnic groups, African, Asian, Middle Eastern, and Caucasian, projected onto a screen. Meanwhile, experimenters recorded the infants’ eye movements and gaze durations to determine whether recognition had occurred.

By 9 months, infants recognized only faces within their own racial group, the researchers reported in the December 2007 issue of Psychological Science. Six-month-olds showed a trend in the same direction. The youngest group, meanwhile, recognized faces from all of the racial groups.

Perhaps frequent exposure to a certain race leads infants to “process same-race faces as individuals, but other-race faces at the…category level,” Quinn says. “You come into the world with a representation of a face that is unspecified with respect to these social attributes. Then, depending on the type of experience you have, your face representation becomes tuned to particular values.”

Expertise vs. Specialty
The findings in infants reflect both sides of the larger debate over how people process faces. On one hand, the fact that infants nail down a mother’s face after just a few looks implies some specialized, innate understanding. On the other hand, the impact of the caregiver’s gender and race on facial perception suggests some gradual acquisition of facial expertise.

“Although we know that faces are a special class, we still don’t know if they require a special module in the brain. We haven’t really nailed down the effects of learning,” Bauer says. “The balance is something we don’t quite understand.”

Brain imaging evidence suggests that several areas of the brain play a role in face processing – perhaps none more than the fusiform gyrus, located on the brain’s underside, roughly behind the right ear. In many studies, this region has responded so much more strongly to facial images than to non-facial objects that it has acquired a more telling name: the fusiform face area.

Researchers who believe that this area innately specializes in face perception can point to a bundle of evidence. Patients with prosopagnosia lose the ability to recognize faces after damage to the right temporal lobe, yet patients with other types of brain damage retain face recognition while losing the ability to distinguish objects. Research in monkeys has long shown areas of the brain that respond strongly to facial images; in a 2006 issue of Science, a group of researchers reported an area of macaque brains in which 97 percent of neurons responded far more strongly to faces than to other stimuli.

More recently, a team of researchers including Bradley C. Duchaine of University College London delivered transcranial magnetic stimulation, or TMS, to subjects’ right occipital face area, another region thought to play a large role in facial perception. TMS disrupted the ability to process faces but had no impact on the perception of houses, the authors reported in a September 2007 issue of Current Biology.

Earlier in 2007, Duchaine and two other researchers argued that the growing evidence for facial specialty had reached a point where one could claim a “clear resolution” of the debate. “Cognitive and neural mechanisms engaged in face perception are distinct from those engaged in object perception,” they concluded in the January issue of Trends in Cognitive Sciences.

Not everyone is convinced. “I think if you split object recognition and face recognition apart, you lose something,” says Isabel Gauthier of Vanderbilt University. Some areas of the brain indeed respond highly to faces, she says. But those same areas are also active, for example, when a car expert processes cars or when a bird expert watches birds.

“I happen to not think faces are special for any innate reason,” Gauthier says. “I think [face processing] is something we learn. There’s a good chance it happens through experience.”

In 1997, as part of her dissertation, Gauthier created objects called “greebles”: pointy, faceless figures that appear plucked from the mind of Picasso. She wanted to know whether the brain would respond to these novel objects the same way it does to faces. After seven hours of studying them, subjects became expert enough to learn the greebles’ names and notice their distinguishing characteristics. In other words, subjects treated the greebles as they treat the different hairlines, eye levels, and nose lengths that make human faces unique.

Gauthier used functional imaging to track activity in the fusiform face area before and after subjects became familiar with greebles. Sure enough, as greeble training increased, so did brain response in this region. “A lot of people have come up with hallmarks of face processing,” she says. “For me, these effects are doors into understanding expertise.”

Some behavioral scientists have expressed skepticism over Gauthier’s greeble work. (One of them, Nancy Kanwisher of the Massachusetts Institute of Technology, a leading voice for the specialized theory, was unable to comment for this story.) Gauthier points to some of her more recent work for further evidence that object and face perception might not be mutually exclusive. When car experts were asked to process faces and automobiles simultaneously, Gauthier found competition in the fusiform face area, she reported in a 2005 issue of Current Directions in Psychological Science. As car expertise increased, so did this neural interference.

“This tells you that whatever is special with faces,” Gauthier says, “it’s not unique.”

For Autistic Children, a Face Lift
Recall, for a moment, the motorcycle victim mentioned earlier. After the accident, Russell Bauer showed the man faces of relatives and celebrities, and the victim could not identify them, Bauer reported in Neuropsychologia in 1984. Even when given multiple-choice options, the victim performed at chance.

Oddly, though, skin conductance tests indicated that some subliminal recognition was occurring when the victim saw faces he would have known before the accident. Something in the brain knew that a familiar face, even by no name at all, should smell as sweet.

Subsequent studies have confirmed this paradox: people with prosopagnosia can’t recognize a face, but their fusiform face area is active when they look at one. The situation gets stranger, though, when considering people with autism. They’re by no means prosopagnosic. Still, many of them have difficulty processing faces. What’s more, when people with autism do look at faces, they show little activity in the fusiform gyrus.

Exactly why this occurs is unclear, says Jim Tanaka of the University of Victoria in Canada. One plausible explanation fits in well with the expertise theory of face perception. The trouble that autistic patients have with face processing might stem from their social attention deficits, Tanaka says. “They won’t respond to faces that typically developing children do, so if they’re not looking at faces, they’re unlikely to develop the expertise most of us have.”

To address the problem, Tanaka and a group led by neuropsychologist Robert Schultz recently created a computer activity called Let’s Face It. Children with autism tend to process faces by their individual parts, Tanaka says. They might see the eyes, nose, and mouth, for example, as separate and perhaps isolated features. Let’s Face It aims to improve holistic facial perception.

Using a computer, children search for several faces hidden within a landscape scene. The faces are hairless and often appear blurred or even upside down. As the levels progress, the faces become harder to distinguish, often merging almost seamlessly into a waterfall, hillside, or even the ear of a tiger. In addition to the computer program, Let’s Face It includes several other activities, such as building a face from individual parts.

In a five-year study, which concluded in August, the researchers administered Let’s Face It activities to children with autism for two months, while a comparison group of children received no such training. Tanaka did not want to go into details because the paper is currently being written, but he did say the children who engaged in the activities seemed to improve their face perception, “mostly in holistic processing.”

All Shook Up
This article began with the story of a 39-year-old who lost the ability to recognize faces, even those of celebrities, after a motorcycle accident. It picks up now with another 39-year-old. When shown a face of Elvis Presley, this woman identified The King as none other than Brooke Shields. Is she just a huge “Suddenly Susan” fan who cares not for “Heartbreak Hotel”? That’s unknown (and, of course, unlikely). What is known is that this woman’s prosopagnosia did not start after a brain injury. Her condition occurred naturally.

Once considered rare, congenital prosopagnosia is now thought to affect as much as 2 percent of the population. Recently, a group of researchers studied the aforementioned woman and nine of her relatives, many of whom had reported difficulty recognizing faces. The family members performed poorly on tasks involving face memory and judgment of facial similarity, the authors wrote in the June 2007 issue of Cognitive Neuropsychology. (Given the field’s debate, it should be noted that the family members also had difficulty on some object recognition tasks.)

The genetic aspects of prosopagnosia and face perception, though still poorly understood, are gaining attention among researchers. In a paper in press for the upcoming Journal of Neuropsychology special issue, a group of Australian cognitive scientists, led by Laura Schmalzl, studied 13 members of a family, ranging in age from 4 to 87, and found a “wide spectrum of face processing impairments.” They contend that genetic prosopagnosia “is not a single trait but a cluster of related subtypes,” and that this disease profile is identifiable at an early age.

“We’re becoming familiar with the idea that these disorders can occur congenitally rather than in an acquired fashion,” Bauer says. Prosopagnosia is “becoming increasingly recognized as something that doesn’t just happen once every billion years.”

Still, he says, many important questions remain, including why prosopagnosics can’t fully rehabilitate their facial perception. Bauer says he is not aware of a single case of complete recovery by a prosopagnosic. “That tells me we don’t have a handle on the mechanism,” he says.

In this sense, at least, the motorcycle victim is not unique. As of seven or eight years ago, when Bauer stopped following the case, the man had failed to regain any faculties of face processing. Like Oliver Sacks’ patient, the motorcycle victim found some solace in music, but the pain of his impairment also led to drinking problems. “Last I heard,” Bauer says, “he was continuing to cope, but had not had any recovery at all.”

