Babies Don’t Learn Language Just By Listening, Janet F. Werker Explains
From the moment they’re born, infants possess a remarkable capacity to absorb language in all its complexity. They prefer listening to speech over other sounds, and watching the faces of people as they talk over anything else in their field of vision.
APS William James Fellow Janet F. Werker has spent her career studying how babies develop this capacity for language so quickly. Her work demonstrates that language acquisition is actually a multisensory process.
“Perception provides the point of entry into native language,” Werker said during her May 2019 award address at the APS Annual Convention in Washington, DC. “It’s the first kind of information that babies get about their native language, and it progressively builds on the organization that’s already in place. So we not only have an evolutionary capacity for acquiring human language, we also start experiencing it quite early.”
In fact, that experience begins before birth, says Werker, a professor at the University of British Columbia. Babies are first exposed to language in the uterus, where they pick up the rhythmic properties of their mothers’ speech.
“We now know as young as 28 weeks gestation there are particular circuits that are close to those used in the adult brain for discriminating speech-sound contrasts,” she says.
That primes infants for sensitivity to human speech as soon as they’re born, and as they mature, their attention to the languages they hear every day increases.
“At birth a baby is prepared to learn any of the world’s languages with already a little bit of specialization for the language heard in utero,” she said during her address. “By 5 or 6 months of age, they’re showing stronger preference and attunement—like for vowels, for the rhythmical properties—to the native language, and by 12 months they’ve become real experts at perceiving the native language.”
Infants’ capacity to learn the properties of other languages isn’t completely closed off by this period, Werker notes, but picking up on these distinctions only becomes more difficult over time.
This process manifests in part in newborns’ remarkable capacity for phonetic discernment, even when they hear speech they’re not exposed to every day, Werker says. In experiments that track where babies turn their heads when they detect a sound change, results have shown that children in their first months of life can detect subtle differences in consonant sounds—an ability that diminishes over time if they grow up in a monolingual household.
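Head-turn discrimination results of this kind are typically summarized per infant as a hit rate (turns on trials where the sound actually changed) against a false-alarm rate (turns on control trials where it didn’t). A minimal sketch of that scoring, with entirely invented trial data:

```python
# Hypothetical trial records from one head-turn session (not real data):
# (trial_type, infant_turned). "change" = the sound actually changed.
trials = [
    ("change", True), ("control", False), ("change", True),
    ("control", True), ("change", False), ("change", True),
    ("control", False), ("change", True), ("control", False),
]

hits = sum(1 for kind, turned in trials if kind == "change" and turned)
changes = sum(1 for kind, _ in trials if kind == "change")
false_alarms = sum(1 for kind, turned in trials if kind == "control" and turned)
controls = sum(1 for kind, _ in trials if kind == "control")

hit_rate = hits / changes          # turned when the sound changed
fa_rate = false_alarms / controls  # turned when nothing changed
print(f"hit rate {hit_rate:.2f}, false-alarm rate {fa_rate:.2f}")
```

An infant is credited with discriminating the contrast only when the hit rate clearly exceeds the false-alarm rate; a baby who turns on every trial scores high on both and shows no evidence of discrimination.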
Werker’s lab has shown this in studies involving Hindi, which has a consonant system that is significantly richer than that of English. For example, an English d has just one sound, while the Hindi d has two—one produced with the tongue at the back of the teeth and the other with the tip of the tongue at the roof of the mouth. As adults, native English speakers have difficulty hearing that distinction, but 6- to 8-month-old babies in English-speaking environments have no trouble discriminating between the two sounds. Babies developing in Hindi-speaking households maintain and even strengthen that ability as they grow, while native English-speaking children lose it.
Werker has also discovered a sensorimotor aspect of this ability, particularly at the point when babies start trying to imitate the speech that they hear. In a 2015 experiment, 6-month-old babies’ gazes were measured using eye-tracking technology as they listened to the two Hindi d sounds while they had teething toys in their mouths. One type of teether restricted the infants from moving their tongues. Babies sucking on those tongue-restricting toys were not able to distinguish the two Hindi d sounds, whereas those who were free to use their tongues could tell the difference. The findings suggest that oral motor movements play a role in speech perception.
Maturation also plays a crucial role. Werker and her colleagues have studied babies born up to 3 months premature who were exposed to unmuffled speech much earlier than babies born on or around their due date. They wondered whether, as a result of that early exposure, the premature infants’ acute sensitivity to speech-sound distinctions (e.g., Hindi vs. English consonant sounds) faded at an earlier developmental stage. Instead, they found that the infants born 3 months prematurely began attuning to the sound of their native language at around 12 to 15 months—the same maturational (post-conception) age as children who were carried to term, but a later chronological age. This suggests that the effect of experience on phonetic discrimination depends on maturation rather than on when infants start hearing unfiltered speech sounds.
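The distinction between chronological and maturational age is simple arithmetic, but it is the crux of the finding. A small illustrative calculation (the helper name and the numbers are hypothetical, not taken from the study):

```python
# Illustrative sketch: converting a preterm infant's chronological age
# (time since birth) into maturational age (time since conception,
# expressed relative to a full-term birth).

WEEKS_PER_MONTH = 4.345  # average weeks in a calendar month

def maturational_age_months(chronological_months: float,
                            weeks_premature: float) -> float:
    """Corrected age: chronological age minus how early the infant was born."""
    return chronological_months - weeks_premature / WEEKS_PER_MONTH

# A baby born ~13 weeks (about 3 months) early who is 15 months old
# chronologically has a maturational age of roughly 12 months--about the
# age at which full-term infants have finished attuning to native sounds.
print(round(maturational_age_months(15, 13), 1))
```

On this accounting, a preterm infant who attunes at 15 months chronologically is attuning right on schedule maturationally, which is why the finding points to maturation rather than duration of listening experience.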
Language perception seems to extend beyond sound to the visual cues of speech as well. Werker’s lab demonstrated this in an experiment involving three groups of infants (ages 4, 6, and 8 months) from monolingual English homes and two groups of infants (ages 6 and 8 months) from bilingual French-English homes. They showed each group silent video clips of three bilingual French-English speakers who recited sentences first in English or French and then switched to the other language.
The researchers found that 6-month-old babies from both bilingual French-English and monolingual English homes watched the video clips for a significantly longer period if the speaker switched languages, suggesting that they could distinguish between the languages visually. But by 8 months, only babies from a bilingual French-English home were able to tell the languages apart using visual cues.
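Looking-time results like these are commonly summarized by comparing each infant’s looking at language-switch trials against same-language trials. A minimal sketch with invented data, computing a paired t statistic by hand rather than relying on any particular stats package:

```python
# Hypothetical looking times (seconds) per infant: trials where the
# silent speaker switched languages vs. trials where they did not.
switch = [9.1, 8.4, 10.2, 7.8, 9.5, 8.9]
no_switch = [6.3, 7.1, 6.8, 6.0, 7.4, 6.5]

# Within-infant differences: positive values mean longer looking at a switch.
diffs = [s - n for s, n in zip(switch, no_switch)]
mean_diff = sum(diffs) / len(diffs)

# Paired t statistic: mean difference over its standard error.
var = sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1)
t = mean_diff / (var / len(diffs)) ** 0.5
print(f"mean looking-time increase: {mean_diff:.2f} s, t = {t:.2f}")
```

Because each infant serves as their own control, the paired comparison absorbs baseline differences in how long individual babies look at anything at all.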
Living and working in Vancouver has given Werker ample opportunity to explore how cultural context shapes babies’ experiences with language. The Canadian city is home to a large Asian population, and many infants in that group grow up in bilingual environments. Experiments with infants in Vancouver have given Werker a window into the cultural cues that influence language perception and acquisition. In a recently published study, Werker and her colleague Lillian May played English-learning White infants sentences in both English and Cantonese and showed them pictures of White people or of people of Asian descent. They found that when the children were hearing Cantonese, they looked more at the Asian faces than when they were hearing English. But when they heard English, they looked at both White and Asian faces equally, suggesting they had already formed language-dependent expectations about speaker ethnicity: in their environment, speakers of Asian descent might speak either Cantonese or English.
Werker’s research has also uncovered a link between specific in utero experiences and language perception. That work has centered on the role that maternal depression during pregnancy can play in the timing of an infant’s language development. The studies have followed pregnant women experiencing depression, some being treated with antidepressants (specifically serotonin reuptake inhibitors) and others receiving no pharmacological treatment, along with expectant mothers with no symptoms of depression. Werker and colleagues have examined indicators such as the heart rates of unborn babies when they’re exposed to language. They also examined the babies’ language development at 6 and 10 months of age, using methods ranging from eye tracking to fMRI scans. They discovered that the infants of mothers treated with antidepressants stopped discriminating the sounds and sights of their native language at a younger age than the babies of depressed mothers who had received no pharmacological treatment.
Werker’s lab is following up with many of these children to see whether these correlations have lasting consequences for their language development.
Werker notes that this research has significant implications for language learning in children with disabilities, including hearing and visual impairments that prevent them from being exposed to the cues that typically drive language growth.
References

Bruderer, A. G., Danielson, D. K., Kandhadai, P., & Werker, J. F. (2015). Sensorimotor influences on speech perception in infancy. Proceedings of the National Academy of Sciences, 112(44), 13531–13536.
May, L., Baron, A. S., & Werker, J. F. (2019). Who can speak that language? Eleven-month-old infants have language-dependent expectations regarding speaker ethnicity. Developmental Psychobiology, 61(6), 859–873.
Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101–B111.
Peña, M., Werker, J. F., & Dehaene-Lambertz, G. (2012). Earlier speech exposure does not accelerate speech acquisition. Journal of Neuroscience, 32(33), 11159–11163.
Weikum, W., Oberlander, T. F., Hensch, T. K., & Werker, J. F. (2012). Prenatal exposure to antidepressants and depressed maternal mood alter trajectory of infant speech perception. Proceedings of the National Academy of Sciences, 109(Supplement 2), 17221–17227.
Werker, J. F., Yeung, H. H., & Yoshida, K. A. (2012). How do infants become experts at native-speech perception? Current Directions in Psychological Science, 21(4), 221–226.