Preschoolers’ Expectations Shape How They Interpret Speech

When we listen to people speak, we aren’t just hearing the sounds they’re making; we’re also actively trying to infer what they’re going to say. Someone might misspeak, forget a word, or be drowned out by background noise, and yet we often get their meaning anyway. This is because we use our past experience with language to hear what we expect them to say. Adults tend to manage this kind of “noisy channel” communication fairly easily, but new findings suggest that 4- and 5-year-old children show the same adaptive ability.

The research is published in Psychological Science, a journal of the Association for Psychological Science.

“Children process language in a way that combines both the auditory signal that they hear and their expectations about what they are likely to hear, given what they know about the speaker,” says psychological scientist Daniel Yurovsky of the University of Chicago. “They are sensitive to how reliable the information sources are, and they can combine them in a way that respects this sensitivity.”

The idea that we integrate two sources of information – incoming perceptual data and expectations based on past experience – when we communicate with each other emerges from developments in machine learning.

“This framework—called the noisy-channel model—grew out of some foundational work in information theory, and now makes a big contribution to things like autocorrect and text-to-speech applications,” explains Yurovsky.
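To make that idea concrete, here is a minimal, purely illustrative sketch in Python (the message labels and probabilities are invented for this example and are not taken from the study): each candidate interpretation is scored by multiplying a prior, reflecting expectations about the speaker, by a likelihood, reflecting how well it fits the sounds that were actually heard.

```python
# Illustrative noisy-channel sketch (invented numbers; not the study's code or data).
# Each candidate message gets a prior (expectation about what the speaker is
# likely to say) and a likelihood (how well it matches the acoustic signal);
# their product, renormalized, is the listener's interpretation.

def interpret(prior, likelihood):
    """Combine prior and likelihood into a posterior over candidate messages."""
    scores = {msg: prior[msg] * likelihood[msg] for msg in prior}
    total = sum(scores.values())
    return {msg: score / total for msg, score in scores.items()}

# An acoustically ambiguous utterance with a plausible and an implausible reading.
prior = {"plausible message": 0.9, "implausible message": 0.1}
likelihood = {"plausible message": 0.5, "implausible message": 0.5}

print(interpret(prior, likelihood))
# -> {'plausible message': 0.9, 'implausible message': 0.1}: when the signal is
# ambiguous, expectations carry the decision.
```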

Yurovsky and colleagues Sarah Case and Michael C. Frank of Stanford University wanted to find out whether this noisy-channel model might also describe the way that children process language.

The researchers recruited 43 children (between 4 and 6 years old) and 50 adults to complete the same task. The participants saw pairs of pictures: in each pair, one picture showed a plausible scene and the other showed an implausible scene. At the same time, they heard a distorted recording, in which a speaker introduced as “Katie” described one of the pictures. The participants had to select which picture in each pair Katie was most likely describing.

For some participants, Katie described the plausible scene (e.g. “my cat has three little kittens”); for others, Katie described a similar but implausible scene (e.g. “my cat has three little hammers”).

In the second round of the task, the descriptions implied by the two pictures were phonologically very similar, differing only by a single consonant or vowel sound (e.g., “I had carrots and peas for dinner” versus “I had carrots and bees for dinner”). In this round, Katie always referred to the implausible scene (“bees”).

The results showed that the preschoolers were able to incorporate what they had already learned about Katie in the first round when interpreting her description in the second round. If Katie typically described the plausible scene in the first round, they were more likely to think that she said “carrots and peas.”

But if Katie previously tended to describe the implausible scene, they wouldn’t “correct” her description in favor of the more logical picture – they assumed that she was referring to the implausible picture, however nonsensical it was.

“These findings show that children are not confined to trying to learn from the sounds they hear, but can use their expectations to try to clean up some of the ambiguity in the perceptual information,” Yurovsky says.

In a follow-up experiment, the researchers varied the amount of noise in the room when Katie was talking. The noisy-channel framework predicts that as speech becomes more difficult to hear—like on a poor cell phone connection—we should rely more on our expectations. And preschool-age children did exactly this: They adapted their responses to Katie’s ambiguous descriptions according to both their previous experience and the noise level in the room.
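A toy continuation of the sketch above shows that prediction (again with invented numbers, not the authors’ model): as background noise flattens the acoustic evidence, the same utterance shifts from being heard as the signal-favored “bees” toward the expected “peas.”

```python
# Hypothetical sketch of the noise prediction (invented numbers; not the authors' model).
# More background noise makes the acoustic likelihood less informative, so the
# listener's prior expectations carry more weight in the interpretation.

def interpret(prior, likelihood):
    """Combine prior and likelihood into a posterior over candidate messages."""
    scores = {msg: prior[msg] * likelihood[msg] for msg in prior}
    total = sum(scores.values())
    return {msg: score / total for msg, score in scores.items()}

def noisy_likelihood(clean, noise):
    """Blend the clean likelihood toward uniform as noise grows (0 = clear, 1 = pure noise)."""
    n = len(clean)
    return {msg: (1 - noise) * p + noise / n for msg, p in clean.items()}

prior = {"peas": 0.75, "bees": 0.25}  # expectations favor "peas"
clean = {"peas": 0.10, "bees": 0.90}  # the clear acoustic signal favors "bees"

for noise in (0.0, 0.5, 0.9):
    print(noise, interpret(prior, noisy_likelihood(clean, noise)))
# The interpretation flips from "bees" in clear speech to "peas" under heavy noise.
```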

Overall, the researchers were surprised by how strong a role these expectations played in the preschoolers’ decision making:

“It’s pretty common in this kind of work to show that young children have some competence early, but usually if you compare them to adults you find that the effect is much larger in adults,” explains Yurovsky. “Not so here: At least by 5, and at least in this task, children adjust their expectations about what speakers are saying to the same degree as adults do.”

The researchers hope to conduct additional studies to investigate noisy-channel processing in younger children.

“We hope our ongoing research will help us to understand how children become an active part of the acquisition process – not just as perceivers of their input, but as contributors to it,” Yurovsky concludes.

This research was funded by National Research Service Award F32HD075577 from the National Institutes of Health to D. Yurovsky and by a John Merck Scholars Fellowship to M.C. Frank.

All data and materials have been made publicly available via the Open Science Framework and GitHub. The former can be accessed at https://osf.io/96cx7/, and the latter can be accessed at https://github.com/dyurovsky/noisy-kids and http://dyurovsky.github.io/noisy-kids/. The complete Open Practices Disclosure for this article can be found at http://pss.sagepub.com/content/by/supplemental-data. This article has received the badges for Open Data and Open Materials. More information about the Open Practices badges can be found at https://osf.io/tvyxz/wiki/1.%20View%20the%20Badges/ and http://pss.sagepub.com/content/25/1/3.full.

