The human brain is not detail-oriented when it comes to hearing, opting for the big picture instead, according to new research published in Psychological Science, a journal of the Association for Psychological Science.
Researchers at the University of California, Berkeley, found that when faced with many different sounds, such as notes in a violin melody, the brain doesn’t process every individual pitch, but instead quickly summarizes them to get an overall gist of what is being heard.
The findings could potentially improve the design of hearing aids to help people tune into one conversation when multiple people are talking in the background, something people with normal hearing do effortlessly. Also, if speech recognition software could emulate the information compression that takes place in the human brain, it could represent a speaker’s words with less processing power and memory.
In the study, participants could accurately judge the average pitch of a brief sequence of tones. Surprisingly, however, they had difficulty recalling information about individual tones within the sequence, such as when in the sequence they had occurred.
“This research suggests that the brain automatically transforms a set of sounds into a more concise summary statistic — in this case, the average pitch,” said study lead author Elise Piazza, a UC Berkeley doctoral student in the Vision Science program. “This transformation is a more efficient strategy for representing information about complex auditory sequences than remembering the pitch of each individual component of those sequences.”
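The kind of summary statistic Piazza describes can be illustrated with a short sketch. This is not the study’s analysis code; it is a minimal example, assuming tones are represented as frequencies in hertz and that averaging is done in log-frequency (semitone) space, since pitch perception is roughly logarithmic in frequency:

```python
import math

def freq_to_semitones(freq_hz, ref_hz=440.0):
    # Convert a frequency to semitones relative to a reference (A4 = 440 Hz).
    return 12 * math.log2(freq_hz / ref_hz)

def semitones_to_freq(semitones, ref_hz=440.0):
    # Inverse conversion: semitones back to frequency in hertz.
    return ref_hz * 2 ** (semitones / 12)

def mean_pitch(freqs_hz):
    # Summarize a whole tone sequence as a single number: the average
    # pitch, computed in semitone space and converted back to hertz.
    semis = [freq_to_semitones(f) for f in freqs_hz]
    return semitones_to_freq(sum(semis) / len(semis))

# A hypothetical four-tone sequence (A4, B4, C5, D5 in hertz):
melody = [440.0, 493.88, 523.25, 587.33]
print(round(mean_pitch(melody), 1))  # one summary value, ~508.4 Hz
```

Note how the sequence collapses to a single value: exactly the compression the study describes, where the gist (the average) survives while the identity and order of the individual tones are discarded.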
Co-authors on this study, all at UC Berkeley, include Timothy Sweeny, postdoctoral researcher in psychology; David Wessel, professor of music; Michael Silver, associate professor of optometry and neuroscience; and David Whitney, associate professor of psychology.
This research was supported by the Department of Defense through a National Defense Science and Engineering Graduate Fellowship awarded to E. A. Piazza, by National Science Foundation Grant 1245461 to D. Whitney, and by National Eye Institute Core Grant EY003176.