From the origins of rhythmic sense, to the role of genetics in musical aptitude, to the cognitive processes involved in hearing and making of music, psychologists from a variety of areas are investigating a most beautiful and basic form of human expression.
In the early days of Diana Deutsch’s work, some three-and-a-half decades ago, musical notes may as well have been a foreign language to her colleagues. “When I gave talks and I would put up a slide of just simple music notation, the audience would sort of gasp,” remembers the composer, pianist, professor of psychology at the University of California, San Diego, and founder of the journal Music Perception.
Back then, Deutsch says, not only was music-psychological theory in its infancy, but so was the technology to help it grow up. It was hard to produce precise, versatile sounds for music experiments. Now, with advanced software, recording devices, computers, music synthesizers, and other hardware, she says, scientists can generate, modify, and analyze just about any sound.
In the last three decades, according to Deutsch, not only has the field of musical psychology matured, it has diversified. So much so that a single direction of new study is nearly impossible to identify. There is no one question of the moment: the origins of rhythmic sense; the effect of music lessons on other cognitive abilities; the mental processes a musician engages before each sound sounds; the role of genetics in musical aptitude; how hearing music affects mood; how well, months later, musicians can recognize computerized versions of their own playing compared to those of other musicians. New research questions are constantly evolving.
Nonetheless, Deutsch says that one thing is clear: Research on music is relying increasingly on other fields of psychology. Likewise, researchers in other psychology fields, such as language, are increasingly realizing the value of music research.
“Take, for example, the general question of modularity in the processing of information,” she says. How we compartmentalize information, “has been addressed intensively with respect to speech and music. The view that had been prevalent during most of the last century was that whereas speech [involves] one set of brain mechanisms, music [involves] a different set, often claimed to be in the opposite hemisphere.” But according to Deutsch, recent studies in perception and cognition have demonstrated that some language and music functions have separate neural circuitry while other functions share circuitry. Such studies are invaluable for researchers — not necessarily just in music — who are interested in the modularity of brain functions.
Some of Deutsch’s recent work attempts to find the origins of absolute pitch, sometimes called “perfect pitch.” Absolute pitch is the ability to hear a tone — whether from a musical instrument like a piano or cello, a manmade device like the horn of a passing train, or a natural phenomenon like the wind blowing through the trees — and identify it by name — say, “A two octaves above middle C.”
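That kind of naming follows twelve-tone equal temperament, in which each semitone corresponds to a frequency ratio of the twelfth root of two. As a rough illustration of the arithmetic a listener with absolute pitch performs effortlessly (this sketch is not from the research; it assumes the standard A4 = 440 Hz tuning reference):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(freq_hz, a4=440.0):
    """Map a frequency to the nearest equal-tempered note name and octave."""
    # Distance from A4 in semitones; A4 is MIDI note 69.
    semitones = round(12 * math.log2(freq_hz / a4))
    midi = 69 + semitones
    octave = midi // 12 - 1  # MIDI 60 maps to C4, i.e., middle C
    return f"{NOTE_NAMES[midi % 12]}{octave}"

print(note_name(261.63))   # middle C -> C4
print(note_name(1760.0))   # A two octaves above A4 -> A6
```

The rounding step is the whole trick: a listener with absolute pitch snaps any incoming frequency to its nearest named category, much as this function does.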
The relationship between early music training and perfect pitch is strong. Musicians who started music lessons before the age of six are far more likely to have absolute pitch (as does Deutsch, by the way) than are musicians who start their training later. But Asian musicians, even those who started their musical training late, also are more likely to have absolute pitch. The question, she says, is whether this ability is genetic or learned. Many Asian languages depend on pitch information for meaning. Perhaps just learning language is enough to prime the absolute-pitch pump. To find out, Deutsch has tested the pitch memory of native speakers of Mandarin and Vietnamese; she found that they do indeed remember pitches better than non-pitch-language speakers.
Similarly, Erin Hannon, an assistant psychology professor at Harvard University, has investigated how people from different cultures organize rhythm. For Hannon, this meant finding foreigners — in this case, Macedonians and Bulgarians living in Toronto, where Hannon was serving as a post-doc. Most Western music is organized in rhythmic groups of twos and threes and their multiples. A waltz, for example, is grouped in threes (ONE-two-three, ONE-two-three). But music in parts of Eastern Europe is often organized in groups of fives or sevens (ONE-two-three-four-five; ONE-two-three-four-five-six-seven) and sometimes in long sequences that include twos, threes, fives, or sevens. Hannon found that to easily sense these uneven rhythms, most people needed to have grown up with them.
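The groupings above can be made concrete with a small sketch (illustrative only, not drawn from Hannon's materials): each group's first beat carries the accent, so even groupings space the accents regularly while a mixed grouping such as 2+2+3 — a common “uneven” Balkan pattern — spaces them irregularly.

```python
def spell_meter(groups):
    """Spell out a rhythmic grouping as counted beats, accenting each downbeat."""
    words = ["one", "two", "three", "four", "five", "six", "seven"]
    spelled = []
    for g in groups:
        # Capitalize the first beat of each group to mark the accent.
        spelled.append("-".join([words[0].upper()] + words[1:g]))
    return " ".join(spelled)

print(spell_meter([3, 3]))     # a waltz: ONE-two-three ONE-two-three
print(spell_meter([7]))        # a seven-beat group counted straight through
print(spell_meter([2, 2, 3]))  # an uneven seven: accents fall at irregular intervals
```

Listeners raised on the evenly spaced accents of the first pattern are exactly the ones Hannon found struggling with the third.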
But at what point do people learn — or lose — the ability to sense these exotic rhythms? “We showed that the ability to discriminate Balkan rhythms declines by 12 months of age,” Hannon says. For a follow-up study, “we exposed 12-month-olds and adults to recordings of [complex-rhythm] folk music at home and then we bring them back and test them again. We find that the infants perform at the same level for Balkan music as they did for Western rhythms whereas the adults barely learn anything. So the implication is that when you learn this stuff as a baby you’re going to learn much more rapidly and have a much easier time of it than when you’re an adult.”
“I’m not saying I think that all aspects of rhythm are learned and that infants can process all of them,” Hannon explains. “What we’re saying is that for some sorts of things you learn to ignore information that you might have paid attention to if you had grown up listening to certain kinds of music.” The underlying mechanisms for these lost abilities may, Hannon says, explain why adults have so much trouble losing their native accents when they learn a new language and why learning new languages at all is so difficult past childhood.
Like Hannon, Bruno Repp, a senior scientist at Yale’s Haskins Laboratories, is interested in the intersection of music and language. He has specialized in both. His early career was spent studying speech perception, but in the mid- 1980s he switched to music perception and performance. Also like Hannon, Repp is interested in how people process rhythmic material. But his research subjects are accomplished musicians.
Interestingly, Repp’s research shows that these musically accomplished subjects, too, are stumped by rare rhythms. When professionally trained participants, for whom conventional Western rhythmic patterns are the usual musical language, are asked to tap out uneven (or “compound”) rhythms, Repp says, “they cannot tap these intervals precisely; they really can’t.” He thought it might help if he provided more visual information, but when the musicians attempted to synchronize with a visual rhythmic map, “they were exactly as inaccurate at producing these intervals as they were on their own, without synchronizing. So having that written template to synchronize with didn’t help at all,” Repp says.
It’s likely that such rhythmic tendencies are part of what forms a musician’s sense of who they are, at least in terms of their musical performance. In one study, Repp had pianists play a piece of music on an electronic keyboard. Months later the musicians returned and listened to a sampling of performances of the same piece by themselves and by others. Even though many musical aspects of the performances were removed electronically, leaving not much more than phrasing, the tendency was for musicians to recognize their own playing.
Some of Repp’s recent research has continued this work. “We had pianists record one part of a piano duet, a fourhand piece, in the lab,” he explains. Then several months later the musicians returned to record the remaining half of the duet as a computer played back either the material they had recorded earlier or material a different pianist had recorded. “Our prediction, based on similar arguments to the self-recognition experiment, was that the pianists would be more accurate synchronizing with themselves than with other pianists. That was indeed the case.” This implies, he says, that people are better able to predict their own actions than those of others. “In order to be able to do that, some sort of internal synchronization is required.”
Measure for Measure
Caroline Palmer focuses on a different sort of synchronization in her research. Palmer, a professor of psychology at McGill University in Montreal, Canada, investigates sequence production, a measure of the relationship between mental preparation to play music and the actual production of sound.
“You might have to remember what to do next before you can think about how to do it,” she says. However, anyone who has watched an accomplished musician sight-read music, she says, will understand that these kinds of reactions take place in milliseconds. This sequencing is described by cascade models and stage models of memory and motor-control processes. “It suggests that they are linked, but they don’t occur at the same time.” For musicians especially, she says, the ability to perform quickly and accurately may reflect a sequence in which knowing “what” to do next precedes knowing “how” to do it.
One hotbed of work within music and neurology is the lab of Palmer’s colleague, Robert Zatorre, a professor of neurology and neurosurgery at McGill. In the 1970s, advances in imaging technologies, most significantly MRI and PET scanners, enabled researchers to test what were then new ideas of the biological foundations of music perception. “Now we have the right technical equipment to look at brain function and we have the right types of experimental techniques and paradigms to apply specifically to music,” Zatorre explains.
Like many other music researchers, Zatorre is interested in what music processing reveals about language and hearing. “Looking at the intersection between music and speech is a good thing because each one informs us about the other,” Zatorre says, “and both together give us a much more complete picture than if we only study one. So even for people who say ‘well, I’m not interested in music,’ I would say to them, ‘okay, but you’ll still learn something that goes beyond just music. You’ll learn more general concepts about the functional properties of this particular [auditory] system.’”
Until the late 1990s, some of the MRI’s drawbacks kept Zatorre from embracing the technology. One of the challenges, he says, is that MRI scanners make so much noise, a critical flaw that not only disturbs the patient but also disrupts the very process under study. Zatorre is now using a strategy of cycling the MRI machine off, letting the subject settle for a few seconds, applying a stimulus, and then activating the scanner while the subject is still responding to the stimulus. He hopes that tools such as these, combined with the work of others, will help scientists unlock such secrets as the origins of perfect pitch and the very roots of musical ability.
“Why is it that some people appear to be more able to learn music or seem to have a knack for music?” asks Zatorre. “What does it mean? There must be something in their brain that explains it.”