The New Yorker:
In the early nineteen-nineties, David Poeppel, then a graduate student at M.I.T. (and a classmate of mine), discovered an astonishing thing. He was studying the neurophysiological basis of speech perception, and a new technique had just come into vogue, called positron emission tomography (PET). About half a dozen PET studies of speech perception had been published, all in top journals, and David tried to synthesize them, essentially by comparing which parts of the brain were said to be active during the processing of speech in each of the studies. What he found, shockingly, was that there was virtually no agreement. Every new study had been published with great fanfare, but collectively they were so inconsistent they seemed to add up to nothing. It was like six different witnesses describing a crime in six different ways.
This was terrible news for neuroscience—if six studies led to six different answers, why should anybody believe anything that neuroscientists had to say? Much hand-wringing followed. Was it because PET, which involves injecting a radioactive tracer into the brain, was unreliable? Were the studies themselves somehow sloppy? Nobody seemed to know.
And then, surprisingly, the field prospered. Brain imaging became more, not less, popular. The technique of PET was replaced with the more flexible technique of functional magnetic resonance imaging (fMRI), which allowed scientists to study people’s brains without the use of risky radioactive tracers, and to conduct longer studies that collected more data and yielded more reliable results. Experimental methods gradually became more careful. As fMRI machines became more widely available, and methods became more standardized and refined, researchers finally started to find a degree of consensus between labs.
Read the whole story: The New Yorker