William James could not have envisioned a medium as powerful and pervasive as the Internet. What he did seem to know, with remarkable clarity for his time, was how information technology might influence cognition and behavior. In his 1890 masterpiece The Principles of Psychology, James recognized that our nervous tissue possessed an “extraordinary degree of plasticity” — meaning external stimuli can alter the very structure of the brain. When “outward agents” flood our sensory corridors and reach the brain, they leave “paths which do not easily disappear,” James wrote. In line with James’s prediction, recent studies have shown that the cognitive profile of computer users differs from that of people who don’t boot up.
James knew the drawback of scattered attention, too. Long before people could browse the Web while instant-messaging a friend between answering emails, James understood the perils of handling too many cognitive tasks at once. The number of “processes of conception” people can engage in at a single time is “not easily more than one, unless the processes are very habitual,” he wrote in Principles of Psychology. Indeed, basic behavioral research confirms that multi-tasking carries a great cognitive cost, and studies in realistic settings have caught those limitations in action.
And with the advent of Google still a century away, James understood that the memorization of facts has its natural limits. Being able to recall knowledge on demand is great, he said in Talks to Teachers on Psychology, but most education consists of learning “where we may go to recover it.” In fact, said James, what distinguished lawyers from laymen was not so much the information stored in their heads, but the ability to locate it externally in a brief amount of time. Sure enough, new behavioral science suggests that people are great at remembering where to access information on the computer, even when the fact itself eludes them.
In short, William James knew yesterday what a growing body of psychological research continues to reveal today: that technology can change our brains, and with it, our behavior.
Inside the iBrain
Since the days of William James, neuroscientists have confirmed the existence of neural plasticity. The very act of processing external stimuli adjusts our internal circuitry. Because these adjustments increase with exposure, and because we’re exposed to the Internet each passing day, digital technology stands to impact cognition unlike any other “outward agents” that have come before it. “One thing is very clear,” writes Nicholas Carr in The Shallows: What the Internet Is Doing to Our Brains: “if, knowing what we know today about the brain’s plasticity, you were to set out to invent a medium that would rewire our mental circuits as quickly and thoroughly as possible, you would probably end up designing something that looks and works a lot like the Internet.”
To see just how our wires are rewiring us, a group of four neuroscientists at the University of California, Los Angeles (UCLA) recently recruited 24 people, ranging in age from 55 to 76, to undergo brain imaging while they engaged in an Internet task. Half of the participants were considered Net Naïve, meaning they went online just once or twice a week, and half were Net Savvy, meaning they went online at least once a day. All the participants had their brains scanned during two tests: a traditional reading condition, in which they read text presented in the format of a book, and an Internet condition, in which they performed a Web search and then read content displayed on a simulated Web page.
On the traditional reading task, the Naïve and Savvy groups demonstrated more or less the same brain activity, as one would expect. Each group used regions of the brain connected to language, memory, and of course reading. During the Web task, however, the two groups’ neural activity differed strikingly. When Naïve participants examined a Web page, they used the same brain regions as during traditional reading. When the Savvy group used the Internet, a number of additional brain regions were activated — including those linked to decision making and complex reasoning. The Savvy group demonstrated more than twice as much neural activity as the Naïve group: 21,782 voxels of the brain scan to 8,646, for those keeping score at home.
In other words, the experience of browsing and reading the Web makes brains better adapted to browsing and reading the Web. The study, which was published in a 2009 issue of the American Journal of Geriatric Psychiatry, is considered the first to observe brain functioning during Internet use. In a 2008 issue of Scientific American, the study’s lead author, Gary Small, reported that after just five days of Web training following the initial experiment, Naïve brains began to work like Savvy ones — suggesting that neural plasticity is remarkably swift.
“We develop a better ability to sift through large amounts of information rapidly and decide what’s important and what isn’t — our mental filters basically learn how to shift into overdrive,” wrote Small, who is also the author of iBrain: Surviving the Technological Alteration of the Modern Mind. “Initially the daily blitz that bombards us can create a form of attention deficit, but our brains are able to adapt in a way that promotes rapid processing.”
The results of the UCLA test suggest that using digital technology might keep a mind active and limber, particularly as a person grows older. Psychologists Patricia Tun and Margie Lachman of Brandeis University recently analyzed a national survey of computer use and cognition in more than 2,600 people aged 32 to 84. Participants in the survey completed a brief cognitive test and a brief task-switching test administered over the phone. Tun and Lachman found a strong association between cognitive performance and frequency of computer use. Even after the researchers controlled for factors like basic intelligence, regular computer users still demonstrated higher executive functioning on the task-switching test, Tun and Lachman reported in a 2010 issue of Psychology and Aging.
“Neuro-plasticity is across the lifespan,” says Tun. “What we’re doing is affecting our brains, and there’s a lot more opportunity for rewiring and making neural pathways than we used to believe. The kinds of things the computer engages can be particularly good exercise, you could say, for some of these abilities that start to fail with age.”
The Multi-Tasking Mind
As beneficial as computer use might be to the senescent mind, it may prove equally destructive to the developing one. While a 50-year-old is considered Net Savvy by going online every day, a young adult earns Internet stripes by handling all sorts of digital tasks at once: browsing, emailing, instant messaging, texting. Carr likens being online these days to “reading a book while doing a crossword puzzle.” In short, multi-tasking is the new norm.
A few years back, a group of psychologists at UCLA, led by Karin Foerde, designed an experiment to determine whether or not multi-tasking impairs learning. The researchers trained 14 participants to perform a single task — in this case, predicting the weather based on certain cues — and scanned their brains as they did it. To complicate matters, Foerde and company then asked participants to handle a secondary task at the same time: While continuing to forecast the weather, participants also heard a series of auditory tones and had to keep count of only the high-pitched ones.
Participants handled the multiple tasks successfully, but not without paying a cognitive price. While performing the weather task alone, participants used a region of the brain associated with declarative learning — a dynamic type of learning that enables a person to apply knowledge gained to other situations later on. When participants did both tasks at once, however, they activated a part of their brain linked with habit learning — a far less flexible form of learning that requires little attention or effort.
The results suggest that when we do two things at once, our brain conserves some strength by shutting down the advanced learning centers and reverting to the basic ones. In multi-tasking situations, “even if distraction does not decrease the overall level of learning, it can result in the acquisition of knowledge that can be applied less flexibly in new situations,” the authors conclude in a 2006 issue of Proceedings of the National Academy of Sciences. So the types of regular distractions we encounter in the digital age don’t make us learn less; they just make us learn worse. As William James knew, we can’t easily do more than one thing at once, “unless the processes are very habitual.”
Still, the possibility remained that as multi-tasking becomes routine the brain gets better at handling several things at once. As a follow-up to the work of Foerde’s team, a group of Stanford researchers that included psychologist Anthony Wagner recruited participants they identified as either heavy or light multi-taskers. They then administered a series of cognitive tests, each designed to measure some aspect of distractibility, to see which group handled the load better.
The results came as something of a surprise. Compared to light multi-taskers, the heavies did a worse job filtering out irrelevant distractions, had a harder time ignoring irrelevant memories, and took a longer time switching from one task to another. What made the findings more striking was the fact that the two groups performed the same on tasks without any distractions. On the whole, the findings suggest that heavy multi-taskers “may be sacrificing performance on the primary task to let in other sources of information,” Wagner and colleagues reported in a 2009 issue of Proceedings of the National Academy of Sciences.
The findings don’t bode well for the wired generation. The barrage of new media distractions is “placing new demands on cognitive processing, and especially on attention allocation,” the researchers write. While cause-and-effect is difficult to parse here, in some sense it doesn’t matter. If all this digital media is causing people to multi-task more frequently, then their learning ability will suffer. But if only certain people are attracted to the heavy multi-tasking lifestyle, then those people will still have a hard time coping in an environment that’s only poised to get more distracting with time.
Trouble with attention in the lab is one thing. Trouble in the classroom is quite another. To investigate whether or not the problem transferred to a realistic setting, a research team that included psychologists Laura Bowman, Laura Levine, and Bradley Waite of Central Connecticut State University recently asked a group of 89 students to read a lengthy textbook passage on a computer. Some of the students simply read the text; others responded to instant messages before reading the passage; a third group was interrupted by an occasional instant message. All three groups were given a test of the material once they finished their reading.
The results were almost exactly as Foerde’s brain imaging study would predict. While all three groups achieved similar scores on the test, the group that responded to instant messages while reading took significantly longer to finish the passage. (The researchers suspect the students made up for the distractions by re-reading passages that were interrupted by the instant messages.) Even when the time it took to read and respond to the message was subtracted from the total, these students spent 22 to 59 percent more time reading than the other groups did, Bowman and colleagues report in a 2010 issue of the journal Computers & Education. Students might think they’re saving time by being online while studying; in fact, they’re making their own lives harder.
“I don’t think the majority of students, on their own, will recognize that multitasking slows their productivity,” says Bowman. “Since we only have so much time in the day, I’d predict that devotion to studying, homework, and academic activities will be short-changed. […] This means that the academic activity will receive less focused time, resulting in perhaps more cursory processing of the information, or more shoddy outcomes.”
However much the distraction of the Internet may interrupt the learning process, it also stands to aid our access to knowledge. When it came to learning information, after all, William James made little distinction between knowing a fact by memory and simply knowing where to find it. Not everyone shares that belief; in Plato’s Phaedrus, Socrates believes the invention of another communication technology — in this case, writing — will cause people to “trust to the external written characters and not remember of themselves.” But some wise company certainly does: “Never memorize what you can look up in books,” said none other than Einstein, in 1922.
Whether you side with Socrates or Einstein, it’s hard to deny that the Internet will have some impact on our memories. Search engines like Google and information warehouses like Wikipedia promise to turn the Web into something like a personal external hard drive for us all. In the same way that social groups form what psychologists call “transactive memory” — a collective store of information that anyone in the group can access — the Internet might carry around much of the knowledge we might otherwise have stored ourselves.
Psychologists Betsy Sparrow of Columbia, Jenny Liu of the University of Wisconsin-Madison, and Daniel Wegner of Harvard recently designed a series of four experiments to study what access to search engines may be doing to our memories. In the first test, they found that difficult trivia questions — ones that might typically send us racing toward Google — caused people to think of computers. After considering the tough question, participants took longer to name the color of a computer-related word than they did to name a general word during a Stroop task. The test suggests that when “faced with a gap in our knowledge, we are primed to turn to the computer to rectify the situation,” the authors wrote in a report published online in Science last July.
In two subsequent experiments, participants demonstrated their reliance on computer memory more directly. For one test, Sparrow and colleagues had participants enter 40 facts into a computer and then asked them to recall the information later. Those who had been led to believe the computer would save their entries recalled significantly fewer facts than those who thought the computer would delete them. For the other test, the researchers repeated the entry task but divided the participants into three groups. Some believed the computer would save the entry; some saw the exact name of the folder where the entry would be saved; and some believed the entry had been erased. Once again, the participants who recalled the most facts were those who believed the information would be deleted. “When we need it,” the researchers wrote, “we will look it up.”
In a final experiment, all participants typed trivia into a computer and were told it would be saved in a specific folder. Afterward, the researchers asked them to recall as many facts as they could, as well as the folder where each fact had been saved. True to William James’s prediction, participants remembered where the information had been stored better than the information itself — a “remarkable” finding, the authors write, considering that folder names had been displayed without any particular fanfare.
“The accessibility of external memory is much more extensive than it ever has been in the past,” says Sparrow, pointing to the ubiquity of smart phones, tablets, laptops, and the like. “I think it makes a lot of sense to offload a lot of the memorization component, if we can.”
So did William James — go online and look it up.