Whether and to what degree discoveries made in the lab generalize to the real world has been a long-standing debate among researchers of all stripes. New advances in technology and methodology are enabling psychological scientists to bridge this divide and bring the controlled assessment of the lab into the world at large. Five researchers working in a variety of areas came together at the 2017 International Convention of Psychological Science in Vienna to discuss the ways in which they balance and combine the control of the lab with the complexity of the real world.
Gesturing Toward Language
APS Past President Susan Goldin-Meadow of the University of Chicago uses a combination of lab-based and real-world environments to examine a key aspect of infant development: gesture and its relation to language acquisition. It may seem intuitive that children use gestures as a stand-in for words they don’t know or can’t yet say, but Goldin-Meadow has found the movements are more than that. Pointing gestures not only function like words in children’s speech, but may actually be part of the word-learning process.
Goldin-Meadow first found indications of this by comparing the spontaneous gesture production of typically developing 14-month-old children with their vocabulary at 54 months. In addition to a correlation between gesture and vocabulary, she found that the well-established positive association between socioeconomic status and child vocabulary size is partially mediated by gesture production at 14 months.
To go deeper into this relationship and investigate the possible causal role of gesture, experimenters manipulated gesture production during a series of experimental sessions in the children’s homes. Depending on the condition, experimenters used no gestures in their sessions, gestured but did not instruct the child to do so, or gestured and encouraged the child to do the same. Children who were told to gesture used more words in a follow-up assessment than did those who had only witnessed the experimenter gesture or who saw no gestures at all. They also produced more gestures with their parents outside of the experimental session. Because the experimenters manipulated gesture, the findings provide convincing evidence that gesturing can play a causal role in word learning.
In a series of lab-based studies (which will be followed up by neuroimaging studies), Goldin-Meadow also found that performing a gesture for an action (e.g., miming the turning of a knob) helps children generalize the associated word to other knob-turning situations better than simply turning the knob themselves does.
These observations about children’s use of gesturing and language in the real world, supported by laboratory testing, demonstrate how these two settings can work synergistically to provide us with new insights into development.
Just Moving to Move
APS Fellow Karen Adolph, a professor of psychology at New York University, aims to capture the complexity of infant learning beyond what has been observed in the lab. For example, technological advances such as head-mounted eye tracking for mobile infants have revealed that they don’t attend to their caregivers’ faces as much as previously believed — in one study, Adolph found that infants spent about 16% of the time looking at their parents and only 5% of the time focused specifically on parents’ faces.
Lab setups to study infant walking typically involve getting subjects to walk in a continuous, forward, straight path through a designated recording area. But infant ambulation in the real world is often far from a continuous, forward, straight path — babies stop and start, walk in every direction, and move in curves. These components could not be studied in the traditional lab-based paradigm, but a larger recording area with a pressure-sensitive floor has enabled researchers to track how babies walk freely around a room.
Technological advances have allowed Adolph to investigate not just how babies walk, but why. Researchers typically theorized that young children walk to reach a destination they can see but cannot reach from their current position. Using the same eye-tracking technology as the previous experiment, Adolph found that these destination-based bouts of walking account for only about 18% of toddlers’ movement. Sometimes toddlers look toward one destination but then walk to another in what Adolph terms “discovery bouts,” which account for about 10% of their walking trips. Babies, it turns out, are largely wanderers; most bouts have no destination at all.
“They are just moving to move,” says Adolph.
While lab tasks have the advantage of being well-controlled, Adolph’s findings reveal how they fail to capture the full picture.
“The cost of over-simplifying behaviors is that we lose sight of the phenomenon that we want to study,” she said. “In developmental psychology, over-reliance on laboratory tasks has led us to develop erroneous and superfluous theories about infant development.”
Adolph hopes to correct this trend moving forward.
Attention in Detail
To draw clear conclusions about the complex construct of attention, lab-based studies have typically separated out two main aspects of attentional shifting. These shifts can be driven by endogenous signals, which are voluntary and strategic (e.g., searching for a specific target), or exogenous signals, where attention is automatically shifted in response to an external stimulus — a more reflexive reaction.
Voluntary attentional shifting has been linked to the dorsal frontoparietal (dFP) region of the brain, while the ventral frontoparietal (vFP) area has been thought to primarily mediate reflexive shifts, though more recent evidence suggests the two systems may also interact and overlap with each other.
The lab studies that have provided these insights, however, use primarily stereotyped paradigms and simple, repeated stimuli, a far cry from the cohesive, complex visuospatial landscape we encounter in our everyday lives. Recent technological advances in eye tracking, saliency maps, and imaging methods have enabled researchers like Emiliano Macaluso, a professor at the Lyon Neuroscience Research Center, to study attention in a more naturalistic setting, where stimuli are more numerous and dynamic and where there is not always an explicit task to be done or goal to be achieved.
Macaluso has used dynamic visual environments, including first-person-perspective videos and virtual-environment setups, to examine how attentional shifting is mediated in the brain under different conditions and to compare the responses of healthy subjects with those of subjects with lesions in the vFP region. Throughout these studies, he has also mapped and compared brain activation in individuals viewing task-relevant objects and objects relevant only to a previous task, as well as when shifting attention toward salient stimuli. These activation patterns were largely segregated between the ventral and dorsal networks, but they also revealed greater nuance in how the brain controls attention: the vFP was found to play a role in orienting attention toward task-relevant objects, and lesions in the ventral region interfered with this processing. Conversely, the dFP was activated during orienting toward salient events and locations.
Macaluso hopes to continue using these naturalistic settings to gain even greater insight into how the brain mediates the processes of attentional control.
Language Richness and Reward
Language and communication do not exist in a vacuum; they are constrained both by lower-level sensorimotor processes and by higher-level social factors. Rick Dale, a cognitive scientist at the University of California, Los Angeles, has examined these dynamics in the lab. In one experiment, he and collaborators tracked the eye movements of two people looking at a shared screen while one spoke and the other listened. They found that when speakers’ and listeners’ eye movements were more closely aligned, the listener performed better on a subsequent comprehension test.
“Language and communication are actively entangled in these perceptual motor processes,” Dale said.
Thanks to the internet, large natural datasets are more readily available than ever, allowing Dale to investigate the social side of these constraints in the real world. Using the Yelp dataset, a natural corpus of reviews and tips written by hundreds of thousands of users about more than 60,000 businesses, Dale was able to study the relationship between social connectedness and various facets of the language of these reviews. He found a pattern of “community innovation,” in which more interconnected networks of users tended to use richer language than did less connected ones, indicating there may be a social incentive to use more lexically rich language.
To examine these phenomena in more controlled environments, Dale and his student recruited internet-based and lab participants to produce language by typing for an extended period. Participants faced simple, familiar tasks such as summarizing the plots of favorite movies or writing reviews (as in the Yelp dataset). Dale measured their typing speeds and textual richness, finding that participants typed low-richness text faster than highly rich text.
“There’s an incentive needed to communicate a richer message, and social incentive can induce a participant to invest more cognitive effort,” Dale explained. He noted that integrating new natural data with traditional psychological research can unveil causal linkages between the cognitive and the social. “What’s left is to link those typing dynamics to the social structure, so that indeed you can overcome the cognitive cost of typing richer text when there’s a potential social incentive that’s conditioned by the social network that one’s in.”
Navigating New Technology
Smartphones have changed not only the ways in which we communicate with each other, but also how we interact with the world. Yvonne Rogers, a professor of Interaction Design at University College London, focuses on questions about how technology affects people’s lives, how people behave when encountering a new technology, and how technology can be leveraged to engage communities and to inform new understandings of behavior.
One target of Rogers’ investigation has been the effect of using a GPS-enabled mobile device versus a paper map for navigation. Lab studies have shown that people using paper maps take less time to get to their destination, better remember the route, and have a better mental model of the area traversed.
Rogers wondered, however, what people were doing with all the cognitive resources freed up by mobile navigation, which trades cognitively taxing decision-making for lower-effort instruction-following. In a task in which subjects navigated a route through London with either a paper map or a GPS-enabled smartphone, she found that participants using paper maps looked at them more frequently, especially before critical turns, while those using smartphones checked them less often, and usually after critical turns. Subjects using smartphones recalled more street views and drew more detailed maps of the route they took. Paper-map users drew routes that were more spatially accurate but gave fewer subjective and emotional descriptions of their trip than smartphone users did.
Rogers said this suggests that smartphone users experience the environment differently and adopt different strategies for navigating.
“Our findings compared to the previous lab findings were quite different, but also more positive. What we’re saying is [smartphone navigation] is not all bad,” she said.
This research is illustrative of the many new avenues of study that technological advances have opened up.
“I think researchers have new opportunities to change what they do, and to ask different questions and to use different techniques and methodologies by which to explore them,” Rogers said.