Developmental scientists typically study the behavior of children through the lens of adult experience. But APS Fellow Linda B. Smith has taken a new approach to this line of inquiry by trying to see the world through children’s own eyes.
Smith, an Indiana University Bloomington (IU) psychological scientist renowned for her studies on the development of language and object recognition in infants and young children, was a keynote speaker at the 2017 International Convention of Psychological Science in Vienna. Her speech, “How Infants Break Into Language,” focused on the intersection of object identification and linguistic learning in children between the ages of 3 weeks and 24 months.
Smith has steadily pursued new ways of examining the infant brain and body, especially as they relate to language acquisition and object cognition. Her current line of research explores the role of environment in young children’s growth, with special focus on pivotal developmental periods and the mechanisms of change that play crucial roles during them.
“We do not yet have a theory or a computational understanding of the implications of the ordered sequence of experiences that babies create for themselves,” Smith said, noting that babies have simultaneously evolving developmental systems. “What the brain does determines what the body does, and what the body does changes the environment … these changes that we make in the world come back to the brain through the body.”
Smith has made it a priority to analyze the dynamics of the interactions among a child’s brain, body, and surrounding environment. These interactions, she says, can have tremendous effects on how children learn to speak and to identify specific items in their fields of view, and studying them sheds light on the developmental pathways of both language learning and object learning. To that end, she has conducted several studies in which babies wear head cameras. The Home-View Project, an initiative developed with support from the National Science Foundation, thus far has gathered data from 75 children ranging in age from 3 weeks to 24 months, with 4 to 6 hours of head-camera video recordings for each child.
The Eyes Have It
A general rule of thumb when studying sensorimotor systems, Smith said, is that “when you have people moving in the world — be they babies or be they adults — they tend to view the world with heads and eyes aligned.” That is, when we see something we truly want to focus on (rather than just glance at), we turn our entire head in the direction of the object; this movement, Smith explained, takes approximately 500 ms. However, children at different ages go about this process in different ways. Three-week-old infants can see only what is held in front of them and therefore focus their gaze directly ahead, while 1-year-old toddlers are “driving new kinds of flow and optic information, and when that movement starts … that actually is driving very important changes in the visual system.”
A baby’s increased ability to move its head (and subsequently its entire body) results in a correspondingly increased visual field, Smith noted. Data from head cameras attached to infants showed that they viewed faces 15 minutes out of every hour — an extremely high proportion of the time they were awake. They also saw those faces at close ranges of approximately 2 ft., likely because parents were leaning in quite closely to look at their children at that age.
One-year-old children, however, saw faces only 6 minutes per hour and also viewed them from farther away, instead focusing more of their attention on hands and objects.
“It’s faces that decline with age, not people in view,” she explained. “When a 2-year-old is looking at somebody’s body in the natural viewing, it’s unlikely to be a face, but when a 3-month-old has a body in the view, it’s likely to be a face.”
This creates a systematic shift in which body part is most salient to a child’s physical learning experience, and it matters because of what hands afford at that age: Whether a child is an infant or a toddler, 70% of the time they spend looking at hands, those hands are holding objects.
Playing With Perception
To zoom in on this critical developmental period, Smith, in collaboration with her colleague APS Fellow Chen Yu, conducted a multisensory project that used head cameras (or head-mounted eye trackers for infants), motion sensors, audio recordings, and multiple room cameras. In the larger project, they have now recruited nearly 200 children from 9 to 36 months of age, as well as one parent of each child. By closely examining the interactions that arose during parent–infant play, specifically those involving objects, Smith hoped to glean insights into the ways children learn about language and object identification at different ages.
They found that in the first 2 years of a child’s life, objects came into and out of view rapidly and that one object usually was much closer to the child’s eyes than were others (suggesting that the parent had perhaps held it in front of the child’s face). Equally important, the parent often named the object that was largest in the child’s view. For Smith, this raised the question: “Is this type of play an optimal moment for learning object names?”
Smith and Yu gave the parents six objects with specific names to use while playing with their children. The parents were not told to teach the children the names of the items, nor were they told the children would be tested after play (this ensured that the parents would not intentionally try to turn the session into a lesson). After the 1.5-minute session, the researchers measured the children’s knowledge of each object twice by presenting the child with three options and asking them to choose one. If a child chose the right object, Smith and her colleagues reexamined the dynamics of the play session, reviewing material from 10 seconds before each naming event to 10 seconds after it.
They found that successful object recognition occurred when an object was physically close to, and centered on, a baby’s face. “Toddlers learn object names when the referent is visually salient, bigger in the view, [and] more centered than the competitors,” Smith explained. “This is a direct consequence … of how toddlers’ bodies work.”
In addition, the experimenters discovered that naming moments were likely to happen when babies were holding the object themselves and when their heads were stable (i.e., focused on the object). “What all this means is that in the toddler, visual attention and learning involve the whole sensorimotor system,” Smith said. “It emerges in the real-time coupling and self-organization of head, eyes, and hands. At this point, learning object names is about the coordinated focus of eye, head, and hands, the stabilized visual attention that [coordination] brings about, and the reduction of visual competition that holding an object brings about.”
Smith is encouraged by these consistent findings and believes they could be relevant for researchers seeking to delve more deeply into the intersection of language learning and object cognition in young children. She explained that each age provides novel insights into this process: When children are 18 months old, they learn things completely differently than they did when they were younger (e.g., they are coordinated enough to grasp and hold objects, thereby encouraging parents to name them).
“Development also brings the accomplishments of the past forward,” Smith concluded. “What happened earlier will shape what happens later.”