Cover Story

Robots for Research

From R2-D2 to Astro Boy to WALL-E, science fiction is riddled with diminutive, scrappy robots and androids that serve as sidekicks, assistants, and even heroes. But in the real world, childlike robots are increasingly at the center of a unique form of integrative science, bringing together experts in robotics, neuroscience, linguistics, and child development.

These collaborations are proving highly symbiotic. Developmental psychologists are able to build and test new theories simply by manipulating the neural pathways of robots; meanwhile, roboticists are tapping into child development research to create robots that can learn and make decisions. The result is the field of developmental robotics, also known as epigenetic robotics.

“Human infants and toddlers are just about the smartest learning devices known,” says APS Fellow Linda B. Smith, who has incorporated robotics into some of her recent research. “They can adapt to any culture, learn any language, become makers and builders — think [of] blocks!”

Developmental robotics aims to understand, and then exploit, what we know about human infants to build robots that, like those infants, aren’t programmed to perform specific tasks but can learn and adapt to their surroundings.

The iCub robot (photo credit: IIT).

Developmental robotics research is flourishing particularly in Europe, and among its leading figures is psychological scientist Tony Prescott of the University of Sheffield, United Kingdom. Prescott directs Sheffield Robotics, a multidisciplinary research group.

Sheffield is among a small group of universities worldwide experimenting with iCub, an open-source humanoid robot test bed created by the Istituto Italiano di Tecnologia in Genova, Italy, for research in human cognition and artificial intelligence. Standing about as tall as an average 4- to 5-year-old child, iCub robots cost as much as $266,000 each and have been used to reproduce and study cognitive learning processes that a small child would employ. There are about 30 iCub units in laboratories across the European Union (EU), Japan, Korea, Turkey, and the United States, and much of the research is funded by the EU.

Research on robot cognition has even landed many psychological scientists on the faculties of computer science and mathematics departments, as well as in robotics labs. Among them, in addition to Prescott, is Angelo Cangelosi, a professor of artificial intelligence and cognition in the School of Computing, Electronics, and Mathematics at Plymouth University, United Kingdom. Cangelosi coauthored the book Developmental Robotics: From Babies to Robots with cognitive psychologist Matthew Schlesinger of Southern Illinois University Carbondale.

Others, like Javier R. Movellan, who founded the Machine Perception Laboratory at the University of California, San Diego (UCSD), Institute for Neural Computation, have taken their work into the private sector. In 2014, Movellan and two other UCSD scientists turned their emotion-reading artificial intelligence technology into a startup company, Emotient, Inc. Earlier this year, Apple, Inc., acquired Emotient for an undisclosed price, hiring Movellan and his colleagues in the process.

So how exactly do robotics and developmental research intersect?

Counting With Fingers

European researchers have conducted numerous studies using iCub to explore the cognitive roots of everything from language skills to abstract thinking. One team, for example, used iCub technology to study the acquisition of numerical skills. The team — which included psychological scientists Vivian M. De La Cruz and Santo Di Nuovo of the University of Catania, Italy, as well as Cangelosi and Alessandro Di Nuovo (now at Sheffield Robotics) — conducted trials to see whether number learning could be reproduced in a robot. The scientists tapped into the robot’s artificial neural networks, which enable it to learn both sensorimotor and linguistic skills, and tested different ways of training number knowledge.
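The computational details of those trials, described further below, are beyond the scope of a magazine article, but the core idea of grounding number words in finger postures can be caricatured in a few lines of code. The Python sketch that follows is purely illustrative; the nearest-prototype matching scheme and every name in it are assumptions made for this article, not the team’s actual neural-network architecture.

```python
import numpy as np

N = 10  # the robot learns the number words "one" through "ten"

def fingers(n: int) -> np.ndarray:
    """Proprioceptive code: the first n of 10 'fingers' raised."""
    v = np.zeros(N)
    v[:n] = 1.0
    return v

# Training: each time the system 'hears' the n-th number word, the word is
# stored alongside the finger posture that accompanied it.
memory = {n: fingers(n) for n in range(1, N + 1)}

def name_for(posture: np.ndarray) -> int:
    """Retrieve the number word whose stored posture best matches (cosine)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(memory, key=lambda n: cos(posture, memory[n]))

def add_with_fingers(a: int, b: int) -> int:
    """Raise a fingers, then b more, and read the word for the whole gesture."""
    combined = np.minimum(fingers(a) + np.roll(fingers(b), a), 1.0)
    return name_for(combined)

print(add_with_fingers(2, 3))  # -> 5
```

Even this toy version captures the central claim: the robot’s “knowledge” of five is carried by a bodily configuration rather than by an abstract symbol.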

APS Fellow Linda B. Smith used robot technology to demonstrate the role that bodily position plays in memorizing words or the names of objects. Smith will deliver a keynote address at the upcoming International Convention of Psychological Science in Vienna, Austria.

The researchers input an audio file of a child’s voice saying the words one through ten into the robot’s cognitive architecture. They found that when the robot’s cognitive system repeatedly heard number words or tags while it moved its fingers, it developed finger and word representations that subsequently helped it learn simple addition. In another study, the researchers found that just learning the number words out of numerical sequence wasn’t as effective in helping the robot develop number knowledge.

Smith employed iCub to study the way “objects of cognition,” such as words or memories of physical objects, are tied to the position of the body. She collaborated with Anthony Morse, a robotics researcher at Plymouth University, United Kingdom, and developmental psychologist Viridiana L. Benitez of the University of Wisconsin-Madison. In a series of experiments, the scientists applied Smith’s research on how children learn through interaction with their environment to replicate the way cognitive processes emerge from the physical constraints and capacities of the body.

In one experiment, a robot was first shown an object situated to its left, then a different object to its right. That process was repeated several times to create an association between the objects and the robot’s two postures. Then, with no objects in place, the robot’s view was directed to the location of the object on the left, and the robot was given a command that elicited the same posture it had adopted when viewing that object.

Next, the two objects were presented in the same locations without being named. And finally, the objects were presented in different locations as their names were repeated, at which point the robot turned and reached toward the object now associated with the name. In fact, the robot consistently indicated a connection between the object and its name during 20 repetitions of the experiment.

But in subsequent tests in which the target and another object (a foil) were placed in both locations (right and left) — so as not to be associated with a specific posture — the robot failed to recognize and reach toward the target object.

“The robot failed because the target and foil were not associated with distinct postures, providing no way to map internal representations of the target, rather than the foil, to the name,” the authors explained.
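The logic of that failure is easy to make concrete with a toy model in which body posture is the only bridge between names and objects. The Python sketch below is a deliberate simplification; the dictionary scheme, the function names, and the novel word “modi” are illustrative assumptions of this article, not the neural architecture used in the study.

```python
# A toy 'posture hub': names and objects are never linked directly; each is
# tied to the body posture that was active when it occurred.

posture_to_objects = {}   # posture -> set of objects encountered in it
word_to_posture = {}      # spoken word -> posture active when it was heard

def see(posture: str, obj: str) -> None:
    """Record that `obj` was viewed while the body held `posture`."""
    posture_to_objects.setdefault(posture, set()).add(obj)

def hear(word: str, posture: str) -> None:
    """Record the posture the body held when `word` was spoken."""
    word_to_posture[word] = posture

def lookup(word: str):
    """Map a name to an object via posture; fail if the posture is ambiguous."""
    candidates = posture_to_objects[word_to_posture[word]]
    return next(iter(candidates)) if len(candidates) == 1 else None

# Training as in the experiment: the target is always seen on the left, and
# the name is later heard while the robot re-adopts the 'left' posture.
see("left", "target"); see("right", "foil")
hear("modi", "left")
print(lookup("modi"))   # -> 'target'

# Foil condition: both objects appear in both locations, so posture no longer
# singles out the target and the name retrieves nothing.
see("left", "foil"); see("right", "target")
print(lookup("modi"))   # -> None
```

When a posture uniquely picks out one object, the name retrieves it; when target and foil have appeared in both postures, the lookup has nothing to discriminate on, mirroring the robot’s failure.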

When the study was replicated with babies ages 12 to 18 months, there were only slight differences in the results: The infant data, like the robot’s data, implicated the role of posture in connecting names with objects.

“A number of studies suggest that memory is tightly tied to the location of an object,” Smith said. “None, however, have shown that bodily position plays a role or that, if you shift your body, you could forget.”

Smiling Games

iCub isn’t the only robot in town. Movellan and his colleagues have used Diego-San, a sophisticated robot with a childlike face, to study the interactions of infants and their mothers, with a specific focus on smiling.

In a study with computer scientist Paul Ruvolo of Olin College of Engineering in Massachusetts and psychological scientist Daniel S. Messinger of the University of Miami, Movellan set out to learn why mothers and infants time their smiles the way they do.

The researchers first studied weekly face-to-face interactions of 13 mother–infant pairs. They found that babies didn’t just smile in response to their mothers’ grins; they also did it to get their mothers to smile back, and used rather sophisticated timing strategies to do so. They appeared to smile only long enough to keep their mothers smiling — in other words, to maximize the amount of time they were being smiled at but not smiling themselves.

To validate the results, Ruvolo and his colleagues used Diego-San — programmed to behave like the babies from the earlier experiment — to play smiling games with 32 undergraduate students. In each interaction, the “robobaby” tracked the participant’s face using a combination of eye and head movements. And just like the infants, Diego-San maximized the number of smiles it could elicit from the participants while minimizing the number of smiles it had to make itself.
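A stripped-down simulation suggests why brief, well-timed smiles are such an efficient strategy. In the hypothetical Python sketch below, a partner keeps smiling for a short “afterglow” period after the baby stops; the afterglow length, the burst-and-pause schedules, and all names are invented for illustration, whereas the actual study fit far richer control-theoretic models to real mother–infant data.

```python
STEPS = 1000     # simulated time steps
AFTERGLOW = 8    # partner keeps smiling this many steps after baby stops (assumed)

def simulate(burst: int, pause: int):
    """Baby smiles for `burst` steps, rests for `pause` steps, repeatedly."""
    partner_timer = 0
    partner_time = baby_time = 0
    for t in range(STEPS):
        baby_smiling = (t % (burst + pause)) < burst
        if baby_smiling:
            partner_timer = AFTERGLOW   # a smile 'recharges' the partner
        partner_smiling = partner_timer > 0
        partner_timer = max(0, partner_timer - 1)
        partner_time += partner_smiling
        baby_time += baby_smiling
    return partner_time / STEPS, baby_time / STEPS

for burst, pause in [(2, 6), (8, 0), (1, 20)]:
    p, b = simulate(burst, pause)
    print(f"burst={burst}, pause={pause}: partner {p:.0%}, baby {b:.0%}")
```

Short bursts spaced to land just inside the afterglow keep the partner smiling almost continuously at a fraction of the baby’s own smiling effort, which is the pattern both the infants and Diego-San displayed.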

The researchers say the results provide new insights into the development of typical and atypical social behavior. The same technique, for example, could be applied to analyze the interactive behavior of children who are at high risk for developing autism spectrum disorders.

Babybots

So deep are the collaborations between developmental psychologists and roboticists that they hold an annual conference under the auspices of the Institute of Electrical and Electronics Engineers (IEEE). This year’s Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics was held September 19 to 22 in Paris, France, with keynote speakers including Prescott; psychological scientist Julie Grezes of the Cognitive Neuroscience Laboratory in Paris; and APS Fellow Daniela M. Corbetta, a developmental researcher at the University of Tennessee, Knoxville.

A highlight of this event is the Babybot Challenge, in which participants select from a list of three infant studies and design a robotic model that captures infants’ performance on the chosen task. This year’s challenge centered on Corbetta’s research on infant reaching — how babies learn to bring their arms into contact with a desired object, such as a toy. Using eye-tracking technology, Corbetta has shown that infants often rely on their sense of body position and movement, rather than on vision, to begin controlling and directing their reach. In the Babybot Challenge, roboticists proposed ways to implement and test that embodied cognitive process in a humanoid robot.
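To see the distinction in miniature, imagine a controller that servos an arm toward a remembered joint configuration without ever consulting vision inside the loop. The Python sketch below is a hypothetical illustration; the two-link arm, its link lengths, the gain, and the function names are assumptions of this article, not Corbetta’s model or any challenge entry.

```python
import numpy as np

UPPER, FORE = 0.3, 0.25   # link lengths in meters (assumed)

def hand_position(q: np.ndarray) -> np.ndarray:
    """Forward kinematics of a planar two-link arm (shoulder, elbow angles)."""
    x = UPPER * np.cos(q[0]) + FORE * np.cos(q[0] + q[1])
    y = UPPER * np.sin(q[0]) + FORE * np.sin(q[0] + q[1])
    return np.array([x, y])

q = np.array([0.1, 0.2])        # current felt joint angles (radians)
q_goal = np.array([0.9, 0.7])   # remembered posture that once touched the toy

# Control loop: proportional steps in joint space; vision is never consulted.
for _ in range(200):
    q = q + 0.05 * (q_goal - q)

print("remaining joint error:", np.round(q_goal - q, 4))  # ~0
print("hand arrives near:", np.round(hand_position(q), 3))
```

Because the goal is stored as felt joint angles, the hand ends up where the toy once was even with the “eyes” closed, the kind of proprioception-led control that Corbetta’s eye-tracking data point to.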

Funding agencies are showing considerable interest in these collaborations. Last year, Smith and her Indiana University colleague Chen Yu, along with computer scientists from the Georgia Institute of Technology, received $700,000 from the National Science Foundation (NSF) for research focused on how children learn to recognize discrete categories of objects. The Georgia Tech researchers will use the data to design machine-learning models that mimic toddlers’ ability to recognize objects. This ultimately could generate new, sophisticated digital object-recognition technology, the scientists say. NSF has also awarded Corbetta and researchers in the University of Tennessee’s Department of Electrical Engineering and Computer Science a $447,000 grant to acquire a robot and eye-tracking equipment for studies on grasping.

While these research initiatives may conjure up visions of robots that learn at a pace on par with (or even faster than) humans, they also could expand our understanding of cognitive development in the earliest months and years of life, as Smith notes.

“I firmly believe that field-changing insights — for both fields — will result from these collaborations,” said Smith. “For developmental psychology, the promise is both better theory and new ways to test theories by manipulating the pathways and experiences using robots.”

Linda B. Smith will be a keynote speaker at the 2017 International Convention of Psychological Science, to be held March 23–25 in Vienna, Austria.

References and Further Reading

Cangelosi, A., & Schlesinger, M. (2015). Developmental robotics: From babies to robots. Cambridge, MA: MIT Press.

Corbetta, D., Thurman, S. L., Wiener, R. F., Guan, Y., & Williams, J. L. (2014). Mapping the feel of the arm with the sight of the object: On the embodied origins of infant reaching. Frontiers in Psychology, 5, 576. doi:10.3389/fpsyg.2014.00576

De La Cruz, V. M., Di Nuovo, A., Di Nuovo, S., & Cangelosi, A. (2014). Making fingers and words count in a cognitive robot. Frontiers in Behavioral Neuroscience, 8, 13. doi:10.3389/fnbeh.2014.00013

Metta, G., Natale, L., Nori, F., Sandini, G., Vernon, D., Fadiga, L., … Montesano, L. (2010). The iCub humanoid robot: An open-systems platform for research in cognitive development. Neural Networks, 23, 1125–1134.

Morse, A. F., Benitez, V. L., Belpaeme, T., Cangelosi, A., & Smith, L. B. (2015). Posture affects how robots and infants map words to objects. PLoS ONE, 10(3), e0116012. doi:10.1371/journal.pone.0116012

Ruvolo, P., Messinger, D., & Movellan, J. (2015). Infants time their smiles to make their moms smile. PLoS ONE, 10, e0136492. doi:10.1371/journal.pone.0136492

