Artificial Intelligence Systems Excel at Imitation, but Not Innovation 

Artificial intelligence (AI) systems are often depicted as sentient agents poised to overshadow the human mind. But AI lacks the crucial human ability of innovation, according to findings published in Perspectives on Psychological Science. 

While children and adults alike can solve problems by finding novel uses for everyday objects, AI systems often lack the ability to view tools in a new way, researchers at the University of California, Berkeley, concluded. 

AI language models like ChatGPT are passively trained on data sets containing billions of words and images produced by humans. This allows AI systems to function as a “cultural technology,” similar to writing, that can summarize existing knowledge, Eunice Yiu, a co-author of the article, explained in an interview. But unlike humans, these models struggle when it comes to innovating on those ideas, she said. 

“Even young human children can produce intelligent responses to certain questions that [language learning models] cannot,” Yiu said. “Instead of viewing these AI systems as intelligent agents like ourselves, we can think of them as a new form of library or search engine. They effectively summarize and communicate the existing culture and knowledge base to us.” 

As part of their Perspectives article, Yiu and Eliza Kosoy, along with their doctoral advisor and senior author on the paper, APS Immediate Past President Alison Gopnik, tested how the AI systems’ ability to imitate and innovate differs from that of children and adults. 


To do so, the researchers presented 42 children (ages 3 to 7) and 30 adults with text descriptions of everyday objects. In the first part of the experiment, 88% of children and 84% of adults were able to correctly identify which objects would “go best” together. For example, they paired a compass with a ruler instead of a teapot.  

In the next stage of the experiment, 85% of children and 95% of adults were also able to innovate on the expected use of everyday objects to solve problems. In one task, for example, participants were asked how they could draw a circle without using a typical tool such as a compass. Given the choice between a similar tool like a ruler, a dissimilar tool such as a teapot with a round bottom, and an irrelevant tool such as a stove, the majority of participants chose the teapot, a conceptually dissimilar tool that could nonetheless fulfill the same function as the compass by allowing them to trace the shape of a circle. 


When Yiu and colleagues provided the same text descriptions to five large language models, the models performed similarly to humans on the imitation task, with scores ranging from 59% for the worst-performing model to 83% for the best-performing model. The AIs’ answers to the innovation task were far less accurate, however. Effective tools were selected anywhere from 8% of the time by the worst-performing model to 75% by the best-performing model. 

“Children can imagine completely novel uses for objects that they have not witnessed or heard of before, such as using the bottom of a teapot to draw a circle,” Yiu said. “Large models have a much harder time generating such responses.” 

In a related experiment, the researchers noted, children were able to discover how a new machine worked just by experimenting and exploring. But when the researchers gave several large language models text descriptions of the evidence that the children produced, the models struggled to make the same inferences, likely because the answers were not explicitly included in their training data, Yiu and colleagues wrote. 

These experiments demonstrate that AI’s reliance on statistically predicting linguistic patterns is not enough to discover new information about the world, Yiu and colleagues wrote. 

“AI can help transmit information that is already known, but it is not an innovator,” Yiu said. “These models can summarize conventional wisdom but they cannot expand, create, change, abandon, evaluate, and improve on conventional wisdom in the way a young human can.” The development of AI is still in its early days, though, and much remains to be learned about how to expand the learning capacity of AI, Yiu said. Taking inspiration from children’s curious, active, and intrinsically motivated approach to learning could help researchers design new AI systems that are better prepared to explore the real world, she said. 


Reference 

Yiu, E., Kosoy, E., & Gopnik, A. (2023). Transmission versus truth, imitation versus innovation: What children can do that large language and language-and-vision models cannot (yet). Perspectives on Psychological Science. https://doi.org/10.1177/17456916231201401  

Comments

Thank you very much. For phenomenological/philosophical analyses of AI versus human intelligence, have a look at Thomas Fuchs’ writings:

Thomas Fuchs (2021): In defense of the human being. Oxford University Press

https://www.researchgate.net/publication/355460034_Human_and_Artificial_Intelligence_A_Clarification

https://www.researchgate.net/publication/361625257_Human_and_Artificial_Intelligence_A_Critical_Comparison

https://www.researchgate.net/publication/363919820_Understanding_Sophia_On_human_interaction_with_artificial_agents

