There’s No Ghost in the Machine: How AI Changes Our Views of Ourselves

Aimed at integrating cutting-edge psychological science into the classroom, the Teaching Current Directions in Psychological Science column offers advice and how-to guidance about teaching a particular area of research or topic in psychological science that has been the focus of an article in the APS journal Current Directions in Psychological Science.



Bender, E. M. (2024). Resisting dehumanization in the age of “AI”. Current Directions in Psychological Science, 0(0). https://doi.org/10.1177/09637214231217286

Have you ever wondered whether someone you were communicating with was real or an artificial intelligence (AI)? ChatGPT’s launch in November 2022 threw higher education into a tizzy. Naysayers bemoaned how students would use AI to cheat. Many faculty kicked off the new year by modifying their assignments to make them AI-proof, a Sisyphean task given that ChatGPT evolved by the week. Even those optimistic about generative AI’s potential to improve learning sometimes saw it as the spectral rise of the machine.

Amid the excitement, the concern, and the rush to let AI make work easier, many people have missed that the advent of AI involves dehumanization (Bender, 2024). Large language models such as ChatGPT are trained to respond with sequences of words when fed sequences of words. The output makes sense. The output even seems to come from a sentient being. But in reality, these models are “stochastic parrots,” a term coined to capture how they stitch together words according to patterns in their training data, without any reference to meaning (Bender et al., 2021).
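For instructors who want to make the stochastic-parrot idea concrete in class, a toy sketch like the one below can help. To be clear, this is not how ChatGPT actually works; large language models use neural networks trained on vast corpora, and the tiny corpus and function names here are invented purely for illustration. The sketch does show the core point, though: fluent-looking text can be generated from word-sequence statistics alone, with no reference to meaning.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learn which word tends to follow which,
# then generate text by sampling from those observed continuations.
# The model tracks word sequences only -- it has no notion of meaning.

# A tiny made-up corpus, purely for classroom illustration.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Record every word that was observed to follow each word.
follows = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    follows[word].append(next_word)

def parrot(start, length=10):
    """Stitch together words by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation; stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))  # e.g., "the dog sat on the mat . the cat chased"
```

Students can run the sketch, notice how plausible the output sounds, and then discuss why fluency does not imply understanding.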

Because AI seems human, it paradoxically makes actual humans seem less so. Bender (2024) outlines six ways AI contributes to dehumanization, providing fodder for a variety of classes and for discussions of racism, sexism, White privilege, transphobia, emotion, and many other topics. For example, Bender discusses how metaphors comparing the brain to a computer can be reversed to view the computer as a brain. Giving AI human-like qualities ends up making the rational computer seem better than the emotional human.

There is a lot to be wary of when using AI. Bender (2024) nicely alerts readers to issues such as digital physiognomy: the use of AI to attempt to predict sexual orientation or political affiliation from photos, voice samples, or videos. The author also describes the shortcomings of the data used to train most AIs and how AI programming reinforces a White worldview (e.g., AI voice assistants speak like White people do).

The student activities described here will help students critically analyze claims about AI’s capabilities while becoming more familiar with some of its problems.

Student Activities


Reference 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

