
Artificial Intelligence: Your Thoughts and Concerns  

This image was generated with the assistance of DALL·E 2.

Investments in and applications of artificial intelligence have exploded, with more than half of organizations adopting it in at least one of their business units, according to McKinsey. Robotic process automation, computer vision, and natural-language text understanding are the most commonly deployed applications; cybersecurity is the most frequently cited risk, followed by regulatory compliance and privacy. What about AI within the field of psychological science? We asked APS members to identify what they see as the biggest opportunities and/or ethical challenges involving AI. Selected responses follow, excerpted and lightly edited for clarity and length. Our thanks to all who weighed in.


“…the temptation to confuse predictive power with explanatory power…”

“I think one of the biggest challenges facing the field is to avoid the temptation to confuse predictive power with explanatory power. With earlier generations of AI, it was easy to find cases where they could not replicate human performance, thereby highlighting the fundamental differences between how the AI model and a human accomplished the task. Modern data-driven AI models are much better at replicating human performance in some tasks (e.g., image categorization). Although it is still possible to find edge cases in which they make errors that are very different from the kinds of errors humans make (e.g., with ‘adversarial’ images), I suspect finding these cases will become harder and harder. As a result, I worry that AI models will come to be treated as theories of human perception and cognition because of their ability to predict human performance, without necessarily corresponding to the mechanisms by which humans achieve that performance.”
— Greg Cox


“…biased algorithms that are taken as objective…”

“Biggest opportunities: to use large amounts of data to organize, analyze, and understand complex phenomena. In my case, the possibility of using models that can be trained to quantify people’s own narratives and descriptions.

Challenges: to avoid biased algorithms that are then taken as objective but are in fact polluted by researchers’ unconscious manipulations.”
— Danilo Garcia


“Whether all this will ultimately lead society into a dystopian cyberpunk corporation-led hellscape remains to be seen.”

“The opportunities are vast. Unprecedented access to personal data, and powerful new tools for analyzing such data, allow researchers to ask questions that we could never ask before. But this is double-edged. Although many algorithms have become publicly and even freely available, the data they rely upon typically have not. Such data are far more available to private industry—and specifically, big technology corporations—than they are to academic researchers. If researchers want access to the most extensive and most complete data, they often need to partner with or perhaps even join such companies. When academic research partnerships have gone particularly well, academics have been seduced away from research for the public good into proprietary research careers. Whether all this will ultimately lead society into a dystopian cyberpunk corporation-led hellscape remains to be seen.”
— Richard Landers


“…detect hidden patterns and propose creative ways to analyze and visualize…”

“Among the many opportunities I see is the ability of AI to quickly process large amounts of different types of data (behavioral, physiological…), to detect hidden patterns in them, and to propose creative ways to analyze and visualize them. The possibility of relying on AI for psychological testing and data collection in the lab or on the Internet also sounds attractive. Possibly the most significant ethical challenges surrounding AI relate to legal rights/responsibilities and trust.”
— Elena Tsankova


“…access to important and life-changing services…”

“The biggest opportunity is for AI to provide access to important and life-changing services by reducing costs and providing personalized care.”
— Matthew Leitao


“…begin to bridge the explanatory gap between neuroscience/brain-related activity and cognition…”

“The two biggest opportunities involving AI, broadly defined to include data science and machine learning (ML), are, first, cross-fertilization in areas of data analytics/statistics and, second, cross-fertilization in areas of computational neuroscience/cognitive science.

For the first opportunity, many graduate schools still rely on a limited repertoire of statistical procedures (including null-hypothesis significance testing). There is a great opportunity to import data visualization and analytics tools from data science to advance graduate training in data analytics and, more broadly, to increase the range of tools available to psychological science for gathering and analyzing data (e.g., wearable sensors, increasingly sophisticated geo-mapping, network/graph analytics).
 
For the second opportunity, I see that AI/ML work provides tools and, perhaps, the language to begin to bridge the explanatory gap between neuroscience/brain-related activity and cognition, including attention, executive function, and psychopathology. For example, the current state of the art in AI/ML (e.g., Google’s BERT in the area of natural language processing) is somewhat ahead of current psychological models of language. Similarly, the development of comprehensive integrated models of disorders such as depression or schizophrenia could be informed by current AI work (e.g., in journals such as Computational Psychiatry).”
— Steven Hayduk


“How will we conduct research on a class of intelligent agents that do not possess individual rights?”

“I believe the biggest ethical challenges involving AI in psychological science will occur after we achieve general artificial intelligence. How will we be able to conduct research on a class of intelligent agents that do not possess individual rights? Should IRBs classify these subjects as protected, similar to children or prisoners? Who would arbitrate whether or not researchers could manufacture an intelligent being for the purposes of research? In the case of a PI taking a sudden leave, who would assume ‘ownership’ or ‘responsibility’ over the agent? Although we can look to our animal research collaborators for advice, should we really treat intelligent beings that possess more knowledge than any human being who has ever lived in a similar way to fruit flies and rhesus monkeys?”
— Nicholas Surdel


“…grossly overstated relative to contributions…”

“Playing a supporting role, AI has enormous potential to improve human decision making in a huge range of content domains. AI robotics can spare humans the dangers they face in threatening work environments. The benefits, however, have been grossly overstated relative to contributions for decades; the brittleness of these systems remains a significant limitation.”
— Jim Staszewski


“…specifying which outcome an algorithm is designed to predict.”

“All algorithms are collaborations with humans. People specify the data algorithms use and the predictions they make. The area in which psychology can contribute most to their design is in specifying which outcome an algorithm is designed to predict: whether algorithms are designed to be imitative (i.e., to do as people do) or to act in accordance with a person’s or society’s ideals (i.e., to do as people should).”
— Carey Morewedge


“…synthesizing such data to arrive at specific conclusions.”

“The quantitative approach adopted by behavioral scientists has moved from inferential statistics to network analysis. The advent of new tools and techniques in behavioral research results in big data, and deducing meaningful outcomes from such data remains a challenge. AI has come as a great help. It has opened up a window for synthesizing such data to arrive at specific conclusions.”
— Braj Bhushan


“Scientists need to pay attention to applications.”

“AI represents an opportunity to advance the future of work, reducing dangerous work and drudgery and enabling support and coaching to enhance human well-being and effectiveness. Of course, the technology is neutral, so whether it is applied to improve well-being or used in nefarious ways to subvert human agency is all in the implementation. Scientists need to pay attention to applications.”
— Steve Kozlowski

Check out Steve Kozlowski’s Back Page column: “Dumb (but Useful) AI, Smart Teams, and the Promise of Predictive Analytics.”


“A brain is not a computer.”

“AI helps us become aware of what is necessary to act adaptively within an environment, and what is necessary to solve problems intelligently. However, that does not mean that human and animal brains do it in the same way. A brain is not a computer.”
— Peter Prudon


“…much potential for user harm.”

“Intelligence that can answer anything, anywhere, any time, but also trained on artificial data, with much potential for user harm.”
— Name omitted by request


“AI-based assessments and hiring procedures should be subjected to the same level of scrutiny…”

“Using AI could help to increase the speed and efficiency of the hiring process and facilitate new ways of assessing job applicants. However, these advantages may come at a cost. Many organizations are applying AI without fully understanding its legal and practical implications for hiring decisions. AI-based assessments and hiring procedures should be subjected to the same level of scrutiny that traditional approaches have been subjected to for several decades. This includes evaluating the validity of AI-based scores and ensuring that scores are not biased against subgroups of individuals, among other things. More research is needed on these topics to understand how AI-based assessments and hiring procedures can be evaluated effectively given the unique challenges and opportunities that they present.”
— Christopher Nye


“…made daily tasks more convenient.”

“The biggest opportunity is that AI and technological advancement have made daily tasks more convenient than they were in the past. One of the biggest ethical challenges with AI in the field is that it has the potential to replace human jobs.”
— Gregory Hollenbeck


“But the human experience has the added dimension of sensing and feeling.”

“The sense of presence. AI can do a really great job of synthesizing the data out there. But the human experience has the added dimension of sensing and feeling. AI cannot quite capture it and creates a situation where the unreal can emerge as fact.”
— Jerri Lynn Hogg


“…self-learning but not necessarily intelligent.”

“We contend that the fundamental objectives of AI and the neurosciences are entirely akin but the processes fundamentally different. AI is not the same as human intelligence: the latter is not simply the rate of learning, the number of trials to skill acquisition, or even the ability to perform on intelligence tests, but rather a general ability for reasoning, problem solving, and learning. Intelligence integrates cognitive functions such as perception, attention, memory, language, and planning.… AI, as a consequence, is precisely that: artificial and not representative of biological processes. It is self-learning but not necessarily ‘intelligent.’”
— Gerry Leisman


“…algorithmic pushing into extremes…”

“Opportunities: code writing, text analysis, paper-writing support, chatbots for invention, finding what the human brain cannot hypothesize. Challenges: simplification, algorithmic pushing into extremes, bubble creation, algorithmic anti-minority bias.”
— Name omitted by request


“…an uncanny-valley effect, negatively affecting students’ learning experiences and outcomes.”

“Since current AI can synthesize human voices, provide adaptive feedback based on students’ various characteristics, and even generate 2D or 3D models of virtual agents quickly and increasingly cheaply, it seems an economical and convenient way to generate online lessons and might provide more suitable help for students with various needs. However, insufficiently advanced AI technology, such as a robotic-sounding synthesized voice or a stiff agent model, might produce an uncanny-valley effect, negatively affecting students’ learning experiences and outcomes. Therefore, I believe the use of AI in education has a bright future, but we still need to figure out how students can benefit from AI-generated voices and virtual agents, and which characteristics of the feedback we can adapt to better support students’ learning.”
— Fangzheng Zhao


“We now diagnose a disease and explain why AI decided on this diagnostic result.”

“Specifically, in medical studies, I believe there are vast opportunities. From previous studies, we see that AI helps in the diagnosis and prognosis of diseases for early-prediction purposes. This shows the difference between the statistical approach and more complex ML architectures. Also, given common challenges such as data sparsity, class imbalance, and high dimensionality, AI helps to address these problems and achieves strong model performance despite them. In recent years, explainable AI has also helped us understand the reasons behind predicted results. We now diagnose a disease and explain why AI decided on this diagnostic result. This will bring us a lot of opportunities.”
— Gozde Demirci

Related content: Learn more about both Fangzheng Zhao’s and Gozde Demirci’s research in this issue’s installment of Up-and-Coming Voices.


“The deeper influence has yet to be determined for better and for worse.”

“AI has the ability to enhance the world we live in, in ways that we do not yet fully understand. Likewise, it holds dangers that we do not fully understand. Some of these possibilities are predictable, such as AI making tasks that are challenging for humans easier to accomplish, or AI influencing our decision making and introducing accidental biases. But that’s the low-hanging fruit. The deeper influence has yet to be determined for better and for worse.”
— Brian Nolan
