Interplay Between Humans and Algorithms the Focus of Journal Special Collection

In 2022, 30% of U.S. adults used an online dating site, and 10% of them found their match. 

Sixteen percent of U.S. adults in a recent survey said they used the generative artificial intelligence (AI) platform ChatGPT at work, and 19% said that such AI technologies would significantly affect their jobs.

In a global survey, 28% of respondents said they use social media as their primary source of news, allowing algorithms to shape what they see and believe about the world.   

These findings reflect our deepening entanglement with computer algorithms. A special collection of articles in Perspectives on Psychological Science provides insights from leading researchers on the interplay between humans and algorithms.

“In recent years, we have seen many social and political problems that stem primarily from the misalignment of algorithmic and human objectives,” wrote the guest editors, APS Fellow Sudeep Bhatia (University of Pennsylvania) and Santa Fe Institute scientists Mirta Galesic and Melanie Mitchell, in an introductory editorial. They noted that algorithms 

  • maximize click rates rather than quality content, 
  • reinforce human biases when used in criminal justice and policing, and 
  • often interact with, and are trained on data generated by, other algorithms, increasing the potential for unexpected interactions with human users. 

Psychologists have an important opportunity to shape research on the human component of algorithms, the editors said. They noted that the National Science Foundation in 2023 provided $140 million in funding for seven new AI institutes, many of which will conduct research on the intersection of AI and the social, behavioral, and cognitive sciences.  

The special issue includes perspectives on how algorithms influence our lives and how research findings can inform the design of better algorithms. The 15 articles cover such topics as how social media algorithms shape offline civic participation, why algorithms should infer mental states and not just predict behavior, and what children can do that large language models cannot.  

Read the collection, “Algorithms in Our Lives,” here.

Feedback on this article? Email [email protected] or log in to comment. 

