The Cooperative Revolution Is Making Psychological Science Better

Psychology is currently experiencing something of a revolution, and it is a cooperative one. It’s a response to psychology’s so-called “replication crisis,” a period of self-reflection that revealed problematic flexibility in data analysis, publication bias (null findings are less likely to be submitted and/or published than positive findings), and disappointingly low replicability rates — challenges that are also playing out across disciplines ranging from cancer biology to neuroscience.

But change is coming. In the past decade, the openness and frequency of communication among psychological scientists have increased sharply, allowing researchers to launch worldwide projects involving dozens of collaborators and pushing academic journals and professional organizations to rapidly adopt practices that increase the rigor and transparency of published work. Thanks to the Internet, graduate students and researchers from less privileged institutions share the same platforms as their more privileged peers; information about new research, methods, and statistics is routinely discussed by scholars from all over the globe; and geographical distance is less of a barrier to collaboration than at any point in history.

In short, this is an exciting time to be in the field of psychology. We are encouraged by the hope and passion for our science that we see in our colleagues, and we believe this cooperative revolution will push our science to become more collegial, rigorous, progressive, and inclusive.

Criticism, Cooperation, and Collaboration

Our optimism should not be confused with naivety. We acknowledge that the debates surrounding methodological reforms have been accompanied by clear cases of incivility, both in public (e.g., calling other researchers names on social media) and in private (e.g., threatening a colleague’s career over email). We believe it’s important to foster a scientific discourse that can be both strongly critical and also civil. Incivility is by no means necessary to get a critical point across, and if anything only distracts from the message. Importantly, psychological scientists are now having conversations about how to establish new norms for critical debates that occur in increasingly public spaces such as blog posts and social media.

Science benefits when critique is not made personal and not taken personally — but scientists are closely associated with their findings, and having your work criticized is undeniably hard. We all experience this when reading critical reviews of a manuscript or grant proposal, for example. But being wrong is a basic part of being a scientist. And for science to be self-correcting, we need to embrace critique. If someone points out errors in our work, we need to take a deep breath and objectively evaluate whether we erred. Paraphrasing James Heathers: Science is not about producing research, but about producing knowledge, and for that we need to be deeply critical of our own work and that of others.

But embracing a spirit of strong criticism is not sufficient for good science. We need an environment of cooperation and collaboration. And, in our opinion, the past decade has been largely cooperative and positive, with the vast majority of reform-minded researchers operating in a tactful and professional manner. They are providing a valuable service to both producers and consumers of science and are driven by a heartfelt desire to improve their field and its public reputation. Here, we highlight a few examples (many of which we are involved in) of supportive communities, civil conversations, statistical innovations, collaborative teaching materials, diverse and inclusive lab cultures, large-scale research collaborations, and technological infrastructure to support it all.

Supportive Communities and Constructive Conversations

Social media and other online platforms have provided researchers with new ways to connect with other scholars, share and debate ideas, and learn from one another. Conversations occur in real time instead of unfolding in slow motion in the pages of printed journals. They can also be structured in ways that encourage people from all career stages to participate rather than ways that engage only a limited number of eminent insiders.

A good example of constructive online conversation is the Facebook discussion group PsychMAP, a space for researchers to engage in constructive, open-minded, and nuanced conversations about psychological methods and practices. The group has thousands of members from around the world, from undergraduate students to senior professors. Discussions range from critical-yet-civil debates about scientific priorities or the strength of a given literature to specific questions about the best way to analyze a particular kind of data. The group is lightly moderated by four scholars who on rare occasions step in to steer conversations away from focusing on specific individuals and toward asking broader questions about methods and practices. A Community Board of scholars from a diversity of research areas, career stages, and institution types has helped the group develop the breadth and inclusivity of both the topics discussed and the range of people who contribute to discussions. Spaces such as PsychMAP have created opportunities for faster, more broadly accessible, and more inclusive scientific discourse.

We also see the spirit of cooperation, constructive discussion, and concrete progress in the Society for the Improvement of Psychological Science (SIPS), a recently created academic organization aimed at bringing scholars together to improve methods and practices in psychological science. SIPS has held three meetings, which feature no prescheduled lectures but rather a mixture of hands-on workshops and “hackathons” geared toward learning a new skill or solving a concrete problem. The conferences have also included lightning talks and creatively structured sessions where attendees got to pitch and discuss new ideas. Several of the projects we mention in this letter arose from brainstorming at SIPS meetings. Notable features of these meetings include the number of pre-tenure academics and graduate students in attendance and the significant representation from teaching-focused colleges and other schools outside the well-funded and high-profile R1 universities.

A quick way to hear what participation in (and reflection on) these open science movements can actually sound like is to listen to the excellent conversations on podcasts such as the “Black Goat” (hosted by APS Board Member Simine Vazire, Alexa Tullett, and Sanjay Srivastava), “Everything Hertz” (James Heathers and Daniel Quintana), and “Circle of Willis” (APS Fellow Jim Coan and weekly guests). These podcasts engage with subjects such as improving research practices, scientific communication, interprofessional dialogue, and personal growth. Notably, these podcasts are often informed by audience questions. “Black Goat” discussions, for instance, frequently incorporate listener feedback and questions gathered from Facebook, Twitter, and emails.

New Statistical Tools and Technological Infrastructure

The cooperative revolution has also benefited from exciting statistical innovations. Scientists are designing new tools to provide straightforward solutions to some of the problems related to the replication crisis. For instance, analyses show that roughly half of published psychology articles contain inconsistencies in reported means or statistical test results. Although these errors can easily arise through honest mistakes (e.g., copy/paste errors), they can distort substantive conclusions. To facilitate the detection of such errors, free programs and apps such as GRIM, statcheck, and p-checker have been developed. Researchers can use these tools to quickly screen their manuscripts for errors before submitting them to a journal.
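To make the arithmetic behind such checks concrete, the sketch below is a purely illustrative Python example, not the actual code behind GRIM or statcheck. It shows two simplified consistency tests: a GRIM-style check of whether a reported mean of integer-valued data is mathematically possible given the sample size, and a statcheck-style recomputation of a two-tailed p-value from a reported t statistic and degrees of freedom. The function names and example values are assumptions made for illustration, and the p-value recomputation assumes the SciPy library is available.

```python
# Illustrative sketch only: simplified versions of the consistency checks that
# tools such as GRIM and statcheck automate. This is NOT the code of either
# tool; function names and example values are hypothetical.

from scipy import stats  # used to recompute p-values from t statistics


def grim_consistent(reported_mean, n, decimals=2):
    """GRIM-style check: could a mean of integer-valued data from n
    participants round to reported_mean at the given number of decimal
    places? (Most diagnostic when n < 10**decimals.)"""
    target = round(reported_mean, decimals)
    candidate_sum = round(reported_mean * n)  # nearest achievable integer sum
    # Check neighboring integer sums to allow for rounding at the boundary.
    return any(
        abs(round(s / n, decimals) - target) < 1e-9
        for s in (candidate_sum - 1, candidate_sum, candidate_sum + 1)
    )


def t_p_consistent(t_value, df, reported_p, decimals=2):
    """statcheck-style check: does the reported two-tailed p-value match the
    p-value recomputed from the t statistic and degrees of freedom?"""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    return abs(round(recomputed_p, decimals) - round(reported_p, decimals)) < 1e-9


if __name__ == "__main__":
    # A reported "M = 5.19 (N = 28)" cannot arise from integer responses.
    print(grim_consistent(5.19, 28))       # -> False (inconsistent)
    # A reported "t(28) = 2.20, p = .04" agrees with the recomputed p-value.
    print(t_p_consistent(2.20, 28, 0.04))  # -> True (consistent)
```

The real tools handle many more test types, rounding conventions, and reporting formats; the point of the sketch is only that these checks reduce to simple, automatable arithmetic that any researcher can run before submission.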

Researchers and funders have also invested heavily in the infrastructure supporting our increasingly collaborative science, developing online platforms to archive and share our data and research materials (e.g., Databrary, the Open Science Framework) and platforms devoted to archiving the cumulative evidence for the empirical effects underlying our theories (e.g., Curate Science). These resources create a flexible ecosystem of tools that researchers can use to speed up their current workflows and more easily adopt new practices. Many of the projects we discuss have both used and contributed to this research infrastructure to implement practices such as preregistration, to meet specific research needs (e.g., PsyArXiv, OSF for Meetings), and to ease the burden of coordinating large research teams. The value of tools like these—both during these projects and after the research is complete—cannot be overstated.

Collaborations

Perhaps the most visible hallmark of the cooperative revolution has been the dramatic increase in large-scale collaborations across many areas of psychological science (e.g., ManyLabs, ManyBabies, the Psychological Science Accelerator, Registered Replication Reports, and StudySwap). Traditionally, most basic research in psychology has been conducted by individual university labs, which limits the scope of the projects a researcher can take on, especially in areas that involve studying specific hard-to-recruit participant groups (young children, bilinguals, people with specific clinical diagnoses, etc.) rather than convenience samples (e.g., college students participating in studies for course credit). Large-scale collaborations create new opportunities for conducting highly powered studies and testing moderators even in resource-intensive topic areas.

For example, by pooling the effort of more than 50 labs, the ManyBabies project has begun to build large datasets that both validate important findings in developmental psychology and provide a context in which to debate how we can best collect and analyze these data. By the middle of 2018, ManyBabies had conducted the largest lab-based study of infant cognition that we are aware of to date: a study on infants’ attention to child-directed speech (“baby talk”) with more than 2,700 young participants.

And although psychology has often been criticized for focusing too heavily on the United States and other western countries, the recently launched Psychological Science Accelerator is a globally distributed network of psychological science laboratories (currently 210), representing 45 countries on all six populated continents, that will collect data for democratically selected studies. The network’s mission is to “accelerate the accumulation of reliable and generalizable evidence in psychological science, reducing the distance between truth about human behavior and mental processes and our current understanding.” The Accelerator is committed to five core principles that reflect the ethos of the cooperative revolution quite well: (1) diversity and inclusion with respect to researchers and participants; (2) decentralized authority; (3) transparency, by requiring and supporting practices such as preregistration, open data, analytic code, and materials; (4) rigor, both in the standards for approving individual studies and the process of managing the unique challenges of multisite collaborations; and (5) openness to criticism, by inviting and carefully considering all critical feedback.

Open and Inclusive Teaching and Training

The spirit of cooperative material sharing has extended to how we teach the next generation of scientists. Psychologists have worked together to generate excellent reading lists on methods reforms and reproducibility, which they then share freely with others to use in both graduate and undergraduate courses. Some have even taken the extremely cooperative step of posting complete syllabi and course materials to be amended and adopted by any instructor who finds value in bringing those materials into their classroom.

Platforms such as Twitter make space for new kinds of discussions, which — because they are so public — can have strikingly fast impacts on the culture of psychological research. One example is the question of how undergraduate students become involved in research labs, a critical type of experience for applying to graduate school in the sciences. A relatively large and diverse group of scientists has been discussing the importance of offering paid research opportunities instead of purely voluntary positions. Pro bono research work is simply not realistic for students who need summer income to pay their way through school and systematically excludes some students from getting research experience at a critical point in their careers. Anecdotally, we have noticed a promising uptick in advertisements for paid summer internships in some excellent labs in psychological science.

In a similar way, Twitter and Facebook have provided forums for discussion about codes of conduct (see the code of conduct for SIPS attendees here), which have been common in software development and related fields for several years. As part of the much longer conversation about sexual harassment and assault in academia, professional conferences in psychology have begun to adopt these codes of conduct as a way to clarify the norms of behavior we expect from our colleagues and to respond to people who violate these expectations.

Moving Forward: A Call to Action

Psychology’s revolution is well underway and is gaining momentum. Though considerable progress has already been made, there is still much work to be done, and we cannot do it alone. Whether you are a student just getting your hands dirty in lab work, an early-career researcher carving out your scholarly niche, or a well-seasoned professor with decades of experience, we hope that you will join us in our pursuit of research integrity, transparency, and rigor.

Most researchers chose the profession for the same basic reasons: to gain knowledge about the world; to make a difference; to advance and improve society. These goals remain a shared feature of our work, regardless of the area of inquiry.

As we move forward, we are calling on all researchers (ourselves included) to continuously improve the rigor of their work. As researchers, we owe it to the scientific community and to society to produce the best research possible. This includes making our data available, thoughtfully engaging with criticism when it arises, and admitting when mistakes have been made — behaviors that reflect the scientific ideals of verifiability, organized skepticism, and fallibilism. We all make mistakes. The revolution is our attempt to identify these mistakes more quickly, to advance a cumulative science through cooperation and collaboration, and to grow as researchers and as a field.

