The goal of psychological science is to generate reliable and generalizable knowledge about human thought and behavior. Researchers have traditionally conducted studies in independent, localized teams, an approach that often results in relatively small samples collected at a single site. While this traditional approach has been quite effective for understanding some aspects of human psychology, limited resources and limited access to participants often leave researchers akin to stargazers trying to detect distant astronomical objects with weakly powered telescopes (e.g., Simonsohn, 2015).
To address the limitations of the single-site, small-sample approach, psychologists have started pooling individual resources into large-scale, collaborative, multisite projects (e.g., Many Labs, Pipeline Project, Registered Replication Reports). A stellar example is the Registered Replication Report model supported by APS. These projects involve several researchers from around the world who independently collect data about a previously published effect and pool their results into a publication-bias-free meta-analysis. The results of these projects are collectively much more informative than any of the individual samples could be. Effectively, psychological researchers can assemble “big telescopes” by coordinating their individually modest resources to generate highly informative results.
We would like “big telescope” studies to become commonplace in psychological science. To this end, the first author (CC) recently assembled a network of psychology research labs to regularly contribute to large-scale, multisite, collaborative studies. This network features (a) a democratic selection of studies to be conducted; (b) a diversity of researchers, participants, and research questions; and (c) a strong commitment to open and transparent science. This network has been dubbed the Psychological Science Accelerator (PSA).
The Psychological Science Accelerator
The PSA is a distributed network of laboratories, numbering 207 as of January 30, representing 44 countries on all six populated continents. The network’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science, reducing the distance between truth about human behavior and mental processes and our current understanding. Inspired in part by Merton’s scientific norms of universalism, communalism, disinterestedness, and skepticism, our mission is guided by the following core principles: (1) diversity and inclusion with respect to researchers, the locations and sizes of their institutions, and participants; (2) decentralized authority, where decisions at each stage are made by as many team members as possible; (3) transparency, by requiring and supporting practices such as preregistration, open data, analytic code, and materials; (4) rigor, both in the standards for approving individual studies and the process of managing the unique challenges of multisite collaborations; and (5) openness to criticism, by inviting and carefully considering critical feedback from both inside and outside the network, and adjusting policies and procedures as needed.
We have designed the PSA to reflect our mission and core principles. Specifically, our distributed laboratory network is ongoing (as opposed to time- or task-limited), diverse (both in terms of human participants and participating researchers), and inclusive (we welcome ideas, contributions, study proposals, or other input from anyone). In addition, our projects are well-positioned to estimate effect size and heterogeneity of psychological phenomena with rigor and transparency.
Ongoing network. While the Many Labs, Open Science Collaboration, Pipeline Projects, and Registered Replication Report efforts have had substantial success recruiting large numbers of data collection labs, they experience efficiency losses by having to recruit a new network of laboratories for each study. A key benefit of the PSA is that it is a standing network of laboratories, all of which are led by PIs who are willing to collect data for large-scale, multisite collaborations for the foreseeable future. We have recruited, and will continue to recruit, labs that can be matched with projects immediately and indefinitely. This will drastically reduce the amount of time between deciding upon a promising study and collecting data, thereby accelerating the pace of evidence accumulation.
Diversity. Our standing network of laboratories is broadly distributed geographically. As such, it will provide access to participant populations that are typically hard or impossible to recruit for most psychologists. As you can see from the network map, our team is global; all six populated continents are represented, and we have a moderate (and constantly growing) number of participating labs outside of North America and Western Europe, the most common sources of psychology research. We hope that this global diversity will allow us to begin to address psychology’s longstanding “WEIRD” problem (Henrich, Heine, & Norenzayan, 2010) of relying heavily on undergraduate participants from Western, educated, industrialized, rich, and democratic societies.
Inclusion. We have designed the network to be maximally inclusive of global expertise by establishing an organizational structure that reflects a broad but cohesive set of committees charged with carrying out the network’s mission and day-to-day activities. This structure reflects our interest in making decisions via decentralized authority. Committee members are nominated by network members and voted upon by the leadership team and the chairs of each committee. Mandates for committees that reflect a mix of subfield (heavily social and cognitive at first) and geographical (heavily North American and European at first) areas ensure broad representation along these dimensions. Staggered term limits ensure rotation in opportunities to contribute and representation of varying levels of expertise while still maintaining continuity over time.
Estimating Effect Size and Heterogeneity. One promising feature of our global network lies in its ability to aggregate relatively small investments by individual labs into massive data contributions to psychological science. For example, 50 labs (a very conservative figure given our recruitment progress to date), each contributing 50 participants (again, a relatively conservative count for most experimental labs), would yield a total N of 2,500 participants for a single study. Further, this hypothetical sample would be more geographically diverse, and is likely to be more demographically diverse, than any individual sample. Large datasets such as these are necessary complements to the relatively small samples routinely collected by individual labs. They will allow us to precisely estimate the size and direction of effects and to model variation in effects due to four classes of moderating factors, namely "(a) the strength of the intervention, (b) the choice of outcome, (c) characteristics of the participants, and (d) the setting and context of the study" (Shrout & Rodgers, 2018, p. 498). As these authors attest, "If effect heterogeneity is considered likely, then many smaller studies done at different times and in collaboration with other labs will be more informative about the heterogeneity than a single large study, although the smaller studies will individually be less precise" (Shrout & Rodgers, 2018, p. 500).
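The logic above can be made concrete with a small simulation. The sketch below (a hypothetical illustration, not a PSA analysis pipeline) simulates 50 labs of 50 participants each, lets the true effect vary across labs, and then recovers both the pooled effect and the between-lab heterogeneity with a standard DerSimonian-Laird random-effects meta-analysis. The specific numbers (a true effect of 0.4 and between-lab SD of 0.15) are assumptions chosen for illustration.

```python
import random
import statistics

random.seed(42)

N_LABS, N_PER_LAB = 50, 50    # the conservative figures from the text
TRUE_EFFECT, TAU = 0.4, 0.15  # hypothetical mean effect and between-lab SD

# Simulate each lab's observed mean effect on a unit-variance outcome.
labs = []
for _ in range(N_LABS):
    lab_true = random.gauss(TRUE_EFFECT, TAU)  # heterogeneity across labs
    obs = statistics.mean(
        random.gauss(lab_true, 1.0) for _ in range(N_PER_LAB)
    )
    labs.append((obs, 1.0 / N_PER_LAB))        # (estimate, sampling variance)

# DerSimonian-Laird random-effects meta-analysis.
w = [1.0 / v for _, v in labs]
fixed = sum(wi * e for wi, (e, _) in zip(w, labs)) / sum(w)
Q = sum(wi * (e - fixed) ** 2 for wi, (e, _) in zip(w, labs))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (N_LABS - 1)) / c)        # between-lab variance estimate
w_re = [1.0 / (v + tau2) for _, v in labs]
pooled = sum(wi * e for wi, (e, _) in zip(w_re, labs)) / sum(w_re)

print(f"total N = {N_LABS * N_PER_LAB}")
print(f"pooled effect = {pooled:.3f}, estimated tau = {tau2 ** 0.5:.3f}")
```

No single lab in this simulation could estimate the between-lab variance at all; only the multisite design makes the heterogeneity (tau) visible, which is exactly the point Shrout and Rodgers make about many smaller collaborative studies versus one large one.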
What’s on Tap?
The first three projects that will be tackled by the Accelerator have been selected, and we are preparing for data collection. The first study will be led by Ben Jones and Lisa DeBruine of the University of Glasgow and Jessica Flake of York University. This study will test the generalizability of the valence-dominance model of face perception (e.g., Oosterhof & Todorov, 2008). The second study will be led by Curtis Phills of the University of North Florida. This study will examine whether men and women are equally represented in cognitive representations of minority social categories (e.g., when thinking of a “Black person,” people are more likely to think of a Black man than of a Black woman). Finally, our third study will be led by Sau-Chin Chen of Tzu Chi University. It will examine the extent to which the object-orientation effect, in which language comprehension can guide later perception, extends across numerous world languages. For example, the picture of a flying eagle is identified faster after reading “He saw the eagle in the sky” than “He saw the eagle in the nest.”
How to Get Involved
In sum, the PSA decouples theoretical contributions (solid theorizing, hypothesis generation, study proposals) from the means of data collection. The most promising ideas for the PSA can come from researchers with modest data-collection resources. This makes our work more inclusive of researchers from a broad range of institutions, diversifies and strengthens the pool of participants, and enables psychologists to address important empirical questions on a large scale.
If you are interested in learning more about the PSA, you can visit our website or contact the authors. We are always looking to welcome more researchers into this community. To join us, or to start receiving regular updates about our work (we warmly welcome “lurkers”), please fill out the brief form on the “Get Involved” page of our website. You can expect an email from us within 72 hours of signing up. Our initial email will outline some of the ways you can get involved without committing yourself to any specific contributions. Some example contributions, should you choose to get involved, include: collecting data, reviewing study submissions, serving on one of our operational or advisory committees, and providing feedback on the procedures, policies, and governance of the Accelerator.
References and Further Reading
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–83.
Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. Proceedings of the National Academy of Sciences, 105, 11087–11092. doi:10.1073/pnas.0805664105
Shrout, P. E., & Rodgers, J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487–510.
Simonsohn, U. (2015). Small telescopes: Detectability and the evaluation of replication results. Psychological Science, 26, 559–569. doi:10.1177/0956797614567341