Registered Replication Reports

Reproducibility is central to science, but direct replication studies rarely appear in psychology journals because publishing incentives tend to favor novelty over reliability. That is changing.

The Registered Replication Report (RRR) is a new type of article introduced last year by Perspectives on Psychological Science. Like several other new APS initiatives, RRRs are designed to help stabilize the foundations of our science by providing more definitive estimates of the reliability of important findings in the psychology literature.

Conventional replication attempts rarely eliminate all uncertainty about an original finding because myriad factors could explain discrepancies between the original result and the replication result. A replication might yield a smaller effect than the original if the earlier finding overestimated the true effect size or if the replication underestimated it. Discrepancies could also result from differences in procedure, flaws in the replication or in the original design, noise in the measurement itself, or an underlying effect that varies widely across populations. Like the original study, a direct replication provides only one estimate of the size of the effect of interest, and that estimate can be noisy.
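
To see concretely why any single study’s estimate is noisy, consider a minimal simulation sketch (illustrative, not from any published RRR): it assumes a two-group design with a true standardized effect of d = 0.5 and 30 participants per group, values chosen purely for illustration.

```python
# Minimal illustration (hypothetical numbers): how much a single study's
# estimate of Cohen's d can wander around a true effect of d = 0.5.
import numpy as np

rng = np.random.default_rng(0)

def simulate_cohens_d(n_per_group, true_d):
    """Simulate one two-group study and return its estimated Cohen's d."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    return (treatment.mean() - control.mean()) / pooled_sd

estimates = np.array([simulate_cohens_d(30, 0.5) for _ in range(10_000)])
print(f"true d = 0.50; mean estimate = {estimates.mean():.2f}")
print(f"middle 95% of single-study estimates: "
      f"[{np.percentile(estimates, 2.5):.2f}, {np.percentile(estimates, 97.5):.2f}]")
```

With only 30 participants per group, single-study estimates of a true d = 0.5 routinely range from near zero to about 1.0. That sampling spread, before any procedural differences even enter the picture, is the uncertainty an RRR is designed to shrink.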

An RRR provides a more definitive assessment of the size of an effect. The final Report comprises multiple direct replications of a single finding, all using the same vetted protocol and materials. The design is preregistered, and all results are published regardless of the outcome. In effect, the RRR is a planned meta-analysis, but one that is free from several problems that plague conventional meta-analyses, such as publication bias, variation in measures, and differences in procedure. RRRs are designed and carried out by multiple researchers with different vested interests, reducing the influence of experimenter bias.
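
To make the “planned meta-analysis” idea concrete, here is a minimal sketch of standard inverse-variance (fixed-effect) pooling, one common way such results can be combined; the per-lab effect sizes and standard errors below are invented, and nothing in the RRR policy mandates this particular estimator.

```python
# Hedged sketch of inverse-variance pooling across labs (all values invented).
# Each lab i contributes an effect estimate d[i] with standard error se[i];
# the pooled estimate weights each lab by 1 / se[i]**2.
import numpy as np

d = np.array([0.42, 0.18, 0.35, 0.27, 0.51])   # hypothetical per-lab effect sizes
se = np.array([0.15, 0.12, 0.20, 0.10, 0.18])  # hypothetical standard errors

w = 1.0 / se**2
d_pooled = (w * d).sum() / w.sum()
se_pooled = np.sqrt(1.0 / w.sum())

print(f"pooled d = {d_pooled:.2f} "
      f"(95% CI {d_pooled - 1.96 * se_pooled:.2f} to {d_pooled + 1.96 * se_pooled:.2f})")
```

Because the pooled standard error is one over the square root of the summed weights, it shrinks as labs are added, which is why the combined estimate is far more precise than any single lab’s.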

The process of proposing, conducting, and writing an RRR differs in critical ways from that of any other article type. Several stages are involved, all designed to produce a definitive answer about the size and reliability of an important effect.

Because RRRs, as large-scale replications, require a substantial investment of resources, only studies with high replication value are considered. They must be important to theory or to real-world psychological practice. There must be uncertainty regarding the size and reliability of the effect. Such uncertainty is common for findings that have not previously been the focus of many published replications.

Before an RRR can proceed, the editors must assess the replication value of the original study and, if the replication value is high, subsequently help the proposers develop a detailed, accurate methodology. The process typically begins when a researcher submits a brief (one-page) proposal highlighting why a finding has high replication value. Researchers are welcome to email the replication associate editors (replicationreports@gmail.com) for input before submitting this initial proposal via the journal’s online submission system.

To assess replication value, the editors ask experts in the area to consider several issues. Although many studies merit direct replication, to proceed to the full proposal stage for an RRR, a study must meet most or all of the following criteria:

  1. The study has been highly influential.
  2. It is methodologically sound and the interpretation of its result is unambiguous.
  3. It has not already been directly replicated many times.
  4. It forces a reconsideration of an important theory or establishes the foundation for a theoretical position.

In addition, the relevant theoretical models or the empirical understanding of the studied phenomenon should benefit from a more precise estimate of the effect size, and multiple labs should be interested in and able to participate in the mass replication of the effect.

If the study meets these criteria, the lab that initially proposed an RRR is invited to complete an extensive proposal form designed to elicit all the methodological and procedural details necessary for an accurate direct replication of the original study. The form asks about the details of the original study’s method and how those procedures would be implemented in the proposed study. The form is deliberately granular in order to eliminate any judgment calls by the labs that eventually will follow the protocol. It also asks the proposer to highlight any methodological information that was missing from the original publication. The editors work with the proposing researchers to make sure that the form is as complete as possible.

During the next stage of review, the editors contact the authors of the original paper (where possible) to solicit their help in refining the protocol. Together, we correct mistakes, fill in missing details known to the original authors but not elaborated in the original paper, identify any necessary manipulation checks, and specify acceptable testing conditions and subject samples. In consultation with the original authors and with the benefit of hindsight, we occasionally identify small changes that improve on the original methods while still measuring the same effect. In sum, this stage of review optimizes the protocol: we account for factors thought to diminish the original effect and implement the conditions believed to increase the chances of reproducing it. This stage often involves a constructive back-and-forth, with the editors acting as neutral brokers mediating the development of an unbiased protocol.

Once we reach consensus, the next stage of the process involves distilling the extended proposal form into a concise, precise protocol description that specifies how the replication must be conducted. Once this protocol is finalized, APS posts a public call for laboratories interested in participating in the RRR. Interested labs submit a Secondary Replicator Form (SRF) describing their qualifications to conduct the study and explaining how they would meet the protocol requirements in their own setting. These SRFs are reviewed by the editors. Once a lab’s participation is approved, its researchers conduct a direct replication that follows the protocol, preregistering the details of their plan on the Open Science Framework website (see Nosek, p. 12). This preregistration ensures that participating laboratories do not change their methods in ways that could affect the outcome, and the editors review each plan for accuracy before it is registered.

The results from all the replicating laboratories are published together in a single report, regardless of their outcomes, with researchers from all the participating labs included as authors. The core of this final RRR is a figure depicting the measured effect size from each lab (with confidence intervals), along with a meta-analysis of the effect size. The graph and paper emphasize the size and reliability of the effect rather than a binary judgment of whether or not the original effect “replicates.”
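
For readers curious about the numbers behind such a figure, here is a minimal sketch of how a per-lab 95% confidence interval for a two-group Cohen’s d can be computed from the usual large-sample variance approximation; the lab names, effect sizes, and sample sizes are hypothetical, and actual RRRs may use other effect-size measures or a random-effects model, which this sketch omits.

```python
# Hypothetical per-lab results and the standard large-sample approximation
# Var(d) ~ (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)) for two-group Cohen's d.
import math

labs = [("Lab A", 0.42, 31, 30),   # (name, observed d, n group 1, n group 2)
        ("Lab B", 0.18, 55, 54),
        ("Lab C", 0.35, 40, 42)]

for name, d, n1, n2 in labs:
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    lo, hi = d - 1.96 * se, d + 1.96 * se
    print(f"{name}: d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Each printed row corresponds to one point, with its interval, in the forest-plot-style figure described above.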

If you are interested in participating in an RRR, monitor the Perspectives on Psychological Science website and the Observer. Each project is announced in those media as well as via social media such as the APS Facebook page and Twitter account. A number of proposals are working their way through the review process now. For some projects, Perspectives and APS can provide grants to support these efforts. If you would like to propose an RRR, email the replication editors at replicationreports@gmail.com.

References

Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., … van’t Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224.

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 616–631.

