AMPPS Makes Its Entrance

The first issue of APS’s newest journal, Advances in Methods and Practices in Psychological Science (AMPPS), debuts this month. This one-of-a-kind journal publishes new types of empirical work, along with articles and tutorials that reflect the various approaches to research across the field. The journal’s editorial scope encompasses the breadth of psychological science, with editors, reviewers, and articles representing a balance among diverse disciplinary perspectives and methodological approaches. Many of the articles are already online.

In his editorial for the opening issue, AMPPS Editor Daniel J. Simons, University of Illinois, discusses the journal’s mission, its structure, and its role in advancing APS’s broader leadership in fostering scientific transparency, openness, and reproducibility. Below is a reprint of that editorial, which also appears online.

For decades, experts like Cohen, Meehl, de Groot, Cronbach, Loevinger, and many others repeatedly raised concerns about small-sample studies, questionable research practices, poor design, noisy measures, violated statistical assumptions, flawed inferences, a lack of direct replication, and publication bias (Cohen, 1962; Cronbach & Meehl, 1955; de Groot, 1956/2014; Loevinger, 1957; Meehl, 1967). Although these problems linger, I am more optimistic about the state of our field now than at any earlier point in my career. These are exciting times for psychological science. The past 7 years have seen a dramatic and field-wide transformation, with more and more people interested in evaluating and improving their own research practices and those of the field as a whole. Discussions of research practices have gone mainstream, and changes to research and publishing practices are happening faster now than at any point in our field’s recent history. The primary mission of Advances in Methods and Practices in Psychological Science (AMPPS) is to foster such discussions of and advances in research practices, research design, and statistical methods.

Less than 10 years ago, nobody had heard the terms “p-hacking” or “researcher degrees of freedom” (Simmons, Nelson, & Simonsohn, 2011), and few knew about the problems with “HARKing” (Kerr, 1998).1 Preregistration was rare outside of clinical trials; stand-alone direct replications were barely publishable; and multilab collaborations were uncommon. Badges and incentives for open practices were nonexistent. Facebook groups were not actively discussing research methods and practices. The Transparency and Openness Promotion (TOP) guidelines for publishing, spearheaded by the Center for Open Science and now adopted by more than 5,000 journals and organizations (including APS), had not yet been conceived. Few journals, funders, or societies had established guidelines for data sharing. Novel article formats such as Registered Reports — in which reviewers evaluate a study’s rigor and design before data collection (Chambers, 2013) — were not yet among our publishing options.

In many ways, APS has been a leader in supporting improved research and reporting practices. With Bobbie Spellman as Editor, Perspectives on Psychological Science published a series of groundbreaking articles on research practices, and as Associate Editor, Alison Ledgerwood organized several special sections on research methods and metascience. Perspectives also launched Registered Replication Reports as a new way to evaluate the strength of evidence for important effects (Simons, Holcombe, & Spellman, 2014; AMPPS will be their new home). At Psychological Science, Eric Eich implemented changes to reporting practices to allow more comprehensive method and results sections and more transparent and complete reporting, and he incentivized transparency by awarding badges for open data, open materials, and preregistration. His successor, Steve Lindsay, has continued that tradition by adding consulting statisticians to the journal editing team, asking authors to make their data and materials accessible to the editors and reviewers, and requesting that authors report on their use (or nonuse) of open science practices. Steve Lindsay also adopted a variant of the Pottery Barn rule (Srivastava, 2012) by creating an article format for replications of studies published in Psychological Science (Lindsay, 2017). As editor of Clinical Psychological Science, Scott Lilienfeld also adopted badges and reporting standards that incentivize best practices.

The APS Observer magazine publishes a yearly methods issue along with articles and tutorials on a wide range of methodological and statistical topics (e.g., Bayesian analysis, sample-size planning, the “new statistics,” R programming, and preregistration). And the annual APS convention includes a methodology track featuring presentations about research practices and practical, hands-on workshops intended to help psychological scientists improve their research. Those sessions have consistently drawn large crowds, especially among early-career researchers.

In launching AMPPS, APS hopes to reach a broad audience, consolidating in a single outlet a range of novel approaches to experimentation (e.g., the Registered Replication Reports), papers on metascience and best practices, and tutorials on research methods and practices. Like all APS journals, AMPPS emphasizes both innovation and accessible communication, with a mandate to help researchers from across psychological science to improve the quality of their research and the rigor of our discipline.

The Audience for AMPPS

Improved research practices require clear channels of communication between statisticians/methodologists and psychological researchers (Sharpe, 2013). Reaching the broad audience of researchers who want to improve their methods and skills is core to the mission of AMPPS.

Although AMPPS has “methods” in its title, it is not a traditional methods/statistics journal. Several excellent methods and statistics journals in psychology regularly publish state-of-the-art developments, but most target a readership of expert methodologists and statisticians; they speak to methodologists interested in research rather than to researchers interested in methods. In recent years, some have pushed for improved accessibility in order to reach a broader audience (Harlow, 2017). AMPPS makes broad access core to its mission. The primary audience for AMPPS is the broad spectrum of psychological scientists who are interested in learning more about methods and practices but who do not regularly read methods journals. Unlike other methods-focused journals, AMPPS will not publish articles written exclusively for methods experts. Articles in AMPPS will convey important advances but will be written for research producers and consumers; it is a place to communicate innovative methods and to discuss practices in a way that is broadly understandable.

To ensure accessibility of the prose, the main text of all papers should be written in plain English, with all terms defined and explained. The prose should draw in researchers, helping them to understand core issues of relevance to them. AMPPS balances this need for accessibility with the importance of precision by encouraging the use of “in-detail” boxes where authors can convey the more technical content and equations necessary for a full understanding. These boxes are ideal for content that is not strictly necessary to understand the conceptual point of an article but that adds to a deeper understanding (e.g., glossaries of technical terms, worked case examples, derivations, proofs). Readers who choose to skip the in-detail boxes should be able to understand the main ideas in any article in AMPPS. The main text of the article should be a gateway to greater understanding — get a broad audience hooked and encourage them to learn more.

Types of Articles

The submission guidelines for AMPPS include details about the types of articles and their required formatting. As of its launch, AMPPS accepts three main article types: general articles on research practices, empirical articles featuring innovative research methods and practices, and tutorials describing the “how to’s” of a research method or practice. It will also feature special collections of invited articles, on occasion, to discuss and debate issues of broad interest in the field. For example, the first issue includes a collection of papers on making data as available as possible, focusing especially on cases in which making data publicly available is challenging for practical or ethical reasons. The second issue will contain a forum with practical and philosophical guidance on how to provide evidence against the presence of a meaningful effect.

General articles in AMPPS can address a wide variety of topics, including research practices, metascience, simulation studies, reinterpretation of earlier findings using new analytical approaches, evaluations and comparisons of different practices, critiques, debates, and so on. All should consider the practical importance of the issues for the practices of researchers across psychology. General articles may also include structured debates, collections of articles on a theme, methodological commentaries, or other more interactive content intended to convey different perspectives on a problem.

Empirical articles in AMPPS differ in scope and structure from those appearing in Psychological Science and Clinical Psychological Science. AMPPS will not publish single-lab empirical papers that have a natural home at other APS journals (except, perhaps, in cases where the focus is entirely on a methodological issue). Empirical articles appropriate for AMPPS should adopt novel approaches to research, often involving large-scale, multilab collaborations: consortium studies, adversarial collaborations, Many Labs projects, Registered Replication Reports, and so on.

Empirical research published in AMPPS typically will have been preregistered. Note that preregistration does not preclude a complete and careful evaluation of the data and evidence; exploration is the engine of discovery and the source of new hypotheses even if it does not support confirmatory hypothesis tests (see Lindsay et al., 2016). Except in rare cases, authors of empirical articles should make all materials, code, and deidentified data as publicly available as possible. Some of these multilab empirical projects will be Registered Reports, undergoing review of the introduction, methods, and analysis plan prior to data collection, with provisional acceptance in advance of knowing the outcome.

Tutorials are the most practical of the articles appearing in AMPPS. Some may provide an introductory overview of an important concept, and others will introduce new tools and techniques. They provide concrete guidance to researchers, allowing them to acquire new skills and better use existing ones. Like the other articles in AMPPS, tutorials need not focus exclusively on statistics and methods; they can also discuss broader issues like lab management practices and other practical matters that affect the field. Tutorials on practical techniques should be written with an eye toward adoption in research methods and statistics courses, and they should indicate any prerequisite skills or knowledge necessary to make use of them. They must cover topics that would be useful in many areas of psychology and not only to specialists within a subfield.

Standards for the Peer Review Process

The review process at AMPPS is modeled after the process used at Psychological Science. Each article is initially reviewed by the editor in chief and one or more associate editors to evaluate whether it is a fit for AMPPS, based on its adherence to four core principles:

Accessibility: Articles should be accessible to and understandable by nonexperts. Authors should aim to make their articles understandable to a first-year graduate student in psychology who has taken one or two introductory statistics courses.

Relevance: Articles should convey why the contents are important to the field as a whole and not just to a small subset of the field. A core goal of AMPPS is to bridge subfields of psychology by communicating useful approaches developed in one area to the field as a whole. The ideal article will address both principles and practices using concrete examples that will be interesting to psychologists in any subfield.

Rigor: Articles in AMPPS should adhere to and document their use of best practices in research methodology, statistics, and reporting.

Transparency: Articles should adhere to principles of open science and transparency, both illustrating best practices and informing about them.

Articles that clear this editorial review stage will be sent for external review, and those that do not will be declined (i.e., “desk rejected”). In some cases, when the editors feel that a submitted manuscript could be revised to meet these core principles (e.g., if it could be rewritten to be more accessible to our audience), they may encourage a revision prior to external review. Once a paper proceeds to external review, the process is similar to that of other journals.

Although AMPPS does not have strict page limits for articles, the submission guidelines give guidance on the lengths for each article type, and authors should contact the editor prior to submitting a manuscript that exceeds those guidelines. Authors should keep introductory material focused on the specific issue addressed in the article, homing in on the key point quickly and concisely. For example, unless a paper is about the reproducibility crisis or is a historical review of closely related issues, it should not cover the “crisis” as background or motivation.

Concluding Thoughts

Twenty-five years ago, in an introductory graduate statistics course he cotaught with Don Rubin, Bob Rosenthal spoke of the importance of thinking in terms of real-world consequences and effect size rather than p-values. He highlighted the dangers of treating p < .05 as a magic threshold, the need for quantitative synthesis, and the ways that practices like optional stopping undermine inference. His admonitions about questionable practices and recommendations for improved ones made a lasting impression on me, but one bit of advice stuck with me more than any other: He told us that, as researchers familiar with such best practices, we would occasionally have to educate journal editors who might have misconceptions.

Psychological science is catching up to Bob and the many other luminaries who have promoted improved practices over the past 60 years. As the field debates best practices and develops new tools to test our intuitions and to improve research methods and statistics, I hope that AMPPS will help researchers across the field better their own methods and research skills. I look forward to learning from the many authors and reviewers who will contribute to AMPPS.

1 “p-hacking” refers to the many ways in which researchers might flexibly select analytical procedures to shift results from p > .05 to p < .05, capitalizing on researcher degrees of freedom and flexibility in analysis procedures that could inflate false positive rates (Simmons, Nelson, & Simonsohn, 2011). “HARKing” stands for Hypothesizing After Results are Known, treating what are actually unpredicted results as if they confirmed an a priori hypothesis (Kerr, 1998).
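To make the footnote’s point concrete, here is a minimal simulation sketch (not part of the original editorial; the group sizes, batch size, stopping rule, and number of simulated studies are illustrative assumptions) showing how one researcher degree of freedom, optional stopping, can push the false-positive rate well above the nominal .05 level even when no true effect exists:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)  # fixed seed so the sketch is reproducible

def false_positive_with_peeking(n_start=20, n_max=100, step=10, alpha=0.05):
    # Simulate one "study" with optional stopping. Both groups are drawn from
    # the same normal distribution, so the null hypothesis is true by
    # construction. The researcher runs a t test after each batch of new
    # observations and stops as soon as p < alpha, so any "significant"
    # result is a false positive.
    group_a = list(rng.normal(size=n_start))
    group_b = list(rng.normal(size=n_start))
    while len(group_a) <= n_max:
        _, p = stats.ttest_ind(group_a, group_b)
        if p < alpha:
            return True  # stop early and report the "effect"
        group_a.extend(rng.normal(size=step))
        group_b.extend(rng.normal(size=step))
    return False

n_sims = 2000
rate = sum(false_positive_with_peeking() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with optional stopping: {rate:.3f}")
# A single fixed-sample test would keep the rate near .05; repeated peeking
# during data collection typically inflates it well beyond the nominal level
# under these assumed settings.

Adding other flexible choices (extra outcome measures, optional covariates, subgroup analyses) compounds the inflation, which is the mechanism Simmons, Nelson, and Simonsohn (2011) demonstrated.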

Acknowledgments

Thanks to the following people for their helpful feedback and suggestions on this editorial statement: Sarah Brookhart, Anna Brown, Pam Davis-Kean, Randy Gallistel, Torrance Gloss, Ellen Hamaker, Alex Holcombe, Mickey Inzlicht, Alison Ledgerwood, Scott Lilienfeld, Steve Lindsay, Fred Oswald, Roddy Roediger, Victoria Savalei, Yuichi Shoda, Sanjay Srivastava, Jennifer Tackett, Simine Vazire, E.J. Wagenmakers, and Tracy Waldeck.

References

Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49, 609–610.

Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145–153.

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.

De Groot, A. D. (1956/2014). The meaning of “significance” for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. Acta Psychologica, 148, 188–194.

Gallistel, C. R. (2015, September). Bayes for beginners: Probability and likelihood. APS Observer. Retrieved from https://www.psychologicalscience.org/observer/bayes-for-beginners-probability-and-likelihood

Goldin-Meadow, S. (2016, October). Preregistration, replication, and nonexperimental studies. APS Observer. Retrieved from https://www.psychologicalscience.org/observer/preregistration-replication-and-nonexperimental-studies

Harlow, L. L. (2017). The making of Psychological Methods. Psychological Methods, 22, 1–5.

Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196–217.

Lindsay, D. S. (2017). Preregistered direct replications in Psychological Science. Psychological Science, 28, 1191–1192.

Lindsay, D. S., Simons, D. J., & Lilienfeld, S. O. (2016, December). Research preregistration 101. APS Observer. Retrieved from https://www.psychologicalscience.org/observer/research-preregistration-101

Loevinger, J. (1957). Objective tests as instruments of psychological theory. Psychological Reports, 3, 635–694.

Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103–115.

Sharpe, D. (2013). Why the resistance to statistical innovations? Bridging the communication gap. Psychological Methods, 18, 572–582.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.

Simons, D. J., Holcombe, A. O., & Spellman, B. A. (2014). An introduction to Registered Replication Reports at Perspectives on Psychological Science. Perspectives on Psychological Science, 9, 552–555.

Srivastava, S. (2012, September 27). A Pottery Barn rule for scientific journals. Retrieved from https://hardsci.wordpress.com/2012/09/27/a-pottery-barn-rule-for-scientific-journals/

