New Content from Advances in Methods and Practices in Psychological Science

The Failings of Conventional Mediation Analysis and a Design-Based Alternative
John G. Bullock and Donald P. Green

Mediation analysis quantifies the extent to which a variable transmits the effect of a treatment to an outcome. Bullock and Green explain why the most common approach, measurement-of-mediation analysis, in which outcomes are regressed on treatments and mediators to assess direct and indirect effects, is flawed. The researchers propose that scholars instead use an approach rooted in experimental design. In implicit-mediation analysis, features of the treatment are added and subtracted in ways that implicate certain mediators and not others. The researchers describe this approach and the statistical procedures it implies, and they illustrate it with examples from the recent literature.
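
To make the critique concrete, here is a minimal sketch, in Python with statsmodels, of the measurement-of-mediation procedure the article critiques; the variable names and simulated data are illustrative, not the authors' materials.

```python
# Measurement-of-mediation analysis (the approach Bullock and Green critique):
# regress the mediator on the treatment, then the outcome on both, and take
# a*b as the indirect effect. All data below are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
treatment = rng.integers(0, 2, n)                 # randomized binary treatment
mediator = 0.5 * treatment + rng.normal(size=n)   # measured, not randomized
outcome = 0.2 * treatment + 0.3 * mediator + rng.normal(size=n)

# Path a: effect of treatment on the mediator.
a = sm.OLS(mediator, sm.add_constant(treatment)).fit().params[1]

# Path b and the direct effect: outcome regressed on treatment and mediator.
X = sm.add_constant(np.column_stack([treatment, mediator]))
params = sm.OLS(outcome, X).fit().params
direct, b = params[1], params[2]

print(f"indirect effect (a*b) = {a * b:.2f}, direct effect = {direct:.2f}")
# The flaw the article targets: because the mediator is only measured, never
# randomized, unobserved confounders of the mediator-outcome relationship can
# bias b, and therefore the estimated indirect effect a*b.
```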

The Role of Human Fallibility in Psychological Research: A Survey of Mistakes in Data Management
Marton Kovacs, Rink Hoekstra, and Balazs Aczel

Data management is not immune to human error. Kovacs and colleagues surveyed 488 researchers about the type, frequency, seriousness, and outcome of data-management mistakes made by their research teams in the last 5 years. Most researchers reported that mistakes were infrequent. The most frequent mistakes led only to minor consequences, such as lost time. For almost half of the researchers, however, the most serious mistakes, albeit rare, led to moderate consequences, such as affecting some conclusions. The most frequent mistakes were attributed to poor project preparation or management and/or personal difficulties.

SampleSizePlanner: A Tool to Estimate and Justify Sample Size for Two-Group Studies
Marton Kovacs, Don van Ravenzwaaij, Rink Hoekstra, and Balazs Aczel

In this tutorial, Kovacs and colleagues introduce a web app (a Shiny app) and an R package that allow researchers to estimate and justify the sample sizes needed for well-powered studies of two independent groups. The tool offers nine procedures for determining the sample size of an independent two-group study, highlights the most important decision points for each procedure, and suggests example justifications for each decision. The tool thus also helps researchers report and justify their sample-size choices. The Shiny app is available at https://martonbalazskovacs.shinyapps.io/SampleSizePlanner, and more information is available at https://github.com/marton-balazs-kovacs/SampleSizePlanner and https://marton-balazs-kovacs.github.io/SampleSizePlanner/.
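
SampleSizePlanner itself is an R package and Shiny app, but the logic of its simplest kind of procedure, an a priori power analysis for an independent two-group t-test, can be sketched in a few lines of Python; the effect size, alpha, and power values below are illustrative assumptions, not recommendations from the tutorial.

```python
# A rough sketch (not SampleSizePlanner's own code) of an a priori power
# analysis for two independent groups. Each argument is a decision point
# that a sample-size justification would need to defend.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # smallest effect of interest (Cohen's d); assumed
    alpha=0.05,               # tolerated false-positive rate
    power=0.90,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"required sample size per group: {n_per_group:.0f}")  # about 85
```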

A Conceptual Framework for Investigating and Mitigating Machine-Learning Measurement Bias (MLMB) in Psychological Assessment
Louis Tay, Sang Eun Woo, Louis Hickman, Brandon M. Booth, and Sidney D’Mello

Machine-learning measurement bias (MLMB) can occur when a trained machine-learning model produces different predicted scores or different score accuracy for different subgroups (e.g., race, gender) even when the subgroups have the same level of the underlying construct (e.g., personality). Both biased data and biased algorithms can be sources of MLMB. Tay and colleagues explain how these potential sources of bias may manifest and develop ideas about how to mitigate them. The authors also highlight the need for new statistical and algorithmic procedures and put forward a framework for clarifying, investigating, and mitigating these complex biases.
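
As a hypothetical illustration of the kind of diagnostic the framework motivates, the Python sketch below compares a model's prediction error across two subgroups whose true construct levels are identical by construction; the data and the bias term are simulated, not taken from the article.

```python
# Simulated check for machine-learning measurement bias: the groups share the
# same construct distribution, yet the model's predictions are shifted for
# group 1, so mean prediction error differs by group at matched construct
# levels. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
group = rng.integers(0, 2, n)          # subgroup label (e.g., two demographics)
construct = rng.normal(size=n)         # true underlying construct level
predicted = construct - 0.3 * group + rng.normal(scale=0.5, size=n)

error = predicted - construct
for g in (0, 1):
    print(f"group {g}: mean prediction error = {error[group == g].mean():+.2f}")
# A systematic gap in error at matched construct levels signals measurement
# bias in the model rather than a genuine group difference.
```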

Caution, Preprint! Brief Explanations Allow Nonscientists to Differentiate Between Preprints and Peer-Reviewed Journal Articles
Tobias Wingen, Jana B. Berkessel, and Simone Dohle

Across five studies in Germany and the United States, Wingen and colleagues found that nonscientists tend to perceive research findings published as preprints (articles that have not yet been peer reviewed and thus have not undergone the established scientific quality-control process) as being as credible as findings published in peer-reviewed journal articles. However, an explanation of the peer-review process appeared to reduce the perceived credibility of preprints. The researchers therefore suggest adding a brief explanation of the peer-review concept to preprints: Doing so, they say, can address concerns about public overconfidence in preprints while preserving faster science communication and other benefits of preprints.

PsyBuilder: An Open-Source, Cross-Platform Graphical Experiment Builder for Psychtoolbox With Built-In Performance Optimization
Zhicheng Lin, Zhe Yang, Chengzhi Feng, and Yang Zhang

In this tutorial, Lin and colleagues present PsyBuilder, a general-purpose graphical experiment builder they developed for Psychtoolbox, an open-source software package for stimulus presentation and response collection that otherwise requires coding. With PsyBuilder, both new and experienced users can implement sophisticated experimental tasks through intuitive drag-and-drop, without writing any code. The code PsyBuilder generates has built-in timing-precision optimization and comes with detailed comments to facilitate customization. Lin and colleagues describe the PsyBuilder interface and walk the reader through the graphical building process using a concrete experiment.
