Seven How-To Guides to Support Research Practices  


In 2024, psychological scientists proposed a range of innovative developments in research practices across the field. Here’s a list of guides, tutorials, and manuals designed to support researchers as they expand their toolboxes of research practices and methods. These resources were published in 2024, or are soon to be published, in the APS open-access journal Advances in Methods and Practices in Psychological Science. They are not listed in any particular order.

1. Validity and Transparency in Quantifying Open-Ended Data 

Clare Conry-Murray, Tal Waltzer, Fiona DeBernardi, et al. (2024) 

This research team investigates questionable research practices in the quantitative coding of open-ended data and provides strategies for improving the validity and reliability of such coding. The paper also proposes guidelines for transparent reporting, informed by concerns about replicability, content validity, and statistical validity, and includes a preregistration template to help facilitate transparent and valid coding of open-ended data.

Resource: Read the paper for guidelines and a template. 

2. Preprocessing ESM Data: A Step-By-Step Framework, Tutorial Website, R Package, and Reporting Templates 

Jordan Revol, Chiara Carlier, Ginette Lafit, et al. (2024) 

Experience-sampling-method (ESM) studies have become a popular tool to gain insight into the dynamics of psychological processes. Although the statistical modeling of ESM data has been widely studied, the preprocessing steps that precede such modeling have received relatively limited attention despite being a challenging phase. To support researchers in properly preprocessing ESM data, Revol and colleagues present a step-by-step framework, a tutorial website that provides a gallery of R code, an R package, and templates to report the preprocessing steps.  
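The preprocessing steps themselves depend heavily on the study design, but a minimal Python sketch of the kinds of operations such a framework covers (compliance filtering, person-mean centering, and lagged predictors) is shown below; the authors' own materials are in R, and the column names and compliance threshold here are purely illustrative assumptions, not taken from their package.

```python
# Minimal sketch of common ESM preprocessing steps (illustrative only; the
# column names "id", "beep", "stress" and the 50% compliance cutoff are
# hypothetical, not drawn from the authors' R package).
import pandas as pd

def preprocess_esm(df: pd.DataFrame, min_compliance: float = 0.5) -> pd.DataFrame:
    # 1. Drop participants whose compliance (share of answered beeps) is too low.
    compliance = df.groupby("id")["stress"].apply(lambda s: s.notna().mean())
    keep = compliance[compliance >= min_compliance].index
    df = df[df["id"].isin(keep)].copy()

    # 2. Person-mean center the momentary variable (within-person deviations).
    df["stress_pm"] = df.groupby("id")["stress"].transform("mean")
    df["stress_cw"] = df["stress"] - df["stress_pm"]

    # 3. Add a within-person lagged predictor, ordered by beep number.
    df = df.sort_values(["id", "beep"])
    df["stress_lag1"] = df.groupby("id")["stress"].shift(1)
    return df
```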

Resources:   

  • View the templates to help guide preprocessing.  

3. A Tutorial on Tailored Simulation-Based Sample Size Planning for Experimental Designs with Generalized Linear Mixed Models  

Florian Pargent, Timo Koch, Anne-Kathrin Kleine, et al. (2024) 

Generalized linear mixed models (GLMMs) offer a flexible statistical framework to analyze experimental data with complex data structures. However, available methods and software for a priori sample-size planning for GLMMs are often limited to specific designs. Tailored data-simulation approaches offer a more flexible alternative. Based on a practical case study, this paper provides a step-by-step tutorial and corresponding code for conducting tailored a priori sample-size planning with GLMMs. The team focuses on power analysis and also explains how to use the precision of parameter estimates to determine appropriate sample sizes. They conclude with an outlook on the increasing importance of simulation-based sample-size planning. 
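To make the simulation-based logic concrete (simulate data under assumed parameter values, fit the model, and record how often the effect of interest reaches significance), here is a hedged Python sketch using a linear mixed model from statsmodels; the paper itself works with GLMMs and supplies its own code, and every parameter value below is a hypothetical assumption.

```python
# Illustrative simulation-based power analysis for a two-condition design
# with random subject intercepts. A linear mixed model (statsmodels) stands
# in for the GLMMs used in the paper; all parameter values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_power(n_subjects, n_trials=20, effect=0.3, sd_subject=0.5,
                   sd_resid=1.0, n_sims=100, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        subj = np.repeat(np.arange(n_subjects), n_trials)
        cond = np.tile(np.repeat([0, 1], n_trials // 2), n_subjects)
        intercepts = rng.normal(0, sd_subject, n_subjects)[subj]
        y = intercepts + effect * cond + rng.normal(0, sd_resid, len(subj))
        data = pd.DataFrame({"y": y, "cond": cond, "subject": subj})
        fit = smf.mixedlm("y ~ cond", data, groups=data["subject"]).fit()
        if fit.pvalues["cond"] < alpha:
            hits += 1
    return hits / n_sims  # estimated power at this sample size

# Increase n_subjects until the estimated power reaches the target (e.g., .80).
print(simulate_power(n_subjects=40))
```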

Resources: View the tutorial and corresponding code.  

4. A Guide to Prototype Analyses in Cross-Cultural Research: Purpose, Advantages, and Risks 

Yuning Sun, Elaine Louise Kinsella, and Eric Igou (2024) 

The prototype approach provides a theoretically supported basis for novel research, detailing “typical” cognitive representations of groups, experiences, and other areas of focus. Prototype analyses are flexible enough to allow for the identification of both universal and culture-specific elements, offering a comprehensive and nuanced understanding of a concept. The researchers highlight theoretical, empirical, and practical reasons why prototype analyses offer an important tool in cross-cultural and interdisciplinary research while also addressing the potential for reducing construct bias in research that spans multiple cultural contexts. 

Resources: Read the guide to prototype analyses.  

5. Measuring Variation in Gaze Following Across Communities, Ages, and Individuals — A Showcase of the TANGO–CC 

Julia Prein, Florian Markus Bednarski, Ardain Dzabatou, et al. (in press) 

This paper describes TANGO-CC, a gaze-following task designed to measure basic social cognition across individuals, ages, and communities. The tool can be used to assess social cognition in diverse communities, and the researchers describe it as a road map for documenting community- and individual-level differences across cultures. The task was developed and assessed in one cultural setting and then adapted for cross-cultural data collection.

Resource: Visit the open-source website, where researchers can customize and use the TANGO-CC task.

6. dockerHDDM: A User-Friendly Environment for Bayesian Hierarchical Drift-Diffusion Modeling  

Wanke Pan, Haiyang Geng, Lei Zhang, et al. (in press) 

Drift-diffusion models (DDMs) are pivotal to understanding evidence-accumulation processes in decision making across psychology, behavioral economics, neuroscience, and psychiatry. The Hierarchical Drift Diffusion Model (HDDM), a Python library for hierarchical Bayesian estimation of DDMs, has been widely used among researchers. Pan and colleagues present dockerHDDM, a user-friendly computational environment for HDDM with new features that address common compatibility issues and the lack of support for Bayesian modeling functionalities. This tutorial serves as a practical, hands-on guide for researchers to leverage dockerHDDM’s capabilities in conducting efficient Bayesian hierarchical analysis of DDMs.
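Inside such an environment, fitting a basic hierarchical DDM with the HDDM library takes only a few lines; the sketch below uses HDDM's documented high-level interface, but the file name, condition column, and sampler settings are illustrative assumptions rather than the paper's own worked example.

```python
# Illustrative sketch of a hierarchical drift-diffusion fit with HDDM.
# "data.csv" and the 'stim' condition column are hypothetical; see the
# dockerHDDM tutorial for the authors' worked examples.
import hddm

# HDDM expects long-format data with 'subj_idx', 'rt', and 'response' columns.
data = hddm.load_csv("data.csv")

# Let drift rate v vary by stimulus condition; hierarchical (group-level)
# priors over subjects are the default in hddm.HDDM.
model = hddm.HDDM(data, depends_on={"v": "stim"})
model.find_starting_values()   # optimize to reasonable starting values
model.sample(2000, burn=500)   # draw MCMC samples, discarding burn-in
model.print_stats()            # posterior summaries for each parameter
```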

Resources: 

  • View the dockerHDDM installation and usage guide video.

7. VALID: A Checklist-Based Approach for Improving Validity in Psychological Research 

Susanne Kerschbaumer, Martin Voracek, Balazs Aczel, et al. (in press) 

High validity is crucial for obtaining replicable and robust study outcomes, both when exploring new questions and when replicating previous findings. Kerschbaumer and colleagues developed a comprehensive checklist to help researchers enhance and monitor the validity of their research. The VALID checklist is available as an accessible website that provides an adaptable, versatile tool researchers can tailor to their specific needs. Because adaptiveness was a focus during its development, VALID encompasses 331 unique checklist versions, making it a one-stop solution suitable for a wide range of projects, designs, and requirements.

Resource: Visit the VALID checklist website.  
