
New Content from Advances in Methods and Practices in Psychological Science

Assessing Ego-Centered Social Networks in formr: A Tutorial
Louisa M. Reins, Ruben C. Arslan, and Tanja M. Gerlach

In this tutorial, Reins and colleagues give researchers detailed instructions about how to set up a study involving ego-centered social networks online using the open-source software formr. This software provides one way of studying ego-centered social networks, in which a focal individual reports the people they interact with in specific contexts, the attributes of these people, and their relationship with them. This tutorial includes a study template for the assessment of social networks, which may help researchers from different backgrounds to collect social-network data.

A Multilab Study of Bilingual Infants: Exploring the Preference for Infant-Directed Speech
Krista Byers-Heinlein et al.

Labs in 17 countries investigated bilingual and monolingual infants' preference for North American English (NAE) infant-directed speech ("baby talk") compared with NAE adult-directed speech. Both monolingual and bilingual infants (separate samples aged 6–9 months and 12–15 months) preferred infant-directed to adult-directed speech. Bilinguals acquiring NAE as a native language showed a stronger preference for infant-directed speech when they had more exposure to NAE, similar to what previous studies have found with monolinguals. These findings indicate that a preference for infant-directed speech may make similar contributions to monolingual and bilingual language development.

Experiment-Wise Type I Error Control: A Focus on 2 × 2 Designs
Andrew V. Frane

Frane discusses inflation of Type I errors (i.e., rejecting a true null hypothesis) in 2 × 2 factorial designs. The experiment-wise Type I error rate (EWER) is the a priori probability of at least one Type I error occurring across the tests of interest in an experiment; in experiments with multiple comparisons, it can be considerably higher than the alpha level chosen for each statistical test (conventionally, .05). Using simulations to evaluate various approaches, the author shows that conventional approaches often do not control the EWER and that other methods may be more appropriate (e.g., simulation-based adjustment, the Hommel procedure).
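The inflation Frane describes can be illustrated with a toy Monte Carlo simulation. The sketch below is not Frane's actual simulation setup: it treats the tests as independent with uniform p-values under the null (real 2 × 2 ANOVA tests are not perfectly independent), and it applies a simple Bonferroni correction rather than the procedures the article evaluates.

```python
import random

random.seed(0)

alpha = 0.05
n_tests = 3            # e.g., two main effects and the interaction in a 2x2 design
n_experiments = 100_000

any_hit = 0            # experiments with at least one uncorrected p < alpha
bonf_hit = 0           # same, after a Bonferroni-corrected threshold

for _ in range(n_experiments):
    # Under the null, independent p-values are uniform on [0, 1] (a simplification).
    ps = [random.random() for _ in range(n_tests)]
    if min(ps) < alpha:
        any_hit += 1
    if min(ps) < alpha / n_tests:
        bonf_hit += 1

print(f"simulated EWER (uncorrected): {any_hit / n_experiments:.3f}")
print(f"analytic  EWER (uncorrected): {1 - (1 - alpha) ** n_tests:.3f}")
print(f"simulated EWER (Bonferroni) : {bonf_hit / n_experiments:.3f}")
```

With three tests at alpha = .05, the analytic EWER is 1 − (1 − .05)³ ≈ .14, nearly triple the nominal level, while the Bonferroni-corrected rate stays near .05.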

A Cautionary Note on Estimating Effect Size
Don van den Bergh, Julia M. Haaf, Alexander Ly, Jeffrey N. Rouder, and Eric-Jan Wagenmakers

Estimating the size of an effect has become a popular approach to statistical inference, and researchers now commonly report effect sizes. However, van den Bergh and colleagues explain, focusing on effect size presupposes that an effect exists and ignores the null hypothesis of no effect. This "null-hypothesis neglect" can result in overestimated effect sizes. To address this, van den Bergh and colleagues propose a spike-and-slab model that incorporates the plausibility of the null hypothesis into the estimation of effect sizes.
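The core idea can be sketched numerically. The example below is a minimal one-dimensional illustration, not the authors' model: the parameter values (standard error, slab width, prior null probability) are hypothetical, chosen only to show how averaging over the null hypothesis pulls the estimate toward zero relative to a slab-only model that assumes an effect exists.

```python
import math

def normal_pdf(x, var):
    """Density of a mean-zero normal with variance `var`, evaluated at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def effect_estimates(x, se=0.2, tau=0.5, prior_null=0.5):
    """Posterior-mean effect size under a slab-only model vs. a spike-and-slab
    model. x is the observed effect size, se its standard error, tau the prior
    SD of the slab, and prior_null the prior probability of a true null
    (all values hypothetical, for illustration only)."""
    se2, tau2 = se * se, tau * tau
    # Slab-only model: assume an effect exists; shrink toward the slab prior.
    slab_only = x * tau2 / (tau2 + se2)
    # Spike-and-slab: also weight by the posterior probability of an effect.
    m_null = normal_pdf(x, se2)        # marginal likelihood under H0 (delta = 0)
    m_alt = normal_pdf(x, se2 + tau2)  # marginal likelihood under H1 (slab)
    post_null = (prior_null * m_null
                 / (prior_null * m_null + (1 - prior_null) * m_alt))
    spike_slab = (1 - post_null) * slab_only
    return slab_only, spike_slab

slab_only, spike_slab = effect_estimates(x=0.3)
print(f"slab-only estimate:      {slab_only:.3f}")
print(f"spike-and-slab estimate: {spike_slab:.3f}")  # smaller: H0 is still plausible
```

For a modest observed effect (here, 1.5 standard errors from zero), the null remains plausible, so the spike-and-slab estimate is substantially smaller than the slab-only estimate that neglects the null.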

An Introduction to Linear Mixed-Effects Modeling in R
Violet A. Brown

This tutorial is a complete guide for researchers who have no experience implementing mixed-effects modeling (also referred to as multilevel modeling) in R. Brown avoids terminology beyond what a researcher would learn in a standard graduate-level statistics course but provides references for readers interested in learning more. The data and R script used to build the models she describes are available via OSF at https://osf.io/v6qag/.
