First Person

Student Notebook: Informal Laboratory Practices in Developmental Psychology

We’ve all heard about the replication crisis in psychology. Proposed reasons range from small effect sizes and confounding variables to improper statistical techniques. Research questions with small effect sizes are challenging to pursue, even with the most rigorously designed studies. Confounding variables not accounted for or discussed in the paper may affect the results and hamper replication attempts (Yong, 2018). Misuse of statistical techniques, or arbitrary analytical decisions not directly related to the original question, may also play a role (Simmons, Nelson, & Simonsohn, 2011). And of course, there’s the issue of publication bias, meaning null results don’t get published and end up in a file drawer, never to make it into the mainstream conversation. However, there is another important issue that may affect replication attempts: The informal laboratory practices that don’t make it into the published paper.

Informal Laboratory Practices

Informal laboratory practices may influence study design and data collection, but they aren’t written in articles or handbooks. These practices are passed on from advisor to student, or between colleagues, and lack of communication about these practices may contribute to researchers’ inability to replicate others’ results. Researchers at the University of Groningen in the Netherlands interviewed experimental psychology researchers about what informal practices they used, and how they viewed the importance of these practices in research (Brenninkmeijer, Derksen, & Rietzschel, 2019).

Brenninkmeijer et al. (2019) identified informal practices used in studies with adults. However, they did not interview developmental researchers. The informal research practices used when working with children can be vastly different from those used with adults. Infant methods are constrained by attention, parent schedules, and physical ability. Methods used with adolescents can have similar constraints at differing levels (e.g., attention and parent schedules) and some unique ones as well (e.g., mischievous behavior on study tasks). These constraints, which vary by developmental period, shape the informal practices a lab adopts.

Professionalism: Schools, Parents, and Participants

When conducting research with children, collecting data means interacting with the participants, their parents, and their schools. Balancing professionalism and friendliness with parents and school staff is an ethical necessity, but it also helps spark parents' and teachers' interest in research on their children's development, leading to higher participation rates. With the children themselves, that balance must shift with development: As the age of the participant rises, so must the researcher's professionalism to keep the participants focused on the task.

Whether professionalism affects the data itself is up for debate; however, it is important for a lab’s reputation. Allaying caretakers’ anxieties is important for developmental researchers. This can be achieved by allowing parents to supervise their child’s participation, providing US Food and Drug Administration (FDA) facts pages on MRI safety, and being open and honest about the study and procedures. Developmental scientists rely on repeated visits to schools, museums, and other public spaces to collect data. Whether professionalism practices influence data is unclear, but they certainly influence future data collection in those spaces. In any case, these practices are not specified in research articles, and are chosen largely on the basis of intuition, leading to different practices between research groups.

Production of Good Developmental Data

Creating a task for children involves consideration of children's attentional abilities and interests. It is important to keep tasks short, with task length gradually increasing with development. When this is not considered (and even occasionally when it is), participants will quit the study early out of inattention and boredom, regardless of age. Preschoolers may quit a study by walking away; similarly, teenagers may essentially quit a study by randomly selecting buttons to move the task forward.

As with adult studies, the study script is important for the clarity of a task, but with children, it also needs to be fun. Scripts do best when they create a narrative around a game-like task. For example, when leading children into an eye tracker, some researchers will pretend as though they're taking the children into a spaceship to capture their interest. Similar game-like approaches also work with teenagers, with complexity scaled up to match their increasing cognitive abilities and maturity.

Open Science in a Developmental Context

When discussing informal scientific practices and how they might contribute to the replication crisis, open science (OS) naturally comes to mind. OS includes ideas such as preregistering experiments, making your data and analysis code public, and having open-access journals (Gezelter, 2009). The informal practices discussed here aren't things we elaborate on in journal articles, but they could be affecting our results and exacerbating the replication crisis. Explicitly stating all informal practices is not possible because many are things we might not even think about. Still, researchers have the option to begin documenting them, or to study them empirically.

Replication attempts may be affected by differing informal practices between labs; consequently, the replication crisis will not be quickly resolved. Aside from the file drawer problem, and the emphasis on novel publications for securing a good postdoctoral position or tenure, replication is a particularly difficult issue because of the pace of data collection. As in data collection from clinical or other hard-to-reach populations, developmental data collection takes a lot of time, which puts pressure on researchers who may already be overburdened. One solution is increased collaboration to take the strain off any one researcher. An example of this is the ManyBabies Project (Bergmann et al., 2016). Collaboration of this kind allows not only for faster data collection but also for increased sample variability in terms of geographic location. This isn’t the end of the issue, but it is the beginning of the solution.

Many aspects of developmental psychology make OS practices difficult, but many of those practices can still be adopted. Making data public might be difficult, but other practices, such as disseminating research, can and should always be done. Making science accessible to the public, such as in blogs or news articles and through outreach to schools and educational events, can teach people what scientists are learning and foster a more open relationship between researchers and the community. Adopting these practices also builds structure and transparency into the relationship between the scientific community and the public, which is needed to ensure we are conducting the most rigorous scientific studies we can. Open science is not an all-or-nothing game; it is instead a philosophy to follow that encourages us to do everything we can to conduct rigorous research.

Informal practices in developmental psychology, and in psychology in general, are a pervasive and relatively untouched potential contributor to the replication crisis. Printing complete study scripts, describing study environments, and detailing data-collection procedures in appendices or on a preregistration page may help to ameliorate some of the confounds that come from differing informal research practices. Brenninkmeijer and colleagues (2019) have published an important first step toward making these informal laboratory practices explicit and known. Now, it is up to us as researchers to make sure we continue to be as open and transparent as possible to help stem the tide of the replication crisis.

Acknowledgments: Special thanks to Danielle Smith, João Guassi Moreira, and Emily Neer for their input.

References

Bergmann, C., Frank, M. C., Gonzalez, N., Bergelson, E., Cristia, A., Ferguson, B., … Byers-Heinlein, K. (2016, July 1). ManyBabies. Retrieved from https://osf.io/rpw6d/

Brenninkmeijer, J., Derksen, M., & Rietzschel, E. (2019). Informal laboratory practices in psychology. Collabra: Psychology, 5(1), 1–13. https://doi.org/10.1525/collabra.221

Gezelter, D. (2009). What, exactly, is open science? Retrieved from http://openscience.org/what-exactly-is-open-science/

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

Yong, E. (2018, November 20). Psychology’s replication crisis is running out of excuses. Retrieved from https://www.psychologicalscience.org/news/psychologys-replication-crisis-is-running-out-of-excuses.html
