A forthcoming special section of Perspectives on Psychological Science offers researchers concrete suggestions for navigating changing standards and improving the informational value of their research.
Psychological science is in the midst of a sea change. Over the last few years, our field's confidence in the status quo has been shaken by a number of largely unrelated events that happened to coincide — Daryl Bem's controversial 2011 paper purporting to show evidence for precognition, or the ability to feel the future; Jonah Lehrer's widely read New Yorker article on the effects of publication bias in science; the research fraud case against psychologist Diederik Stapel; a rising concern about direct replication; and the publication of several troubling critiques of current practices in research and publishing.
Although the ensuing crisis of confidence was by no means the first that psychological science has faced, it seems to have resonated more widely and deeply. The convergence of events within our field was situated within a broader context of similar issues emerging across a range of scientific disciplines, from cancer research to genetics to neuroscience. Meanwhile, online communication, media attention, and a series of conference symposia and journal issues kept the critiques and concerns front and center.
The first wave of responses to the sense of crisis understandably focused on problems — many of which had been raised before and even repeatedly, but which in this new context seemed more urgently and insistently to demand the field’s consideration. A chorus of critiques focused our attention on the issues of publication bias, underpowered studies, p-hacking, replication, and questionable research practices. Some embraced these critiques wholeheartedly, while others pushed back, arguing that some of the problems were overstated or oversimplified.
Importantly, this first wave of responses was loud enough and big enough to overcome the inevitable inertia of an existing system, propelling the field into forward motion. We can and surely should debate which problems are most pressing, and which solutions most suitable. But at this point, we can all agree that there are some real problems with the status quo. Many researchers feel poised to change their current practices in an effort to improve our science. And already, new initiatives and journal policies have started moving the field forward to meet some of the recently articulated challenges head on.
It is in many ways an exciting time: Our momentum has placed psychological science at the forefront of a broader movement to improve standards and practices across scientific disciplines. But of course, change also involves uncertainty. For the average researcher or student standing on the shifting sands of new journal policies, conflicting reviewer standards, and ongoing debates about best practices, the view can seem rather chaotic. Do you really need to triple all your sample sizes? What should you do with that study you ran last year with a cell size of 20? Are covariates the devil incarnate or a useful tool for increasing statistical power? Is it ever okay to peek at your data? And when you submit a manuscript, are the reviewers going to doubt your findings for being too perfect, or for being not perfect enough?
An upcoming special section of Perspectives on Psychological Science will bring together a collection of articles that seek to answer some of these questions. The goal is to provide researchers with a concrete set of best practices — that is, things we can change right now about the way we conduct and evaluate research that will make our science better. From methods for ethical data-peeking to guidelines for adequately powering studies, the articles set forth a toolbox of cutting-edge strategies to improve the informational value of new research, evaluate what we can learn from past work, and ensure that we are making our science a cumulative enterprise that converges on truth. So if you've been watching these conversations and debates unfold while wondering what on earth to actually do about them — stay tuned. We're working on the navigation system.
References and Further Reading
Chambers, C., & Munafò, M. (2013, June 5). Trust in science would be improved by study pre-registration. The Guardian. Retrieved from http://www.theguardian.com/science/blog/2013/jun/05/trust-in-science-study-pre-registration
Eich, E. (2014). Business not as usual. Psychological Science, 25, 3–6.
Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The long way from α-error control to validity proper: Problems with a short-sighted false-positive debate. Perspectives on Psychological Science, 7, 661–669.
LeBel, E. P., Borsboom, D., Giner-Sorolla, R., Hasselman, F., Peters, K. R., Ratliff, K. A., & Smith, C. T. (2013). PsychDisclosure.org: Grassroots support for reforming reporting standards in psychology. Perspectives on Psychological Science, 8, 424–432.
Ledgerwood, A., & Sherman, J. W. (2012). Short, sweet, and problematic? The rise of the short report in psychological science. Perspectives on Psychological Science, 7, 60–66.
Lehrer, J. (2010, December 13). The truth wears off: Is there something wrong with the scientific method? The New Yorker.
Open Science Framework. (2014, January 9). Badges to acknowledge open practices [Web log post]. Retrieved from https://osf.io/tvyxz/wiki/view
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
Spellman, B. A. (2013, September). Research revolution 2.0: Whence and whither? The view from a journal editor. In Best Practices in Social Experimental Psychology. Symposium conducted at the annual meeting of the Society of Experimental Social Psychology, Berkeley, California.
Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9, 59–71.
Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science, 4, 274–290.