In the spirit of the “March Madness” college basketball tournament in the US, the Observer showcases the latest methodological innovations in the psychological research playbook.
It’s been 35 years since psychological scientist and APS James McKeen Cattell Fellow Robert Rosenthal coined the phrase “the file drawer problem” to describe scientific studies that never get reported, mainly because they fail to confirm their original hypotheses. The result, he argued, is an editorial bias toward publishing only findings that support an underlying thesis, even when other studies yield contradictory or inconclusive results.
The file drawer problem stretches across a variety of scientific disciplines, and psychology is by no means immune. But many researchers contend that in recent years the field’s shortcomings have grown more pervasive and pressing, ranging from small sample sizes to poor data documentation to a lack of reproducibility.
“Journals, including our field’s top journals, are littered with underpowered studies, p < .05 fishing expeditions, and clever or ‘sexy’ conclusions that are not justified by the data — these studies get published as long as they conform to aesthetic norms and yield clever and/or attention-getting headlines,” E. David Klonsky, director of the Personality, Emotion, and Behaviour Lab at the University of British Columbia, wrote in a recent contribution to the Observer. “I’ve often told friends and colleagues, ‘Show me five studies in our field’s top psychological science journal, and I’ll show you four with conclusions that can’t be trusted.’”
APS Board Member Lisa Feldman Barrett suggests that the growing potential for media coverage may be fueling scientists’ inclination to magnify their empirical results.
“Scientists now have competing goals,” Barrett wrote in a 2012 essay for the Observer. “One is to publish work that is newsworthy (e.g., to be mentioned in The New York Times science section). A second is to publish work that is theoretically important and makes a significant contribution to the scientific question at hand. These are not necessarily the same, and so should not be confused. But they often are.”
Barrett and Klonsky are among a growing number of psychological scientists eager to see the profession raise its research practices and publication standards to a new level of reliability. Now, APS and leaders in the field are meeting that challenge, spearheading efforts to bolster methodological integrity by promoting open research practices, enhanced methodology reporting, and incentives for study replications. These initiatives include more rigorous reporting standards at Psychological Science and the Registered Replication Report initiative launched last year by Perspectives on Psychological Science. This new type of report is designed to give researchers more incentive to pursue replications by guaranteeing publication of the studies regardless of their outcome. In the current issue of the Observer, experts flesh out the details of these initiatives for a stronger science.
By Brian A. Nosek
By Alison Ledgerwood
By Daniel J. Simons and Alex O. Holcombe
By Jon Grahe, Mark Brandt, Hans IJzerman, and Johanna Cohoon
By Geoff Cumming
By Matthias R. Mehl