ESCoP Program

PRECONFERENCE EVENT
Building a Better Psychological Science
Good Data Practices and Replicability
This program occurred on 28 August 2013

 

PROGRAM

Psychological science has come of age. But the rights of a mature discipline carry with them responsibilities: to maximize confidence in our findings through good data practices and replication, while simultaneously ensuring that funding sources and publication outlets support best practices.

Leading experts explored the causes and extent of bias and error problems in science as well as potential solutions to these and other complex problems affecting replicability.

This symposium was co-sponsored by ESCoP and the Association for Psychological Science (APS).
For more on APS’s replication efforts, please read: APS Journal Seeks Labs to Participate in First Registered Replication Report Project

C. Cacciari (University of Modena, Italy; ESCoP)

Welcome

 

Axel Cleeremans (Université Libre de Bruxelles, Belgium)

Truth or Hype? Shifting Values and Norms in (Psychological) Science

As Diederik Stapel himself explains in the book he recently published about his travails, both fraud and questionable research practices may stem from the progressive and ongoing transformation of academia into a business of sorts. When publication can yield monetary rewards, when the media ask for simplistic headlines, when your entire career appears to hinge on that paper in Science, there is indeed tremendous pressure to change the norm ever so slightly. But so begins a slippery slope in which the very ideal of science — finding what is true and what is false — begins to fade away.

 

Eric-Jan Wagenmakers (University of Amsterdam, The Netherlands)

The Excitement of Conducting a Replication Study

Replication studies have a bad rep. According to popular opinion, conducting a replication study is the academic equivalent of doing the laundry: an activity that is perhaps necessary, but intrinsically boring, and certainly nobody’s hobby. Popular opinion, however, is dead wrong here. Replication studies are exciting, and in fact more so than “regular” or “innovative” research. In the first part of the presentation, Wagenmakers demonstrated by means of concrete examples how replication studies generate ample quantities of knowledge, hope, surprise, debate, media attention, anger, and fear. In the second part, Wagenmakers explained how the impact of a replication study can be maximized through proper design and analysis.

 

Hal E. Pashler (University of California, San Diego, USA)

Incentives and Practices to Promote Replicability

There is widespread agreement that to improve our science, we need less hyping of preliminary results, more efforts at replication, and better dissemination of replication results. Pashler argued that exhortation is unlikely to make much difference here, and that we should look honestly at the grave mismatch between current incentives (what people are actually rewarded for) and the behaviors that we want to see promoted. Only by changing incentives can we achieve major reform.  

 

Brian Nosek (University of Virginia, USA)

Scientific Utopia

How can existing scientific practices be improved to increase efficiency in the accumulation of knowledge, and the alignment between daily practices and the values of the academic community? Nosek discussed some present and possible futures of scientific communication and practices.

 

Discussant: Barbara A. Spellman (University of Virginia, USA)

Editor, Perspectives on Psychological Science

 
