Presidential Column

Looking at Psychology Through the Lens of Metascience

As psychological scientists, we think hard about the science we do. We formulate hypotheses and design studies. We observe our participants—the speed of button presses, fluctuations in blood pressure, the content of verbal reports—and we infer psychological meanings. I’d like to turn our focus to the study of the process of science itself, which has been dubbed metascience. I mean, who can resist a little navel-gazing now and then?

In early September 2019, I attended a conference that encouraged a multidisciplinary study of how scientists do science. This metascientific effort (one among many)[1] considered diverse factors that influence the questions we choose to ask, the experiments we decide to run, the priors we harbor when interpreting the data, and the conclusions we draw. (For more details, see the cover feature by meeting co-organizer Jonathan Schooler.) A dedicated band of psychological scientists, economists, data scientists, historians, and philosophers put their heads together for a couple of days to tackle basic questions of how we conduct ourselves when doing science. Many fascinating topics and insightful observations were discussed; here are a few highlights:

  • Do you search for references with Google Scholar? This wondrous tool also influences what you read, which papers you cite, and therefore how you do science. The active ingredients of this influence are currently a mystery to us, however, because Google Scholar’s algorithms are not public (West, 2019).
  • When you see statistics in a published paper, do you read them as evidence or just persuasive storytelling? The evidence suggests that readers more often treat statistics as persuasive storytelling than as evidence (Fidler, 2019). Perhaps this is one reason why people continue to trust findings that aren’t replicable (Yang, 2019).
  • Ever wonder why rival communities of researchers hold stable, mutually exclusive beliefs, despite access to exactly the same scientific findings? It turns out that mathematical models shed light on why these intense scientific polarizations persist. Here’s one reason: Each group distrusts the evidence that is taken as definitive in the other camp (O’Connor, 2019); a minimal sketch of this dynamic follows this list. I found this topic particularly captivating, given that one of my own areas of research—the nature of emotion—has been polarized for decades.
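
For the curious, here is a minimal, purely illustrative sketch of how mutual distrust alone can lock a community into stable camps. It is a bounded-confidence model in the general spirit of the formal work O’Connor described, not her actual model, and every parameter is an assumption chosen for illustration:

```python
import random

# A toy bounded-confidence model of belief dynamics (an illustrative sketch,
# not O'Connor's actual model). Each agent holds a credence between 0 and 1
# and updates only toward peers whose beliefs fall within a "trust radius" --
# that is, each camp discounts the other camp's evidence entirely.

N_AGENTS = 40        # size of the research community (assumed)
TRUST_RADIUS = 0.2   # beliefs farther apart than this are distrusted (assumed)
N_ROUNDS = 50

random.seed(1)
beliefs = [random.random() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    updated = []
    for b in beliefs:
        trusted = [x for x in beliefs if abs(x - b) <= TRUST_RADIUS]
        updated.append(sum(trusted) / len(trusted))  # average over trusted peers only
    beliefs = updated

# Distinct clusters survive indefinitely: neither camp moves, because neither
# counts the other camp's position as evidence.
print("surviving belief clusters:", sorted(set(round(b, 2) for b in beliefs)))
```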

The credibility revolution (formerly the replication crisis) dominated discussion at the conference. This was not surprising, because metascience got a big shot in the arm from concerns over whether or not psychology is, in fact, in crisis. When reasonable people looked at the evidence regarding replication rates for published studies, they disagreed on its interpretation. Some scientists recoiled from what they saw as a hurricane of replication failures, while others dismissed the storm as an illusion drawn in with a black Sharpie, like Alabama on Trump’s hurricane map. But everyone agreed that some methods-related housekeeping was in order.

The metascience conference was rife with ideas to prevent scientists from gaming the system to improve their careers. Humans are motivated animals, and science is a motivated human activity with rewards and penalties that shape its process and products. There was widespread agreement that, within the current scientific ecosystem, short-term financial and psychological incentives encourage the publication of research that is not ready for prime time. There was some disagreement, however, about whether methodological innovations derived from the credibility crisis could, on their own, substantially improve the quality of and confidence in our science.

The meeting organizers invited a panel of scientists (myself included) to discuss “reflections on metascience topics and findings.” I affectionately dubbed us “the curmudgeons panel.” Our job, as the official contrarians of the meeting, was to offer critical observations, kind of like a scientific Greek chorus. Here is a sample of my grumpy concerns:

Yes, it’s crucial for scientists to recruit large, representative samples, avoid questionable research practices such as p-hacking and HARKing (hypothesizing after results are known), and so on, but such improvements, while necessary, are not a sufficient course correction. Psychological science must do more than prevent bad methodological habits—we want to incentivize a stronger focus on longer-term scientific gains. I’d therefore like to see metascience investigations of how incentive structures influence our behavior, not just when we’re jumping through hoops to secure a job or a grant (Bergstrom, 2019), but also when we’re practicing the craft of science. Fortunately, psychological scientists know something about studying humans as they engage in motivated activities.
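
To make the point concrete, here is a toy simulation (my own sketch, not a result from the meeting, and all parameters are assumptions) of one questionable research practice, optional stopping: peeking at the p-value after every batch of participants and stopping the moment it dips below .05.

```python
import numpy as np
from scipy import stats

# Simulate a true null effect and "peek" at the data after every batch of
# participants, stopping as soon as p < .05. All numbers are illustrative.

rng = np.random.default_rng(0)
N_SIMULATIONS = 2_000
MAX_N, BATCH = 100, 10
false_positives = 0

for _ in range(N_SIMULATIONS):
    data = []                                     # the null is true: mean is 0
    while len(data) < MAX_N:
        data.extend(rng.normal(0.0, 1.0, BATCH))  # run another batch
        _, p = stats.ttest_1samp(data, popmean=0.0)
        if p < .05:                               # "significant" -- stop and publish
            false_positives += 1
            break

# The nominal rate is 5%, but repeated peeking pushes the false-positive
# rate far above that.
print(f"false-positive rate: {false_positives / N_SIMULATIONS:.1%}")
```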

And make no mistake—science is a motivated practice, even when careers are not on the line. Psychological studies of motivation find that two people faced with exactly the same sense data from their surroundings can create very different experiences and behave in very different ways. What’s true for our study participants is also true for ourselves. Our judgments and behaviors are shaped in powerful ways by our learning histories, immediate versus long-term goals, expected effort and anticipated incentives, as well as a host of other factors.

I also suspect that the credibility revolution is a symptom of a deeper concern: that many psychological scientists hold outdated assumptions about what a mind is and how a mind works. If I’m right, then we face more than a crisis of method. We have a crisis of theory that makes our experiments more fallible and our findings less robust.

For example, psychological science largely assumes that the human mind is a sequence of independent, stable mental states, each caused by a discrete, universal process. So-called perceptual processes pass information to supposed cognitive processes, which battle with alleged emotional processes for control of behavior. This relay-race view of the mind encourages us to design experiments as a series of independent stimulus-response trials, and our most popular statistical methods also make independent trials a necessary condition for analysis. Scientists have questioned this ontological commitment since the 19th century (e.g., Dewey, 1896), and converging lines of evidence now strongly suggest that a mental event is not a discrete moment in time but an evolving dynamic, in which behaviors and mental features in one moment both depend on what happened in the previous moment and form a context for what happens in the next (e.g., Hutchinson & Barrett, 2019; Rabinovich et al., 2015; Spivey, 2008). Laboratory experiments that sever one moment from the next may be replicable, but they may not generalize, meaning that they fail to move us closer to a real scientific understanding. Efforts to improve replicability may boost the rigor of stimulus-response methods, but they cannot address the question of whether those methods are appropriate in the first place. Metascience might therefore take up the issue of how our ontological commitments influence the methods we use and the experiments we construct. Historically, psychological scientists (Waller et al., 2006) and scholars in other fields (see footnote 1) have considered these issues, albeit in a less quantitative way.
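
To illustrate the statistical side of this worry, here is a small simulation (my own sketch, with assumed parameters, not drawn from the cited papers). When responses carry over from one trial to the next, an analysis that treats trials as independent rejects a true null far more often than its nominal rate:

```python
import numpy as np
from scipy import stats

# Trials generated as an AR(1) process: each response inherits part of the
# previous one, so the true mean is zero but trials are not independent.

rng = np.random.default_rng(0)
N_SIMULATIONS, N_TRIALS, CARRYOVER = 2_000, 60, 0.5  # all assumed values
false_positives = 0

for _ in range(N_SIMULATIONS):
    trials = np.empty(N_TRIALS)
    trials[0] = rng.normal()
    for t in range(1, N_TRIALS):
        trials[t] = CARRYOVER * trials[t - 1] + rng.normal()
    _, p = stats.ttest_1samp(trials, popmean=0.0)  # assumes independent trials
    if p < .05:
        false_positives += 1

# Dependent trials carry less information than their count suggests, so the
# test rejects the true null well above the nominal 5% rate.
print(f"false-positive rate: {false_positives / N_SIMULATIONS:.1%}")
```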

Let’s face it: Science is hard, and predicting and explaining human behavior may be the hardest science of all. Moreover, science always involves a moral dilemma. If you generate a series of studies that are replicable by the best current scientific standards, do you stop there and publish, or do you explore until you inevitably uncover conditions where your observations do not hold (in another analysis, another social context, another cultural context, etc.)? This dilemma is intrinsic to any science, even one with a superior incentive structure. Good science is not about uncovering true facts—it is about quantifying the degree of doubt in a set of observations (Gee, 2013). Perhaps metascience can teach us how to navigate this dilemma with curiosity.

Science is a challenging endeavor, and we are in it together. So let’s question everything, from our methods and statistical practices to the ontological commitments embedded in those practices. And who knows? Maybe a bit of formal navel-gazing, through the empirical lens of metascience, will finally usher in the full, Kuhnian-style revolution that so many of us feel is needed.

[1] A number of excellent efforts examine the process of science, such as the Society for Social Studies of Science, the History of Science Society, the Society for the History of Technology, the Philosophy of Science Association, the European Association for the Study of Science and Technology, and sections within the American Anthropological Association, the American Sociological Association, the American Political Science Association, and the National Women’s Studies Association.

References

Bergstrom, C. (2019, September). The inherent inefficiency of grant proposal competitions and the possible benefits of lotteries in allocating research funding. Talk presented at the Metascience 2019 Symposium, Palo Alto, CA.

Dewey, J. (1896). The reflex arc concept in psychology. Psychological Review, 3, 357–370.

Fidler, F. (2019, September). Barriers to conducting replications–challenges or opportunities? Talk presented at the Metascience 2019 Symposium, Palo Alto, CA.

Gee, H. (2013). The accidental species: Misunderstandings in human evolution. Chicago, IL: University of Chicago Press.

Hutchinson, J. B., & Barrett, L. F. (2019). The power of predictions: An emerging paradigm for psychological research. Current Directions in Psychological Science, 28, 280–291.

O’Connor, C. (2019, September). Scientific polarization. Talk presented at the Metascience 2019 Symposium, Palo Alto, CA.

Rabinovich, M. I., Simmons, A. N., & Varona, P. (2015). Dynamical bridge between brain and mind. Trends in Cognitive Sciences, 19, 453–461.

Spivey, M. J. (2008). The continuity of mind. New York, NY: Oxford University Press.

Waller, N. G., Yonce, L. J., Grove, W. M., Faust, D., & Lenzenweger, M. F. (Eds.). (2006). A Paul Meehl reader: Essays on the practice of scientific psychology. Mahwah, NJ: Erlbaum.

West, J. (2019, September). Echo chambers in science? Talk presented at the Metascience 2019 Symposium, Palo Alto, CA.

Yang, Y. (2019, September). The replicability of scientific findings using human and machine intelligence. Talk presented at the Metascience 2019 Symposium, Palo Alto, CA.

Comments

I hit the wrong button and my comment was not completed. I tried to say that my comments and criticisms, while in graduate school and in my first paper, were directed at theorists such as Clark Hull, among others, who used a limited experimental paradigm (a simple maze and a rodent subject base) and then generalized their findings to all species and all paradigms. Recall Egon Brunswik’s experiment with the Müller-Lyer illusion, in which he found a reversal of the laboratory effect when he extended it to the real world. I believe this study was done in 1940. There were many other practical studies in the 50s and 60s that showed the value of using “functional context,” both for educational value in literacy and in training applications in industry and the military. It seems we are still fighting that same problem of generalizing from laboratory experiments restricted to college students or rats, which prevents us from broadening our scientific understanding of cognition, behavior, and emotion within an intact, conscious organism.

I apologize for some of the imperfections in this antiquated speech recognition system.

Thanks for this reflection on (psychology as a) science. The third of the three highlights in your treatise reminded me of Bruno Latour’s (1987) book, Science in Action. I read it before I had experienced how much the publication arena in science (psychology, in my case) resembles a chicken coop. Scientists’ motivation, in the course of a few years of practice, becomes centered on prestige and territory under the cover of searching for truth.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.

The final aim of science, and also of psychology, is theory-building. Statistical control and replication are only tools for evaluating a theory. We should read or reread the publications of Paul Meehl. Where are our theories beyond the idea of something non-random? How do we formulate what could be named “theory”? An inductive approach, estimation from data, is never sufficient for doing science, as we all know. The crucial point of all our problems is the non-existence of more precise theories. All of this was said by Paul Meehl; it is nothing new, but it has been totally ignored. We should enter a period of deductive approaches, because we know the cul-de-sac of the induction period.

