Cover Story

The field of metascience has gained increasing momentum in recent years as concerns about research reproducibility have fueled a larger vision of how the lens of science can be directed toward the scientific process itself. Metascience, also known as metaresearch or the science of science, applies quantitative scientific methods to elucidate how science works and why it sometimes fails.

Metascience has its roots in the philosophy of science and the study of scientific methods. However, it is distinguished from the former by its reliance on quantitative analysis and from the latter by its broad focus on the general factors that contribute to all aspects of the scientific process. Metascience also draws on the more narrowly defined fields of journalology, which studies the academic publishing process, and scientometrics, which uses bibliographic data in scientific publications to understand the impact of research articles.

Coming Together to Study Science

In September, a symposium on metascience (metascience2019.org), funded by the Fetzer Franklin Fund and held at Stanford University, brought together nearly 500 attendees to help consolidate the field. The symposium included over 50 speakers from a remarkable variety of scientific disciplines, including psychology, philosophy, biology, sociology, network science, economics, informatics, quantitative methodology, history, statistics, political science, medicine, business, and chemical and biological engineering. I organized the event with APS Fellows Brian Nosek (University of Virginia) and Jon Krosnick (Stanford University), psychological scientist Leif D. Nelson (University of California, Berkeley), and Fetzer Franklin Fund director Jan Walleczek. Among the speakers were APS President Lisa Feldman Barrett (Northeastern University) and APS Past Board Member Simine Vazire (University of California, Davis). The symposium also included three discussion panels involving journalists, representatives of assorted funding agencies, and scientists who have been critical of some aspects of the so-called replicability crisis.

The meeting addressed pressing questions surrounding the issue of scientific reproducibility, including “What is replication, and what are its impact and value?” and “How are statistics, methods, and measurement practices affecting our capacity to identify robust findings?” However, it broadened the discussion to address a host of other aspects of the scientific process, such as “How do scientists generate ideas?” “How do scientists interpret and treat evidence?” and “What are the cultures and norms of science?” By contextualizing issues of reproducibility within the larger framework of investigating the scientific process, the metascience meeting illustrated how science is not so much in crisis as it is taking on the broader mantle of understanding and refining the scientific method.

The Stanford metascience meeting demonstrated the fundamentally interdisciplinary nature of the field. As metascientific studies have shown, interdisciplinary efforts sometimes build bridges and other times fall between the cracks. But the meeting illustrated how scientists across domains, united by shared interests, can converse about the common elements underpinning the scientific process. Although researchers seem largely in agreement regarding the value of metascience, they nevertheless have significantly disparate assessments of some of the pressing questions that metascience faces. For example, whereas some view reproducibility problems as in dire need of rectification, others see them as within the bounds of acceptability and, in most cases, naturally self-correcting.

In all this, the centrality of psychological science is unmistakable. Clearly, some of our field’s role has stemmed from the challenges that psychological science itself has faced. Problems in replication, notorious examples of fraud, and published evidence for improbable claims have all contributed to psychological scientists’ motivation to take metascience head-on. Such challenges have provided impetus for psychological scientists to foster open-science practices such as preregistration, engage in large-scale replication projects, and develop approaches for understanding how scientists can unwittingly report questionable findings.

The Psychology of Scientists

In many respects, metascience entails understanding the psychology of scientists themselves. Both the psychological assets and liabilities of scientists are central to how science is carried out. For example, deciphering the process underpinning creativity is central to understanding how scientific ideas are generated, as my colleagues APS Fellow Shelly L. Gable and Elizabeth A. Hopper (University of California, Santa Barbara) recently demonstrated in a study indicating that physicists and writers are more likely to have ideas that overcome an impasse while mind-wandering.

Conceptualizing human reasoning is critical to delineating the scientific method, as APS William James Fellow John Anderson (Carnegie Mellon University) and APS Fellow Christian D. Schunn (University of Pittsburgh) pointed out 20 years ago. Science educator Anton E. Lawson has argued that human memory must be deciphered to understand how scientists accumulate knowledge and develop scientific theories. Psychological processes also contribute to many of the challenges that scientists face. Researchers such as APS William James Fellow Anthony Greenwald (University of Washington) have described how confirmation bias can lead scientists to selectively report evidence that supports their hypotheses. Greenwald also found evidence of implicit bias contributing to scientists’ decisions about which colleagues’ work to cite in their own published research. Indeed, scores of other psychological factors — ranging from how individuals respond to rewards to how dominance hierarchies are arranged — are likely to play key roles in the unfolding of science. If the psychology of scientists influences how science is carried out, then it stands to reason that psychological science will be central to metascience.

Metascience Meets the Mainstream

One criticism of the metascience meeting involved its subtitle: “the emerging field of research on the scientific process.” Some viewed this characterization as overlooking the many lines of work on this general topic that have been carried out for decades by people such as Stanford physician-researcher John P. A. Ioannidis. Although it is certainly true that research that could be characterized as metascience has been conducted for years, the consolidation and centrality of the field is arguably a recent development. Whereas specialized scientists such as Ioannidis have been discussing problems with scientific reproducibility for some time, the mainstream research community has taken note of this challenge only recently. Furthermore, while independent lines of work have been carried out across disciplines, the consolidation of these areas into an overarching field has been limited. Thus, although it might be misleading to characterize the field of metascience as “emerging,” it certainly is consolidating and gaining momentum as never before.

The increasing role of metascience in science holds both great promise and some risk. Already its influence can be seen in the growing proportion of studies that are preregistered, as well as in many journals’ adoption of badges for preregistration and the sharing of data and materials. In addition, many scientists now understand how the previously common practice of combing through a new data set to find a “good story” and then reframing the results to tell that story can lead to erroneous conclusions. The growing salience of metascience in the field is in many respects like holding a mirror up to science and the scientists who conduct it. On the one hand, exposure to a mirror is known to enhance conscientiousness, and indeed it seems likely that the emergence of metascientific concerns is encouraging scientists to be more disciplined in the way they conduct their research. On the other hand, mirrors can also make people self-conscious, and it seems plausible that scrutiny of the scientific process could (at least sometimes) stifle scientific creativity and risk-taking.

This is, of course, a metascientific hypothesis that itself might be profitably explored, for example, by evaluating the impact of preregistration on the creativity and risk-taking of scientists. Unquestionably, when metascience is used as a platform for making attacks on the credibility of researchers whose work has failed to be replicated, both science in general and metascience in particular are bound to suffer indignities.

For better or worse, the metascience genie is out of the bottle. The zeitgeist is shifting. As metascience takes on an increasingly central role in science, it remains to be seen what discoveries it will make and what impact it will have. Nevertheless, it seems certain that new generations of scientists will face greater scrutiny while also benefiting from a deeper understanding of the scientific process.

Recommended Reading

Beaman, A. L., Klentz, B., Diener, E., & Svanum, S. (1979). Self-awareness and transgression in children: Two field studies. Journal of Personality and Social Psychology, 37, 1835–1846. https://doi.org/10.1037/0022-3514.37.10.1835

Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407–425. https://doi.org/10.1037/a0021524

Carey, B. (2011, November 3). Fraud case seen as a red flag for psychology research. The New York Times. Retrieved from https://www.nytimes.com/2011/11/03/health/research/noted-dutch-psychologist-stapel-accused-of-research-fraud.html

Doyen, S., Klein, O., Pichon, C.-L., & Cleeremans, A. (2012). Behavioral priming: It’s all in the mind, but whose mind? PLOS ONE, 7(1), Article e29081. https://doi.org/10.1371/journal.pone.0029081

Fanelli, D. (2018). Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, USA, 115, 2628–2631. https://doi.org/10.1073/pnas.1708272114

Gable, S. L., Hopper, E. A., & Schooler, J. W. (2019). When the muses strike: Creative ideas of physicists and writers routinely occur during mind wandering. Psychological Science, 30, 396–404. https://doi.org/10.1177/0956797618820626

Gilbert, D. T., King, G., Pettigrew, S., & Wilson, T. D. (2016). Comment on “Estimating the reproducibility of psychological science.” Science, 351, 1037. https://doi.org/10.1126/science.aad7243

Greenwald, A. G., Pratkanis, A. R., Leippe, M. R., & Baumgardner, M. H. (1986). Under what conditions does theory obstruct research progress? Psychological Review, 93, 216–229. https://doi.org/10.1037/0033-295X.93.2.216

Greenwald, A. G., & Schuh, E. S. (1994). An ethnic bias in scientific citations. European Journal of Social Psychology, 24, 623–639. https://doi.org/10.1002/ejsp.2420240602

Hood, W. W., & Wilson, C. S. (2001). The literature of bibliometrics, scientometrics, and informetrics. Scientometrics, 52, 291–314. https://doi.org/10.1023/A:1017919924342

Ioannidis, J. P. (1998). Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA: The Journal of the American Medical Association, 279, 281–286. https://doi.org/10.1001/jama.279.4.281

Ioannidis, J. P. (2005). Contradicted and initially stronger effects in highly cited clinical research. JAMA: The Journal of the American Medical Association, 294, 218–228. https://doi.org/10.1001/jama.294.2.218

Lawson, A. E. (2004). The nature and development of scientific reasoning: A synthetic view. International Journal of Science and Mathematics Education, 2, 307–338. https://doi.org/10.1007/s10763-004-3224-2

Liebling, B. A., & Shaver, P. (1973). Evaluation, self-awareness, and task performance. Journal of Experimental Social Psychology, 9, 297–306. https://doi.org/10.1016/0022-1031(73)90067-X

Uzzi, B., Mukherjee, S., Stringer, M., & Jones, B. (2013). Atypical combinations and scientific impact. Science, 342(6157), 468–472. https://doi.org/10.1126/science.1240474

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), Article aac4716. https://doi.org/10.1126/science.aac4716

Schunn, C. D., & Anderson, J. R. (1999). The generality/specificity of expertise in scientific reasoning. Cognitive Science, 23, 337–370. https://doi.org/10.1207/s15516709cog2303_3

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. https://doi.org/10.1177/0956797611417632

Wilson, M., & Moher, D. (2019). The changing landscape of journalology in medicine. Seminars in Nuclear Medicine, 49, 105–114. https://doi.org/10.1053/j.semnuclmed.2018.11.009

