Presidential Column

The Publication Arms Race

Lisa Feldman Barrett, APS President

Psychological science today is locked in an arms race: a heated competition for superiority and status. This competition is not fought with weapons, material wealth, or even truth. It’s fought in publications.

Published papers have always served two purposes in the economy of scientific inquiry: They convey knowledge, but they’re also the currency that buys you status and a successful scientific career. We are hired, paid, and promoted largely on the basis of the papers we publish — not just their content but also their quantity (Lawrence, 2007). Of course, there are other metrics for success — grants, invited addresses and lectures, and so on — but publications are the primary currency of scientific distinction and standing.

Over the past several decades, the publication arms race has accelerated (Bornmann & Mutz, 2015). When I started my career almost 30 years ago, a few peer-reviewed publications could secure an academic job at a storied institution. Two or three peer-reviewed publications per year all but guaranteed tenure in the US and Canada (my colleagues tell me the situation was similar in Europe and Asia). Today, a tenure-worthy CV from 20 years ago might get you an assistant professorship (and at top institutions, it better include at least one publication in Science, Nature or maybe PNAS). A CV that used to get you a job now makes you competitive for a postdoctoral fellowship. Hell, some of my colleagues won’t even accept a student for graduate training without peer-reviewed publications in hand.

It’s long been known that incentive structures that favor quantity over quality, status over substance, are a risk to the progress and integrity of science (Edwards & Roy, 2017; Geman & Geman, 2016). Albert Einstein famously noted: “An academic career in which a person is forced to produce scientific writings in great amounts creates a danger of intellectual superficiality” (Isaacson, 2008, p. 79). Take a moment and consider your own publication record: How much of what you have published so far stands the test of time? How much truly chips away at psychology’s greatest mysteries and challenges?

Our incentive structure is also a risk to intellectual freedom (Barrett, 1998). We miss opportunities for discovery every time we’re pressured to conform to the prevailing wisdom rather than innovate and risk failure. Does the publication arms race create curious adventurers in the great unknown, or are we more like government contractors, racing to secure enough funding to support our laboratories?

These issues are deeply personal for me because my students, postdoctoral fellows, and younger colleagues face the same struggles and constraints. This is not the scientific culture I want to leave them with.

Before I make my next point, which focuses on the plight of young psychological scientists, I want it understood that I completely and unambiguously support efforts to tidy up our scientific practices. I am 100% in favor of having large, representative samples with sufficient power to test hypotheses. And I think published papers should include multiple studies (where possible) — preregistered, of course — and those studies should replicate one another. These requirements are necessary for valid scientific practice. But these requirements, in the context of the publication arms race, further tighten the screws on our young scientists.

Consider the freshly minted assistant professor, building her first lab: To command the respect of her peers and eventually earn tenure, she must do more than resist the lure of p-hacking together a bunch of low-n studies to produce valid scientific results. She is expected to publish five to 10 papers a year, each of which should contain several studies with large samples from countries not considered Western, educated, industrialized, rich, and democratic (WEIRD). Imagine the number of experiments she’d have to run to achieve this outcome — not to mention the hours designing tasks, managing students and staff, analyzing data, and so on. Think about how hard it is to secure sufficient grant funding for even a couple of innovative studies, particularly for someone at the start of her career.

This catch-22 is enough to make young scientists leave the field before they even get started — which they do in increasing numbers (Gould, 2015; Anonymous Academic, 2018). It’s enough to make undergraduate students question whether they want to pursue a PhD in the first place — which they question with increasing frequency. To be honest, if I were a young scientist today, I would seriously consider investing my time and energy elsewhere.

The publication arms race is a result of cultural inheritance — scientists are trained with a set of norms, values, and practices that they pass on to their students, who then pass them on in turn (Smaldino & McElreath, 2016). A few bad apples may have deliberately published junk to get tenure, but most of the scientists who helped create the arms race did so unwittingly or unwillingly. What happens next is up to us. Sometimes we’re responsible for fixing things not because we’re to blame but because we’re the only ones who can.

How can we change the incentive structure in psychological science? It seems a task of Herculean proportions, with consequences that stretch beyond the internal workings of our science (i.e., how we hire, evaluate, promote, and fund ourselves) to the larger scientific landscape. We’ve faced onerous challenges before, most recently the replication crisis and its aftermath. Psychological science led the charge on that one, turning lemons into lemonade, creating what is now called the credibility revolution. And we can lead again. We are a hub science, after all, not just in content (Cacioppo, 2007), but also in process.

In the coming months, I’ll encourage the APS Board of Directors to take up these topics seriously. This means learning from those who study the process of science and the scientists themselves in the same way that we study the behavior of participants in our labs. It also means engaging with those who have creative ideas about how to reshape our incentive structure and mitigate the publication arms race. In the meantime, change can start with each individual.

Each of us, the next time we’re on a search or tenure-and-promotion committee, can commit to reading applicants’ papers instead of counting them. Each of us, when sitting down to write the next manuscript, or even better, to design the next set of experiments, can ask: Will this research contribute something of substance? Does it have a real possibility of moving psychological science forward or applying our science to help those in need? Each of us, when we encounter failure, can admit it freely, applauding our colleagues who do the same because being wrong sets the stage for important scientific discoveries (Firestein, 2012, 2015). And each of us, the next time we’re on a grant panel, can encourage research that has a high risk of failure but higher potential payoff: research of substantial creativity that seriously challenges the status quo. The future of psychological science depends on it.

References

Anonymous Academic (2018, February 16). Performance-driven culture is ruining scientific research. The Guardian. Retrieved from https://www.theguardian.com/higher-education-network/2018/feb/16/performance-driven-culture-is-ruining-scientific-research

Barrett, L. F. (1998). The future of emotion research. The Affect Scientist, 12, 6–8. Retrieved from https://www.affective-science.org/pubs/1998/future-of-emotion-research.pdf

Bornmann, L., & Mutz, R. (2015). Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology, 66, 2215–2222. https://doi.org/10.1002/asi.23329

Cacioppo, J. T. (2007). Psychology is a hub science. Observer, 20(8), 42. Retrieved from https://www.psychologicalscience.org/observer/psychology-is-a-hub-science

Edwards, M. A., & Roy, S. (2017). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34, 51–61. https://doi.org/10.1089/ees.2016.0223

Firestein, S. (2012). Ignorance: How it drives science. New York, NY: Oxford University Press.

Firestein, S. (2015). Failure: Why science is so successful. New York, NY: Oxford University Press.

Geman, D., & Geman, S. (2016). Science in the age of selfies. Proceedings of the National Academy of Sciences, 113, 9384–9387. https://doi.org/10.1073/pnas.1609793113

Gould, J. (2015). How to build a better PhD. Nature, 528, 22–25. https://doi.org/10.1038/528022a

Isaacson, W. (2008). Einstein: His life and universe. New York, NY: Simon and Schuster.

Lawrence, P. A. (2007). The mismeasurement of science. Current Biology, 17, R583–R585. https://doi.org/10.1016/j.cub.2007.06.014

Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3, Article 160384. https://doi.org/10.1098/rsos.160384

Comments

I recall when I first served on a tenure and promotion committee under the mistaken impression that I was indeed supposed to read the candidates’ submitted papers. I pointed out serious statistical errors in the papers of one candidate, papers on which other faculty members and students in the department were coauthors. No one on the committee could counter my claims, but I was told such critiques were unwarranted, and that we should judge the papers based on the prestige of the journal or its impact factor, the citation count for the article, the reputation of coauthors, the importance of the subject matter, etc., but NOT on our own assessment of the papers’ quality. And I was told NOT to mention my misgivings to the candidate. I think I would get the same reception now, in part because the other committee members would still consist mostly of people with weak backgrounds in statistics, even if they themselves used statistics routinely.

We can still make better use of citation metrics, though they can’t say much about recently published papers. We should get rid of the nonsensical h-index. Consider two people who over the same time period have published three papers each, author A’s receiving 4, 3, and 7 citations, respectively, and author B’s receiving 3, 250, and 1,100 citations. Both will have an h-index of 3.
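A minimal sketch (in Python, using the hypothetical citation counts from the example above, purely for illustration) of how the h-index is computed, which makes the equal-h-index outcome easy to verify:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical authors from the comment above
author_a = [4, 3, 7]        # three modestly cited papers
author_b = [3, 250, 1100]   # one minor paper, two highly cited ones

print(h_index(author_a))    # 3
print(h_index(author_b))    # 3 -- identical h-index despite very different impact
```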

Surprisingly, in an era when papers with 10–100 authors are not uncommon, no one advocates adjusting an author’s citation count for a given paper to F/n, where F is the total number of citations for the paper and n is the number of authors on the paper. Both the total counts and the adjusted total counts could then be reported.
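As a rough illustration of that proposal (the paper data below are invented, not drawn from any real author), the adjusted count is simply the sum of F/n across an author’s papers, reported alongside the raw total:

```python
# Hypothetical papers for one author: total citations F and number of authors n
papers = [
    {"citations": 120, "n_authors": 3},
    {"citations": 45,  "n_authors": 12},
    {"citations": 800, "n_authors": 60},
]

total_count = sum(p["citations"] for p in papers)
adjusted_count = sum(p["citations"] / p["n_authors"] for p in papers)

print(f"Total citations:          {total_count}")          # 965
print(f"Adjusted (F/n) citations: {adjusted_count:.1f}")   # 57.1
```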

Poor training in statistics, including the poor, out-of-date quality of most statistics texts, is a core problem afflicting editors and referees and is only slowly being remedied.

For example, there has never been a rational justification for fixing alpha for a statistical test and then describing a result as “significant” or “nonsignificant” according to the P value obtained. It is surprising how many editors still don’t understand that.

See: Stuart H. Hurlbert, Richard A. Levine, & Jessica Utts (2019). Coup de Grâce for a Tough Old Bull: “Statistically Significant” Expires. The American Statistician, 73(sup1), 352–357. https://doi.org/10.1080/00031305.2018.1543616

Other more pragmatic ways to improve:

Always get a minimum of three, perhaps four, referee reports on each paper, and send the entire set of reports to all referees. That is the best way to educate the stable of referees about their own lack of knowledge, fallibility, and inability to spot big problems. I did this whenever I served as an editor.

When serving as a referee and getting all the reviews, I occasionally — and insubordinately — wrote the authors (I always signed my own reviews) and critiqued the suggestions of other reviewers. A bit of a free-for-all, but a salutary one!

An interesting theoretical piece about what appears to be an intractable problem in various areas of science.
Notwithstanding, Lisa is correct, and sounding the alarm will hopefully generate some rational discussion at universities and funding agencies.
My perspective comes from my years in academia as an NIH funded researcher. I have been retired for almost 10 years from a chemistry/biochemistry department that is not in the top 40 research universities.
Academia provided me with a wonderful profession, to my mind, as long as I had funding for my ideas.
I was fortunate to be funded, even past formal retirement.
Very unfortunately, without that first grant and then continuous funding, I have seen that it can be a difficult profession and, of course, one that can end a promising academic career very quickly. After two, or maybe three, unfunded periods, life becomes disheartening and confidence in your best ideas starts to erode.
University startup funds for incoming faculty, at least at our institution, always seemed to be insufficient and barely enough to get the program off the ground.
I’ve always thought that continuous funding provided a needed buffer to really explore what might ordinarily be a risky lead. When I had a temporary time of lost funding, I felt like I was doing what I call “economic” research – i. e., even though an approach to the problem was not the one I would use if I had funding, it was the only one I could afford at the time.
I would suggest that young investigators immediately quit going to the national professional meetings (like American Chemical Society, ACS in my case) and participate in highly specialized conferences (CSHL, Gordon &/or FASEB conferences). Don’t be shy – meet the best investigators in the field and openly discuss with them your ideas and show your best stuff for this little world to see and critique. The circuit is actually very small, but very powerful, especially if they see great promise in your efforts.
I think one of the problems in publishing, at least in chemistry, is that there are too many journals that accept “hot,” incomplete studies. We call this “cutting the baloney thin.” It makes for a more inclusive paper later, so you can get a “two for one” in this and many other cases. The basis for these short papers is the so-called worry that you will be scooped on a big discovery. Very infrequently this argument has merit, but its use is currently out of hand.
As the author points out, for tenure, and even for grant applications, one should be asked to choose the 3–5 publications that have had the greatest impact in your area and set aside the rest. Focus on how your work has addressed major scientific problems and advanced the area.
I am hopeful that Lisa’s article will bring light to the problem and open further thoughtful discussion that may lead to a more rational solution.

Dear Lisa,
Thank you so much for this column. I am a PhD student, new to research, but old enough that I want my remaining time on this planet to count for actual positive change. Not to churn out papers that don’t really impact anything apart from my ability to climb the academic career ladder, and my chances of getting funding from the major research funders.
While I love research, and I’m good at it, I’m seriously considering not entering the academic and university area at all, because of all the points you raised above.
I will be sharing your column with my university, in the hope somebody might start listening.

Congratulations on your Presidential column

Your concern in the area of psychology is shared by many scientists around the world in all disciplines (see, for instance, DORA and the Leiden Manifesto). For quantitative evaluation there is no need for scientific committees; an accountant could do it with greater professionalism: add up everything in every category and a magic number appears that ranks everyone, people and institutions alike. Einstein, to give an example, evaluated by such criteria at the time he published his first contributions (those for which he is considered the brightest scientist of all time), would have ranked among the last, because he did not belong to any scientific institution, he had not won any grant or prize, and nobody had yet cited him. As we know, no method guarantees perfection, but the system in place would leave the young Einstein among the last places in science… Considering originality and transcendence could be a starting point to restore Einstein, and probably many other scientists, to a more prestigious place.

Important article, showing how the entire culture is essentially consumed with “likes,” in whatever form they manifest.

Another concern is how people get published in the first place. Unfortunately (and increasingly), many publications are more interested in headline or “groundbreaking” articles than in replications, which nobody does much of anyway. After all, who wants to plant the second flag on someone else’s mountain?

http://www.theinsanityhoax.com

I hope this means APS will look at its award structure, which seems to overwhelmingly favor publication (number and impact). Even the mentor award seems to focus on research-based institutions. The fact of the matter is most professional organizations, unless they are explicitly focused on teaching, tend to ignore the work that adjuncts, people at PUIs and/or underresourced universities, and others do that shapes our field in perhaps more meaningful ways than a couple of papers in Psychological Science. Further, it would be helpful for grant processes to be more open, and for people to actually get feedback regardless of whether the grant is awarded or rejected.

As a recent graduate, thank you for writing this article. In addition to changes to the incentive structures for publications, though, there needs to be a shift toward publishing well-designed studies with null results as well, and not only as pure replications or in pay-to-play journals. Even if these were only accessible in an online database, it would be beneficial for them to appear during a literature search and help with the file-drawer problem for meta-analyses.
If 100 scientists run the same study and 5 find significant results and are published, is this an accurate finding? No, but future studies and future funding are continuously based on a select set of publications that may misrepresent the true effect.
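A rough simulation of that scenario (the parameters below are assumptions for illustration, not taken from the article or any real study): 100 labs test a true null effect, only the nominally significant results get “published,” and the published effects are large in magnitude even though the true effect is zero:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_labs, n_per_group, true_effect = 100, 30, 0.0   # assumed values

published_effects = []
for _ in range(n_labs):
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:   # only "significant" results make it into the literature
        published_effects.append(treatment.mean() - control.mean())

print(f"Published: {len(published_effects)} of {n_labs} studies")
print(f"Mean |published effect|: {np.mean(np.abs(published_effects)):.2f} "
      f"(true effect: {true_effect})")
```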
Hopefully, there will be changes coming in my academic future, but unfortunately, I’m not overly optimistic.

Quite a few years ago I begged a former APA president to set up an APA searchable database for all null results. The studies would be submitted in a standard format that would include info necessary to be included in a meta-analysis, and checked briefly for sound methodology, as is done every year for thousands of posters submitted for the annual APA meeting. He thought the request made sense, but would not push it unless my division (Div. 5) was behind it. I could not drum up any enthusiasm among my quantitative colleagues, even though such a database could be very helpful to the community of psych researchers at large. Of course, the results in PSYCH_FILEDRAWER would not be taken as seriously as those published under strict peer-review, but all researchers would know that, and yet the aggregate information from the null studies could certainly be useful, with the appropriate caution applied. I am amazed that this idea has still not gained much traction, in spite of the “replicability crisis” in psychology. Until all studies are pre-registered, and perhaps even after, it makes sense to make some use of all the time, money, and effort that has gone into producing non-significant results each year.

This is an important matter to put on the table, and I admire the author for using her prestigious position to do so. For a long time I have been in favor of trying to “weigh” publications in addition to just counting them. We weigh them by asking what new knowledge the work has added to the world, and what its implications are. The prestige of the journal is likely a valid indicator of “weight,” as are citations, but as we all know the validity coefficients are not spectacularly high. Outside reviewers (e.g., for tenure cases) all too often simply regurgitate what one has already learned from the CV; thus, we read that there are alpha publications and beta dollars in grant funding, which of course we already know. What we want to learn is what the reviewer thinks about the work and its importance.
When I was a fairly new dean of a big college I asked the department chair in a humanities department what their criteria for tenure were. He said, “A book or five articles.” I responded in a somewhat less snarky way than this shorthand version will sound: “That means the P&T committee in your department can be composed of a work-study student who can attest that the book actually exists and who can count to five. All the rest is just window dressing.” It took a while to work out how to add an assessment (imperfectly, of course) of that elusive construct, quality.
Again, number of publications, reputation of the journal, and amount of grant dollars are no doubt positively correlated with quality of work. But they are not sufficient as markers of “weight,” and, after a certain minimum number (which will vary by sub-discipline), we should consider allowing junior faculty members (all of us, actually) to submit just the papers that they wish to be judged by. Until we do something like that, the arms race will continue.

A very valuable article. It’s very sad that, for many academics, no single employer (or funder) is fully responsible for providing a sensible full-time salary.
This makes the field very competitive and, as long as these institutions can incentivize people through coarse metrics such as publication count, poses a growing risk to quality scientific research.
I am impressed by the (perhaps uncomfortable) suggestion that fewer people with scientific training should continue to pursue a career in the field. We also need to break the unhelpful stereotype of the “lone genius” scientist and recognize that many research breakthroughs can only be made through healthy collaboration. Funders (speaking for myself) also need to recognize that competition increases quality only up to a point, and eventually promotes unhealthy behavior. There is no easy solution, but recognizing the problem is a good start.

An incisive critique of the cutthroat competition of “publish-or-perish” in latter-day academia, and valuable in that it exposes the problem ruthlessly. The comments above show that this article resonates with many of us in the academic trade.

Incisive and psychologically cathartic though it is, I would like to say that, like so many similarly themed articles in the past, it barely scratches the surface of the problem. After a litany of grievances about the toils and pains that the current “arms race” has created, what the author has to offer is a call for restraint: restraint in assessing applicants, restraint in designing research, and restraint in publishing papers. This purely intra-personal and moralistic adjuration may have some limited effect, of that I have no doubt. Yet without delving into the deeper, hidden, and usually invisible structural causes of the academic arms race, the call for self-restraint cannot change the general trend even an iota.

As a psychology lecturer in a Chinese university, I have read myriad similarly themed tirades against the quantitative assessment of faculty members in China. The situation in Chinese universities is similar to (and perhaps more severe than) what the author describes here, and each year a new wave of criticism of current academic assessment methods appears, to the extent that one could be driven to cynicism about everything academic. The articles I have seen back home are very similar to this one: how numbers cannot really capture an author’s contribution, how the quest for numbers suffocates real creativity. While penetrating in pointing out problems, they are all relatively weak on ways to solve them. Most call for moralistic solutions, as here, and some resort to unrealistic ones, such as the total abolition of the grants-papers-citations/impact-factor system.

The space here precludes a thorough investigation of academia, but I do believe that the real problems lie at the structural level rather than in individual researchers’ ethics. The bureaucratic management of scientific endeavor (versus the patron system commonly seen before), the relationship between academia, society, and government, and the hidden status/power hierarchy within academia (the best way to accrue prestige without improving one’s own craft is to entice more followers into an area and encourage more publications citing ourselves) all have a share in the current plight we find ourselves in. Systemic change in academia is not easy, yet in the history of science such changes do happen once in a while, often accompanying great historical events such as the Industrial Revolution or the world wars. Can we seize the next opportunity for a Great Realignment in the system?

