Small Articles Fuel Big Debate

In the January 2012 issue of Perspectives on Psychological Science, two articles were published in which the authors argued that the trend toward increasingly shorter journal articles could have a negative impact on research efforts. Two of the authors, Marco Bertamini and Marcus Munafò, reiterated their arguments in an op-ed published in The New York Times on January 28. Their column is reprinted below, along with a response from the current Editor and four former Editors of Psychological Science. We invite you to read both sides and decide for yourself what “bite-sized” science means for psychological science.


The Perils of Bite-Sized Science

In recent years, a trend has emerged in the behavioral sciences toward shorter and more rapidly published journal articles. These articles are often only a third the length of a standard paper; they typically describe a single study and tend to include smaller data sets. Shorter formats are promoted by many journals, and limits on article length are stringent, in many cases as low as 2,000 words.

This shift is partly a result of the pressure that academics now feel to generate measurable output. According to the cold calculus of “publish or perish,” in which success is often gauged by counting citations, three short articles can be preferable to a single longer one.

But some researchers contend that the trend toward short articles is also better for science. Such “bite-size” science, they argue, lets results be communicated faster, written more concisely, and read more easily by editors and researchers, leading to a livelier exchange of ideas.

In a 2010 article, the psychologist Nick Haslam demonstrated empirically that, when adjusted for length, short articles are cited more frequently than other articles — that is, page for page, they get more bang for the buck. Professor Haslam concluded that short articles seem “more efficient in generating scientific influence” and suggested that journals might consider adopting short-article formats.

We believe, however, there are a number of serious problems with the short-article format.

First, we dispute the importance of Professor Haslam’s finding that short articles get more bang for the buck. Suppose that you conduct two studies, each offering evidence for the same conclusion, and you can opt to publish them either as one long article or as two short ones. Suppose, too, that the scientists who cite your work would cite it in whichever format it appeared, whether the single long article or the pair of shorter articles. Measured by citations, each of the three articles would have the same impact, but on a per-page basis the shorter articles would look more “influential.” The difference would reflect only how we measure impact, not any difference in substance or actual influence.
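To see the accounting artifact concretely, consider a minimal sketch with invented numbers (the page counts and citation counts below are assumptions for illustration, not figures from Professor Haslam’s data): a 12-page article cited 30 times scores 2.5 citations per page, while two 4-page articles cited 30 times each score 7.5.

```python
# Invented numbers: one 12-page article vs. two 4-page articles reporting
# the same two studies, with each published version cited 30 times.
citations = 30
long_pages, short_pages = 12, 4

per_page_long = citations / long_pages                # 30 / 12 = 2.5
per_page_short = (2 * citations) / (2 * short_pages)  # 60 / 8  = 7.5

print(f"long format:  {per_page_long:.1f} citations per page")
print(f"short format: {per_page_short:.1f} citations per page")
```

The short format scores three times higher per page even though nothing about the evidence, or anyone’s reading of it, has changed.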

Second, we challenge the idea that shorter articles are easier and quicker to read. This is true enough if you consider a single article, but assuming that there is a fixed number of studies carried out, shorter articles simply mean more articles. And an increase in articles can create more work for editors, reviewers and, perhaps most important, anyone looking to fully research or understand a topic.

Third, we worry that shorter, single-study articles can be poor models of science. Replication is a cornerstone of the scientific method, and in longer papers that present multiple experiments confirming the same result, replication is manifestly on display; this is not always so with short articles. (Indeed, the shorter format may discourage replication, since once a study is published its finding loses novelty.) Short articles are also more likely to suffer from “citation amnesia”: because authors have less space to discuss previous relevant work, they often don’t, which can give the impression that their findings are more novel than they actually are.

Finally, as we discuss in detail in this month’s issue of the journal Perspectives on Psychological Science, we are troubled by the link between small study size and publication bias. In theory, if several small studies on a topic, each with its own small data set, are submitted for publication, the overall published results should be equivalent to the results of a single large study on that topic using a complete data set. But according to several “meta-studies” that have been conducted, this is often not the case: rather than converging on the same result as a large study once published, the small studies give a very different result.

The reason is that small studies generate a wide variety of results, and those that produce boring results, or results contrary to what their authors predicted, are either never submitted for publication or are rejected. This doesn’t mean that the authors or the journal editors are being dishonest; it means only that they look for significant effects and give priority to novelty. Small studies are inherently unreliable; larger studies, or better still multiple studies on the same topic, are more likely to give definitive, accurate results.
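To make the mechanism concrete, here is a minimal simulation sketch; the parameters (a true effect of 0.2 standard deviations, 20 subjects per group in each small study, 2,000 per group in the large one, a p < .05 filter) are assumptions chosen for illustration, not a model of any particular literature.

```python
# Minimal sketch of publication bias: run many small studies of a weak true
# effect, "publish" only the significant positive ones, and compare the
# average published effect with the estimate from a single large study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2            # true mean difference, in standard-deviation units
n_small, n_large = 20, 2000  # subjects per group

published = []
for _ in range(5000):  # 5,000 hypothetical small studies
    control = rng.normal(0.0, 1.0, n_small)
    treated = rng.normal(true_effect, 1.0, n_small)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:   # only significant, novelty-friendly results survive
        published.append(treated.mean() - control.mean())

control = rng.normal(0.0, 1.0, n_large)
treated = rng.normal(true_effect, 1.0, n_large)

print(f"true effect:                  {true_effect:.2f}")
print(f"mean published small effect:  {np.mean(published):.2f}")  # badly inflated
print(f"single large-study estimate:  {treated.mean() - control.mean():.2f}")
```

Because the small studies have so little power, the few that cross the significance threshold overstate the true effect severalfold, while the single large study lands close to it; that is the divergence described above.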

The rise of bite-size science is worrisome. We urge editors to demand more replication of unexpected findings, and we urge the academic community to balance the weight it gives to citation counts with a greater awareness of potential publication bias.

Until then, bite-size science will be hard to swallow.

Marco Bertamini, University of Liverpool

Marcus R. Munafò, University of Bristol

Editor’s Note: “The Perils of Bite-Sized Science” was reprinted from The New York Times, January 28, © 2012 The New York Times. All rights reserved. Used by permission and protected by the Copyright Laws of the United States. The printing, copying, redistribution, or retransmission of the Material without express written permission is prohibited.

Essential Findings Can Be Concise

Recently, Perspectives on Psychological Science published two critiques of short research reports, one by Alison Ledgerwood and Jeffrey Sherman and the other by Marco Bertamini and Marcus Munafò (Vol. 7, No. 1, 2012). The criticisms were disseminated more widely by a blogger for the Chronicle of Higher Education (“Bite-Size Science, False Positives, and Citation Amnesia” by Tom Bartlett, January 3) and by an opinion piece in the New York Times “Sunday Review” section (January 29).

Both articles castigated the short-report format of Psychological Science and other journals for promoting a variety of problems, including an overemphasis on eye-catching findings, selective reporting, and piecemeal publication without theoretical integration.

Science and Nature, the world’s two most prestigious and most widely read scientific journals, are exclusively devoted to brief reports of the latest advances in theory and research. We don’t hear many complaints about the articles published in those journals. Psychological Science was expressly modeled on them. In fact, for a time, the informal motto at our journal was “We publish the psychology that Science doesn’t.”

Frankly, we don’t find anything particularly eye-catching about most of the articles that appear in Science. What we do find is an awful lot of first-rate research, concisely reported, with the occasional blockbuster that decodes the human genome or announces a new human ancestor. We would remind critics of short reports that Einstein announced that E = mc² in an article only three pages long, while Watson and Crick required just 842 words to describe the double-helix structure of DNA.

The critics admit that, page for page, short articles are cited more frequently than long ones. The reason for this is not, as they suggest, that journals like ours encourage scientists to break their research up into the least publishable unit. The real reason is that the short-report format forces scientists to report only those experiments, and those results, that really matter, and to eliminate studies and analyses that amount to little more than dotting i’s and crossing t’s. Supplemental experiments, analyses, and references that flesh out the main material can be archived online.

The critics confuse the medium with the message, and small studies with short articles. Often, the essential findings of a study involving thousands of subjects can be reported in the same concise format as those of a perception experiment with just 20. It’s for those cases that journals like Psychological Science are intended.

Current and former Editors of Psychological Science

Eric Eich, University of British Columbia

Robert V. Kail, Purdue University

James E. Cutting, Cornell University

Sam Glucksberg, Princeton University

John F. Kihlstrom, University of California, Berkeley

Comments

I have two comments, one on the reprinted article and the other on the response from the current and former editors of Psychological Science.

The first article overlooks yet another reason why the higher per-page citation rate of shorter articles might be biased: the impact of self-citations. When you publish a two-experiment study, the second experiment necessarily refers to the results of the first, but such internal references are not counted by citation indices. If you instead publish Experiment 1 first and Experiment 2 second, the latter can cite the former and thus garner additional citations that would not accrue otherwise. Given that a very large proportion of citations are in fact self-citations (let’s be honest), this adds yet another unjustified differential between the two formats.
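As a toy calculation (all counts invented for illustration): suppose a project comprises k experiments, each published paper attracts the same number of outside citations, and, when the experiments appear separately, each later paper cites all of its predecessors.

```python
# Toy counts, invented for illustration: internal cross-references inside a
# combined paper are invisible to citation indices, whereas the same
# references between separately published papers are counted.
def indexed_citations(k_experiments: int, outside_per_paper: int) -> tuple[int, int]:
    combined = outside_per_paper                          # one paper, no countable self-cites
    split = (k_experiments * outside_per_paper            # outside citations to each paper
             + k_experiments * (k_experiments - 1) // 2)  # paper j cites papers 1..j-1
    return combined, split

for k in (2, 3, 4):
    combined, split = indexed_citations(k, outside_per_paper=10)
    print(f"{k} experiments: combined paper {combined} vs. split papers {split} indexed citations")
```

Even before any per-page adjustment, splitting wins on raw counts, and the self-citation term grows quadratically with the number of experiments.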

The second article points to the short papers published in Science and Nature, and to the notably short length of some breakthrough articles (Watson and Crick’s, as well as Einstein’s). Yet these comparisons are misleading. Theoretical physics, molecular biology, and the other fields published in these and similar journals enjoy high consensus on their key concepts, which have been condensed into highly abstract, often mathematical, terms. Psychology still lacks that consensus, and even basic ideas can have more than one operational definition. These disciplinary contrasts have been documented in empirical research (see my 2009 article in Perspectives on Psychological Science, 4, 441-452). Hence, articles in psychological science must be longer if concepts are to be defined with the same precision. Einstein did not have to explain what he meant by E, m, and c, especially since the paper “only three pages long” that contained his famous equation built on a larger article published earlier that year in the same journal. To be specific, that truly epochal paper, which made the equation possible, was more than 10 times longer! We’re not there yet, unfortunately.

