Academic Observer

Anonymity in Scientific Publishing

We are entering a new age of transparency and openness in science. New scientific practices that would have been unthinkable to most of us even a decade ago are now becoming commonplace. One of my recently completed projects was fully preregistered on the Open Science Framework website, complete with predictions, reasons for possible exclusion of data, the analytic techniques to be used, and so forth. Well, yes, I am fourth author on the project and one of my recent PhD students, Adam Putnam, did all the work, but I will still bask in being part of the new wave in science.

Even though I have not been at the forefront of writing about all the new practices in science, I followed along from my perch as chair of the APS Publications Committee. (I stepped down from that position a year ago, once Advances in Methods and Practices in Psychological Science (AMPPS) had been established.) I was edified by the various articles and e-mails I received, and then by the collection of blog posts and tweets forwarded by others, about the pros and cons of the new practices. I think the concept of “open science” and its transparent practices have a strong toehold in our field, at least, and are gaining momentum in all of science. The Center for Open Science (and its Open Science Framework) is one of many exciting developments. Transparent practices seem here to stay.

With one glaring exception: Transparency in publication practices. Some journals, such as the Journal of Educational Psychology, have initiated a “masked review policy, which means that the identities of both authors and reviewers are masked. Authors should make every effort to see that the manuscript itself contains no clues to their identities” (from the website). Other journals do the same. This procedure can present a problem for those people with a sustained record of research on the topic of the manuscript. Do you leave out self-citations from the references? I have seen that happen with a citation of “Author, 2011,” but of course that can itself be a clue to identity. Also, this practice of masking the authors conflicts with the idea from the open science movement of posting one’s paper for comments (free reviews) on a website before submission to a journal. Other journals permit authors to submit anonymously but do not require it, and other models are possible. I am not sure if the practice of anonymous submission is increasing, and I cannot seem to find data on the issue.

Should Reviews Be Signed? What About Action Letters?

Once a paper arrives in the editor’s office, it is either triaged (see, especially, Psychological Science in our field) or sent out for review. Most reviewers choose to be anonymous. I don’t, and I know other cognitive psychologists who sign their reviews, too, but I have been told that the practice is rare in other disciplines.

Why did I change? I edited a journal in the 1980s and became used to signing my action letters, so I saw no reason to change that practice for reviewing. I thought, and still think, that signing encourages me to write more thoughtful and respectful reviews. Of course, the practice leaves me open to receiving critical responses from recipients of my reviews. A year ago, I reviewed a paper on an old issue in the psychology of memory that did not cite relevant research, so I took a few paragraphs to provide a tutorial review that I thought might be helpful. One of the authors wrote to me and the action editor to say that he found the tone of the review offensive; in particular, he found my review “condescending.” I wrote back an apology and said I thought I was being “educational.” But I went back to my review and, sure enough, the author had a point regarding its tone. In my defense, I was annoyed at reviewing a paper on an issue (not even one that I studied) by authors who showed little appreciation of the literature. The hazard of signing reviews is having your reviews reviewed, but that’s fine with me. Transparency. Why snipe at others from behind a rock?

I recently was asked to serve as an editor for two papers for the Proceedings of the National Academy of Sciences (PNAS). Authors are identified to the editor when they submit papers. The editor-in-chief (or maybe a senior staff person) assigns each paper to a more specialized associate editor. If the paper is not triaged at these early stages (50% are), the associate editor asks someone more specialized (me, in this case) to serve as action editor for the paper. In the most recent case, I chose several reviewers, and rather quickly the reviews came back. PNAS does not permit identification of reviewers to authors, but reviewers are put on a tight deadline — 10 days — for submission of reviews. I had read the paper, so when the reviews came in, I read them a couple of times, read the paper again, and wrote an action letter.

I asked to see how the eventual package looked when it was returned to the submitting author. I found what I had been told to expect: The entire set of information came from PNAS, but neither the reviewers nor I were identified. From the authors’ perspective, some shadowy presence emerging from PNAS had made pronouncements about the publishability of their paper. In my experience, this takes anonymity to a new level, but perhaps this practice is common in some fields of science. If the paper is eventually accepted and appears in PNAS, I will be identified in a footnote as the action editor who handled it.

I wondered why there has been so little discussion of anonymity in submission and reviewing in the new transparency movement, so I wrote to several friends who have been more deeply involved in the open science movement and asked them. Had I just missed the relevant articles? I was told that their entire community is having heated debates about the merits and demerits of transparency in submission and reviewing, but more on Twitter, in blog posts, and in other venues that I do not read. Let me consider some of the issues, even if briefly.

Anonymous Submissions

Concerning submissions, the argument is that anonymous submission (assuming it works) aids researchers who are starting out, who are not at the most well-known universities, who may be from another country, and so on. Making submissions anonymous may give such investigators a shot at a fairer process than they might otherwise receive. I think this is a reasonable argument, but there are counterarguments. For one, many reviewers really bend over backward to help young researchers or ones who are not native English speakers, especially if they see a reasonably good paper that needs some reshaping. If the reviewer does not know who submitted the paper, she or he might just write a short negative review without trying to be particularly helpful. Also, sometimes knowing the author might make a difference. Suppose a paper arrives in the editor’s inbox and its message is that several experiments have provided a devastating rebuttal of Snerdley’s important theory of something-or-other that he has been pushing for years. It might be worth knowing whether Snerdley, rather than Snerdley’s long-time critic, is the author.

Yet the bias can go in the other direction. A famous researcher may get a mediocre paper accepted simply on the basis of reputation, as if the logic is, “Oh, it’s a paper by X, so it must be a good paper.” This may be less likely to occur with anonymous review — except that, of course, the editor knows who the author is and is the one making the decision about publishability. I have heard of cases in which, after a paper was triaged, the editor got a note that essentially said, “Don’t you know who I am?” And the answer is yes, and I just desk-rejected your paper.

Another issue, raised by a commentator on this column, is that anonymous submission may encourage authors to submit what are essentially rough drafts of their papers, thinking that the reviewers will not know who they are, so why go through those extra two revisions to comb out all those small problems? The reviewers will do that. That is not fair to the reviewers or the editor.

At any rate, I can see the issue of anonymous submission either way. Pros and cons exist, and as usual it depends on how one weights them. Researchers can vote with their feet (as it were) by choosing to submit or not submit to journals requiring them to make their papers anonymous.

Signing Reviews

I used to encourage people to sign reviews, but after numerous discussions, I’ve backed off. Good counterarguments exist. Signing poses a danger to young scholars who might be advising rejection of a paper by someone senior who will later be asked to write a reference letter for the reviewer’s tenure case. Or that senior person may later be an editor and get even when the young scholar submits a paper. (Yes, we would like to think these things do not happen, but we know better.) That problem exists at the senior level, too. I do think signing reviews makes the reviewer read more carefully, think harder, and be more civil. Granted, when reviewers sign, perhaps they become too polite. One problem noted by editors is that a reviewer will write a lukewarm-to-warm review but then, in the checklist of recommendations and the private note to the editor, will say the paper should definitely be rejected. This makes the editor look like a jerk for rejecting the paper despite slightly positive reviews. I try never to do that in writing reviews, and I usually do not write private comments to the editor; my review says what the editor needs to know. At any rate, I still always sign my reviews unless the journal prevents it, which some do. They take my name off, which is odd. One of my friends who also signs told me that he now refuses to review for journals that follow this practice.

In discussing the issue of signing reviews over the years, I have found some people who always sign, and some who at some point switched from not signing to signing. However, I have also discovered people who used to sign reviews but no longer do, and they give good reasons. I have come to the conclusion that it is simply an individual choice. I wrote an earlier column about reviewing in which I provided 12 tips. Perhaps the most critical is to review every paper in the same tone you would use if you were going to sign the review and be identified. Also, never, ever choose to sign your positive reviews and not your negative ones!

The Editor’s Role

What about the editor? Is there any reason for an editor not to sign his or her name, other than not wanting to get pushback? Not that I know of. Psychological Science has begun the practice of publishing the name of the action editor who accepted the paper along with the article, which I think is a good idea. AMPPS will do the same. Other journals should follow suit, in my view. Some journals publish reviewers’ names, too, but that can be a fraught practice. If someone writes a negative review and the paper is accepted because of other positive reviews, that person’s name appears with the paper as if he or she had endorsed it, too.

One interesting model comes from the BMJ, formerly the British Medical Journal, which has the most open publication practices I have found. Briefly, each article that is not triaged is considered by peer reviewers and several editors. Reviews are signed and are made public (along with the authors’ responses to the reviews) when the paper is published. Everyone involved in the process, editors and reviewers alike, is identified. This process takes transparency to a new level, one at the opposite end of the spectrum from PNAS.

The editor has a critical role in the whole process. The obvious part is that the editor makes the decision about publishability. The less obvious role is that the editor selects the reviewers. When I was associate editor and then editor of the Journal of Experimental Psychology: Learning, Memory, and Cognition in the 1980s, I felt as if I could strongly bias the eventual decision on a paper just by my selection of reviewers. Editors come to know that some reviewers dislike almost every submission, whereas others have a positivity bias. Selection of fair reviewers is a critical step, and editors tell me that it is getting harder to find good reviewers (perhaps due to the proliferation of journals).

A Thought Experiment Realized

Years ago, around 1990, Endel Tulving and I were chatting in my office at Rice University, discussing the issue of anonymity in science and the desire to make scientific submission and review anonymous “for protection.” Endel proposed the thought experiment of having two types of journals. One set would operate as journals did then, with authors and reviewers kept anonymous wherever possible. In the alternate universe of journals, authors would identify themselves to reviewers, reviewers would identify themselves to authors, and editors would of course identify themselves. These would be the journals with open, transparent editorial processes (although we may not have used those terms in 1990). He wondered whether scientific progress might be greater if we had this kind of transparency in science. The thought experiment was to set up journals of both types and see which type researchers would elect to use: where they would submit, which approach they would sign up for, and which would lead to the discovery of more new knowledge. But we agreed at the time that we would never know the outcome.

Now I think we might. Journals in our field and across science are experimenting with various degrees of transparency in the editorial process. While consulting people in writing this column, I learned about various journals in numerous disciplines. At one end, there is the BMJ model, though it is not yet employed by any psychology journal that I know of. (Collabra, the journal published by the Society for the Improvement of Psychological Science, has some of these features.) At the other end, there is the PNAS model. And we see (and will continue to see) journals experimenting with other kinds of practices, such as requiring that all submissions be vetted by being posted on a website. Some journals now forbid such posting, whereas others might encourage or even require it. In due course, over the decades, such experimentation may lead to new models of journal publishing. Which journals will receive the best submissions? What forms of publication will survive? I would like to bet on more open practices, but I am often wrong in my bets.

Comments

Reviewers should make a sharp distinction between papers with poor theoretical arguments or foundations and papers with a good theoretical foundation, but one they do not endorse because they themselves view things differently. In the latter case, they should advise publication but write a letter to the editor or a second paper with arguments against the view unfolded in the just-published paper. The author will be happy with the attention, and the reviewer can add an additional publication to his list of contributions to science.

After taking one of Roddy’s courses on academia in graduate school (based on his 2004 book, The Compleat Academic: A Career Guide), I started signing my reviews. As a graduate student and now an assistant professor, I feel that biased decisions resulting from transparency are worth the risk (to me individually, at least) and are ideally tempered by the responsibilities of the editor. Although this was only a thought experiment in the 1990s, I also feel that we as psychologists must play a critical role in research on biases related to publication. It is imperative that we conduct experimental research on the impact of gendered names, institutional affiliations, international status, etc., and set a model for transparent publication methods aligned with research from our own field.

One possible solution to the problem of identified junior reviewers receiving pushback from annoyed, more senior scientists is to have reviewing done by a team rather than an individual. If a paper exists as a preprint, which by its nature is non-confidential, then the review can be done by a whole research group – a great training exercise for the team, and it ensures a really good, thoughtful review. It’s harder to be angry with or dismissive of a whole group that has offered a collective criticism.

Hi Roddy –

As you know, I am the author you refer to in your article above who complained about your “condescending” review (in fact, I would go so far as to say it was potentially inflammatory in places, e.g., “[your theory] was dead on arrival when it was proposed”!). HOWEVER, I really appreciated that you signed your review, because your criticisms were much easier to swallow (I would have felt much worse if the sniper had stayed behind the rock!). Indeed, because I know you, I did not find your comments too inflammatory; I found them engaging, and I would love to explain why space limitations meant we couldn’t cite the papers you mentioned (citing more recent ones instead). Anyway, the point is that, as in real life, arguments are much more fruitful when “face-to-face” than when disguised by anonymity (cf. internet trolling). I don’t think transparency precludes frank exchanges, but I think it does encourage respect and reason.

Returning to your article: I accept that full transparency does carry risks, e.g., for junior researchers writing negative reviews. I liked Kate’s idea of diffusing the risk via group reviews; another idea (proposed by Niko Kriegeskorte, I think) is to give scientists a unique ID code. They don’t have to reveal this code to others in the “real world,” but the code does have to be provided together with any review they write (and reviews are always published in this model). All reviews (and papers authored) by the same ID can then be linked, and readers can establish their own ratings of the “reliability/trustworthiness” of an ID, in the same way that we currently use prior knowledge about the work of individual scientists (see https://www.frontiersin.org/articles/10.3389/fncom.2012.00094/full for further discussion). Though not fully transparent, this idea does offer some level of accountability for how scientists behave. And even if the real-world identity behind an ID can be guessed, I think the small element of uncertainty will still reduce bias. (A problem for this model is how to use the information when real-world decisions are needed, e.g., whether to offer that junior researcher a job; perhaps this can be done by providing stats on an ID without revealing the ID itself?)

I don’t know whether scientists will ever be able to avoid taking criticism of their work personally and to avoid retaliatory responses, but I agree with you that we are entering an interesting era of open science in which scientists can “vote with their feet” in selecting the publication model they prefer (fully open or fully anonymous). Though not quite there yet, I know where I’m heading.

Rik

Anything that reduces what I perceive as the irrational, biased, and unfair comments that I often see in reviews of my own papers is something I welcome and something I think is good for the discipline. Does signing my own name make me consider more carefully what I say when writing a review? Yes.

