Where Was It Published?

Some time ago, I was part of a selection-committee meeting regarding a potential hire. When the discussion turned to a particular article by the candidate, a member of the committee asked, “Where was it published?” Nothing wrong with that question – except I had the feeling that the individual had not read the article and possibly had no intention of doing so. Knowing where the article was published was somehow supposed to serve as a proxy for reading and evaluating it.

If this were an isolated incident, it would be no big deal. But over the course of a career, more or less the same question has come up again and again, in various surface forms. I’ve probably asked it myself. The question becomes a problem when hiring, promotion, and retention decisions are made more on the basis of where the individual publishes than on the basis of what he or she publishes.

As academics, we often speak of the dangers of search and promotion committees counting the quantity at the expense of evaluating the quality of publications. We speak less often of the tyranny of evaluating articles on the basis of where they were published – of making decisions based not on what was written, but rather, on where it was published.

Some of my colleagues might believe that such a system makes perfect sense. Journals, like most things in academia, have a more or less established pecking order. There is often, although certainly not always, some validity to the placements of journals in this pecking order. All things being equal, publications in the more prestigious journals will have been more “rigorously” refereed than publications in the less prestigious journals. The problem is that, sometimes, all things are not equal.

Although impact may be correlated with where work is published, the correlation is far less than perfect. And publishing in the “right” journals is neither a necessary nor sufficient condition for impact (Sternberg & Gordeeva, 1996). Certainly where things are published counts for something. But there are seven reasons why it is a mistake, I believe, to give too much weight to the “where” rather than the “what” of the publications:

  1. Counting where something is published more than what it says. The main risk, of course, is that we become preoccupied with where things are published rather than with what is actually published. It is much easier to count publications, or to note where they appeared, than to read them, but such practices are no substitute for critical evaluation of the actual work.
  2. Conservatism of more prestigious journals. More prestigious journals often are, in my experience, more conservative. (I say this having been editor of two journals, associate editor of another two, and consulting editor of countless journals, ranging from very high in prestige to much less so.) In a sense, the more prestigious journals have to be somewhat conservative. The more “rigorous” the reviewing, the more the reviewer is likely to try to ascertain whether the article conforms to conventional standards. But these conventional standards may or may not serve in specific cases. Indeed, when I was president of the Society for General Psychology, I founded a journal, Review of General Psychology (for which Peter Salovey then became founding editor), because of my belief that some leading theory journals might be more conservative than is ideal.
  3. Risks to interdisciplinary research. There are many prestigious journals in traditionally defined fields, such as social psychology, clinical psychology, or cognitive psychology. It may be harder to find journals of equal prestige in research that crosses traditional field-based boundaries, such as research on creativity, wisdom, morality, sense of humor, or other similar multidisciplinary topics. Of course, mainline journals sometimes do accept articles in such areas. But it may be harder to get the articles published if they do not quite fit the mission of any of the traditional journals. Indeed, referees for journals often are asked to comment on how appropriate the refereed articles are for the scope of the journal. Articles that cross boundaries may seem less appropriate for journals in a particular field, and hence would be less likely to be accepted.
  4. Risks to nonparadigmatic research. Research that does not fit into current substantive or methodological paradigms may encounter difficulty being accepted by the more prestigious journals, simply because referees may be less likely to recognize the value of the work, the more it departs from customary ways of seeing or doing things (Sternberg, 1999). Researchers may become reluctant to go outside the established paradigms for fear (which may, unfortunately, be justified) that it will harm their careers.
  5. Risks to books and other non-journal forms of publication. In England, I’m told, some researchers are becoming reluctant to write books because books do not count very much in government-based departmental evaluations there, where numbers and places of publication are stressed. Here in the United States, I received, in connection with an ongoing grant, a list of journals that would be considered preferable outlets for publication of work emanating from the grant. The message of the granting agency was that publications in these officially sanctioned outlets would count more than publications in other outlets.
  6. Self-fulfilling prophecies. We may become smug that articles in more prestigious journals are cited more often, without fully realizing the extent to which articles may be more cited not because of what they say, but because of the alleged authority of where they are published.
  7. Matthew effects. The “Matthew effect” (named for a passage in the Gospel of Matthew) refers to the notion that the rich tend to get richer and the poor tend to get poorer. Journals that rank lower in prestige may have difficulty improving because people are afraid to submit to them, whereas journals that rank higher in prestige may coast, knowing that they will continue to receive many high-quality submissions almost without regard to what they do. Such high-prestige journals may end up defining what is “worth doing,” whether it is really worth doing or not.

There is nothing wrong with evaluating where a candidate has published, among other factors, in making hiring and promotion decisions. Most of us, of course, will continue to try to publish in higher rather than lower prestige journals. A problem arises, however, when place of publication is used as a substitute for carefully informed judgments regarding the impact, creativity, sophistication, rigor, or other aspects of research, or of the person doing the research. It is time to assess the extent to which this phenomenon occurs, and to develop an awareness of the need to balance consideration of where research is published against other aspects of an individual’s scientific contributions and achievements.

The place of publication is not a valid proxy for the quality and impact of the research. In the limiting case, the place of publication may become more important than the substance of the publication. When this happens, we all will be in trouble, and so will the field of psychology as a science.


References

Sternberg, R. J. (1999). A propulsion model of types of creative contributions. Review of General Psychology, 3, 83-100.

Sternberg, R. J., & Gordeeva, T. (1996). The anatomy of impact: What makes an article influential? Psychological Science, 8, 69-75.
