Presidential Column

Our Urban Legends: Publishing

The realization that writing these columns is expected of an APS President triggered fears I trace to a conversation with my fondly remembered Stanford colleague, Amos Tversky. About 30 years ago, Amos commented over a drink that for every 10 years of scholarly labors, one might earn up to about 10 minutes of “pontificating time” (i.e., advice-giving, ruminating about the good old days, bully-pulpit sermonizing, and so on). Needless to say, self-respecting academics who have earned those 10 minutes would rather tango in public than ever use them and risk sounding like Polonius giving advice to his son. I agreed with Amos, as usual.

My presidential-column phobia got worse when I saw how my predecessors in these columns had thoughtfully addressed every topic I might have had something to say about. But slinking away claiming a case of writer’s block, or urging less phobic colleagues to write all the columns, also doesn’t feel right. Struggling with this, I asked a friend for advice, and he suggested Googling “urban legends.” I found the 25 Hottest Topics (e.g., love, dating, autos), but none, except perhaps “embarrassment and fears,” seemed quite relevant to this column. However, it did get me thinking that we have more than enough legends of our own in our academic lives.

The First Set of Columns

In this and the next few columns, I want to consider unspoken as well as explicit professional legends, “understandings,” and assumptions regarding our diverse roles and selves: as authors; as journal readers, reviewers, and editors; as reviewers of research and grant proposals; as grant-seekers, tenure-seekers, and tenure-givers; and as researchers, teachers, and mentors.

At the top of my Hot List of our legends are the ones about what it takes to get published in different kinds of journals. Publishing legends influence many of us, and sometimes in ways that, on reflection, we may neither like, want, nor need. Some of them may even undermine efforts to build a better science. This first column addresses legends about publishing and implicit journal policies; the second turns to journal reviewing. These legends in turn link closely to those about what it takes to get the grants that enable research, and to beliefs about how grant reviewers and applicants behave in their roles. That’s the topic of the third column. A fourth column considers how these legends relate to those about academic career-building and getting tenure. My fifth column discusses some of the conditions and changes in practice and values that might facilitate the building of an increasingly cumulative psychological science, and perhaps help our urban legends keep pace with the rapid growth of our field.

Some of our urban legends, rooted in our field’s past, may once have served it well but no longer serve its present and future. As one colleague noted, urban legends or myths (e.g., that there are 18-foot alligators in the sewers of New York City) risk becoming self-fulfilling prophecies: they become true to the extent that we let them be. My hope is to encourage discussion about some important legends (or, in our jargon, “implicit role expectations”) in our science, so that we can constructively rethink and even modify them in light of the rapidly changing world of our science and profession. So, in the op-ed spirit, here goes.

Implicit Journal Policies

As an author, advisor of students, reviewer, and editor, and through 6 years of sitting (often painfully) on APA’s Publications and Communications Board watching new rules for semicolons being crafted into the next Publication Manual, I have been exposed to journal legends (and editorials) for half a century and still have not developed immunity. They convey messages, particularly to those still early in their careers, about our science and how to present research that will be welcomed into its well-guarded “highest impact” journal pages. In this column, I want to air some beliefs that might, perhaps unwittingly, influence us in our diverse roles when dealing with manuscripts intended for journal publication. I do so filled with caveats, recognizing that my sense of these implicit beliefs about journal policies is based on unscientific, subjective impressions.

I ran into such implicit understandings firsthand when I was Psychological Review’s editor, just preceding the present team. One unspoken understanding, reflected in the weight of my mail, was that getting into that journal’s well-guarded pages required 1) a manuscript running more than 80 pages (sometimes nearing 200), often accompanied by an even heavier appendix, and 2) a set of still-unpublished (hence unreviewed or negatively reviewed) studies to “definitively test” the theory or idea being proposed. It turned out that, at least during my tenure, these beliefs were misunderstandings: they described exactly what I was not looking for.

That experience sensitized me to the importance of implicit beliefs about journal policies, right or wrong, and to the recognition that, whether or not they match what the journal’s current leadership is looking for, they have consequences worth considering. Take, for example, the implicit policy that, I think I am not alone in believing, has governed editorial decisions for many years in some of the field’s very best empirical journals: For a paper to have a chance of being accepted, it must have three to six or more experiments that definitively test rigorous, new, theory-derived predictions to solve a newsworthy major problem. I refer here to journals in the areas of social psychology and person-context interactions, to which most of my experience connects, and I have little sense of the relevance of what follows for very different areas of our science. For ease of discussion, I’ll shorten this legend to the Newsworthy Definitive Solutions, or NDS, criterion. If the rumors often heard in cathartic conversations between the formal presentations at conventions are right, NDSs should be found in every issue of our best and most sought-after journals. Casual inspection suggests that many of the contributions contain from three to six studies and include interesting, often fascinating, findings. But I must confess, it seems almost as difficult to detect NDSs in our journals within my areas as it was to uncover WMDs in Iraq.

Many questions come to mind when I think about the NDS criterion. It is reasonable for our best journals to want more than a little bit of new data from an empirical contribution; hence the norm of expecting at least a few studies on a question. It is the theory-derived definitive solution part of the criterion that is perplexing. Do we have many (or any) novel and important theory-derived predictions that can be definitively tested? If so, why was it so hard for me to see them in those thousands of pages I looked at during my tenure as editor? And when and if they come along, can they be definitively tested by a few studies within the page limits of any of our empirical journals? Do we even want NDSs? Didn’t we once hear that science is the business of trying to disconfirm and change one’s favorite theories and hypotheses, and that doing so is a good thing? Or that theory building in psychology involves what Cronbach and Meehl called construct validity research, which takes far more than a handful of studies and yields far less than a definitive conclusion or solution with a QED at the end?

Fallout Risks

At this point, you might be saying, “All right, stop quibbling; maybe the NDS criterion (if it’s really there) should not be taken so literally.” The desire for NDSs may just be a correlate of a journal’s excellence and success; the greater the competition for its limited pages, the more demanding the criteria for entry become. Perhaps what the journal’s well-intentioned and hard-working guardians really mean is that, for entry into its pages, one should do a lot of work, cut it into a bunch of little studies, test something important and novel, and come out with a clear, newsworthy answer that’s as sound in its methods and conclusions as it can reasonably be at this point, so the article won’t embarrass the reviewers and editors.

This is fine, but given what we know about priming effects and the social influence process, I still worry about what happens if potential contestants for journal space, being human, believe that it takes real NDSs to gain admission into a top-flight journal. To meet that standard, might contestants become tempted to tweak their contributions to make them seem a bit more original, as if sprung entirely de novo from the author’s head? Make the titles even more embarrassingly cute? Make the studies seem more uniquely theory-driven, a priori theory-derived, and monumentally newsworthy? Might that inflate introductions and discussion sections? Could it generate even more tortured, possibly even less transparent, method sections? More oversimplified abstracts and bloated, speculative discussions? Might it encourage segmented little studies and mini-experiments when one or two big ones (or small ones) might be better? Might it make one wish even more passionately that more psychology journals would follow the lead, reflected in the success of our own Psychological Science, of other sciences that insist on short introductions; sparsely discussed, data-driven, specific conclusions; and severe page constraints? If so, it might even let final reviews and final decisions about a paper’s fate come back without the unbearable delays that can stretch into years.

Obviously, implicit journal beliefs inadvertently influence research in ways we often don’t want, just as they made the submissions, in this past editor’s view, much too long and padded. One of the subtle, but particularly unfortunate, consequences of a perceived NDS culture is that it may undermine what many of us want to encourage most. The notion that a manuscript must unequivocally find the solution to the problem it poses may well hinder the development of a scientific community in which people work on common problems and in which the science becomes cumulative, building on each other’s work rather than repackaging it and engaging in parallel play. As one young faculty member noted, “If I answer all the questions about my research program, then there is nothing else anybody can add to it, and I think this is why people reinvent the wheel over and over.”

We complain about these problems in our voices as reviewers, editors, and authors, chatting at convention meetings or running enraged into a colleague’s office. And since, regardless of our momentary roles and voices, we all share the desire to make psychological science as good as possible, at least some of these concerns may be worth thinking and talking about beyond letting off steam with our friends. Maybe we could even spend some time on them at an APS convention, inside a meeting room and not just in the hallways? ♦

