Presidential Column

Our Urban Legends: Grants

My first column on our “urban legends” discussed implicit understandings and misunderstandings about what it takes to get published in different kinds of psychology journals. My second column turned to legends about the policies and behavior of journal reviewers and editors, including a wish list of what they should not do (e.g., micro-managing other people’s research). This column discusses grant giving and getting and the headache-producing decision process that faces both peer reviewers and applicants in the competition for research support, so that ultimately something might be discovered that could be published. I have no solutions for the many thorny issues, but I do have strong opinions, as most readers do, and given the importance of research support for our science, the topic seems worth candid public discussion. But be forewarned: In reading this column, the only firm conclusion you may reach is that there must be an easier way to make a living.

A New Grant World

It is old news that the grant world for psychological science in the United States has changed greatly in the last few years. On top of a general fiscal crisis for most federal funding, the structure for psychological science in America, especially at NIMH, has been transformed in response to two pressures. First, support at NIMH has faded, to put it mildly, even for work with serious claims of possible long-term translational (e.g., clinical, mental and physical health) implications; funding has turned instead to research with tangible, current translational applications. Second, advocacy by NIH’s more biologically oriented scientists to turn funding away from us and toward their own basic (e.g., molecular-level) directions undermines, and may virtually defeat, support for much traditional experimental work (for example, in social psychology), even if it happens to be brilliant.

But these pressures also open new routes that link work on basic psychological problems with new priorities, especially with developments in biological sciences. We see examples in the surge of research in brain imaging, the rapid growth of social and cognitive neuroscience, and the search for new links with work on genetics and epigenetics.  In last year’s Presidential columns, John Cacioppo discussed these new opportunities, and I will also address them later in this column. But first, I focus on grant-getting issues that have been around a long time and that have become only more timely as funding becomes more difficult.

Are We Too Tough on Each Other?

In eight years on federal study sections, and many more years in ad hoc roles reviewing research proposals seeking grant support, I was impressed by the finely honed critical skills of my psychology colleagues. Psychological scientists are well trained to search out weaknesses, particularly in method and data analysis, and to find potential trouble even when it’s well disguised or beyond the applicant’s awareness. This invariably elicits admiring attention (and some fear) on study sections, and much of the time this skill is one of the distinctive contributions of our field. But my sense is that psychologists are more relentless in their search for methodological nits to pick in order to devalue other people’s research, particularly in their own specialty areas, than are equally sophisticated researchers from other disciplines on the same panel when they deal with their own peers.

I even checked the numbers once, and although the sampling was flawed, the N too small, and the computation done on a paper napkin, it looked like it might be a real difference. Psychologists were the toughest, seeing more weaknesses in the proposals they reviewed in their areas and treating their brethren to worse scores than panelists from other fields and areas gave their colleagues. Yes, it could be that the work in psychology was weaker. But it could also be that we are trained to focus more on what could be problematic than on what could be good. On the other hand, one also hears similar concerns from those in other sciences, so it may be that reviewers in every field are more sensitized to the possible flaws in the work of their own peers.

It would be helpful to get the facts. So I raised these concerns, more than once, beginning many years ago when I first served on, and then chaired, an NIMH study section. It turns out that my perception was widely shared, though I’ve been advised (by Alan Kraut of APS) that some preliminary looks at the question over the years have not confirmed it. Moreover, some changes have been made that may help. For some time now, before the scores from different study sections are put into any funding order, they are “percentiled,” so that the top, say, 10 percent of one section compete with the top 10 percent of all other sections. And those percentiled scores are computed against that same study section’s ratings from its last year of reviews. These facts notwithstanding, I still worry that being too tough on our colleagues may support a vicious cycle in which, years later, we are told that there is less good work coming from psychological science: Just look at the poor priority scores we gave it.
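For readers curious about the mechanics, here is a minimal sketch of how such percentiling might work. Everything in it is hypothetical (the scores, the proposal names, and the simple rank formula), and NIH’s actual procedure is more elaborate; the only idea carried over from above is that a section’s current raw scores are ranked against that same section’s recent ratings.

```python
# Illustrative sketch only; not NIH's actual percentiling formula.

def percentile(score, baseline):
    """Percent of baseline scores at or below `score` (lower raw score = better)."""
    return 100.0 * sum(s <= score for s in baseline) / len(baseline)

# Hypothetical raw priority scores; lower is better, as in NIH scoring.
last_year = [1.2, 1.5, 1.8, 2.0, 2.3, 2.7, 3.0, 3.4]   # same section, prior rounds
this_round = {"Proposal A": 1.4, "Proposal B": 2.1, "Proposal C": 3.2}

# Each proposal is ranked against its own section's pooled distribution,
# so a "tough" section's low raw scores need not doom its best applications.
baseline = last_year + list(this_round.values())
for name, score in sorted(this_round.items(), key=lambda kv: kv[1]):
    print(f"{name}: raw {score} -> {percentile(score, baseline):.0f}th percentile")
```

On this toy scheme, a proposal scored harshly in an unusually tough section can still land near the top of that section’s own distribution, which is what lets the top 10 percent of every section compete on an equal footing.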

The Big Question: Evaluating Significance/Importance

Perhaps the critical focus on methods helps reviewers to avoid the toughest judgment: the importance of the work. Or does the judgment of perceived importance come first, with the nit-picking and the search for “more concerns” serving to justify it? I have no clue. But of course it’s these importance judgments that are most difficult to make, and on which consensus is hardest to achieve, unless the applicant is Einstein or triaged away.

Thirty-five years ago, the Social Science Research Committee of NIMH investigated the conditions that affect research quality (Cartwright, 1973). Cartwright’s report, “Determinants of Scientific Progress,” could have been written today. It found, not surprisingly, that evaluation committees for research rarely have serious problems reaching consensus on questions like clarity of the objectives, methodological sophistication, feasibility of the planned design, and qualifications of the researchers. The troubles start (and never stop) with judgments about the potential significance/importance of the work, on which the fate of the manuscript or proposal ultimately hangs. Cartwright points out that any question about the potential significance of the work implicitly entails an evaluation “…of the larger field of which the…work is but a small part” (American Psychologist, March 1973, p. 222). As Shakespeare said, “there’s the rub” — and, as Cartwright said, that evaluation is relegated to intuition and subjective judgment. And that, as we all know, depends as much on the reviewer as on the reviewed.

Interdisciplinary Research Opportunities

The new grant culture in the United States closes many doors, but it opens some new ones for interdisciplinary work by teams of researchers addressing important problems that link to the brain and biological sciences. Interdisciplinary teams can work at different levels of analysis, including the biological, and can coordinate different laboratories and sites. Funding opportunities may increase for teams that link to relevant advances in neuroscience, biology, genetics, and so on (e.g., as in some current mind-brain-behavior investigations of executive functions), and their chances improve if they also have direct translational aspects. In such collaborations, psychology can function as a genuine hub, and the arrangement can open the way to funding for closely coordinated program projects or multiple grants by the team. But a move toward big science requires a sharp shift away from the model, traditional in our history and training, of a single psychologist and his or her current students doing lots of different studies.

Grantsmanship and Grantswomanship Legends: The Games We Play

There’s probably truth to the legend that for research to get funded, the applicant must already have done much (even most?) of the proposed work. Doing some of the research in advance of the submission may be a good way (and, outside of contract work, perhaps the only way) to provide the exquisitely detailed evidence needed to show skeptical reviewers that “preliminary studies indicate that these plans can’t fail” (ha, ha). And then there is the belief that you practically have to write the pink sheet for the reviewer within your proposal, making a compelling case that yours is a “must fund,” in order to have a chance at some support before you decide to give up and develop more hobbies.

These legends raise other possibilities: For seasoned investigators seeking continuing support on closely related problems, it might make sense for reviewers to pay more attention to the current track record, the progress reports, and in-press work, requiring less detail and fewer proofs of continuing competence on matters where the researchers’ competence and the value of the research program have already been clearly established. For new investigators, look for and support signs of promise, and be especially generous, offering relatively easy and rapid start-up funding for short, well-targeted proposals on promising questions.

Other legends say that regardless of the twisting and tweaking undergone in the revision process, once funded, researchers do what they think best at the time anyway (e.g., in light of what they discover as the work progresses), taking actions that couldn’t have been predicted at the beginning. And that’s what they usually do — if not, something strange is probably happening. If that’s the reality, maybe requests for revision should stop being the default decision for everything that isn’t self-evidently great or terrible. In many cases, revisions may be mostly a time-consuming exercise, especially when specific improvements can be suggested within the initial review that reasonable investigators will take seriously. If additional information is needed to answer questions on which the decision hangs, a mechanism can be imagined (e.g., e-mail or a conference call) to let the questions be asked and answered without waiting for the next full round, which is costly for everybody.

Finally, let’s not forget that one good way to try to improve study section reviewer procedures is to serve on one, and to stay on it a while before deciding to quit because there’s just too much to do in applying for your own next grant. ♦


