Comments Sought on Changes To Merit Review at NSF

WASHINGTON, DC– On the heels of proposed changes to the peer review system at the National Institutes of Health (NIH) (see Sept. 1996 Observer), the National Science Foundation (NSF) has also announced new criteria for evaluating the thousands of grant proposals submitted to that agency each year. NSF’s effort is unrelated to NIH’s, but because 30,000 new grant proposals are submitted to NSF annually, the new evaluative criteria will affect a large population of research grant applicants beyond those touched by the NIH changes. The scientific community is invited to comment on NSF’s proposed criteria by January 31.

To determine which grants get funded and which do not, NSF has relied on a process of external peer review based on four criteria: researcher competence, scientific merit, utility/relevance, and effect on infrastructure. Virtually all new applications are reviewed by several individuals, generating some 170,000 external peer evaluations that help guide funding decisions. The ultimate funding decision rests with an NSF program officer, who also uses these criteria.

The new draft criteria streamline the four criteria down to two: research quality and likely impact. NSF hopes the new guidelines will make the task of reviewing grant proposals clearer for reviewers and will generate more consistent and meaningful reviews.

Report Released

The NSF Task Force on Merit Review was established in May 1996 by the National Science Board (NSB), the 24-member governing body of NSF, and instructed to examine the criteria used in external review. The move followed the findings of an internal NSF staff task group, which determined that the criteria are applied unevenly by reviewers and need to be clarified and rewritten.

In making its recommendation to explore options, NSF staff relied on the results of a 1991 survey of a cross-section of reviewers. Fewer than half of the respondents said they usually commented on all four current criteria in their reviews. In fact, only one criterion, the researcher’s competence to perform the research, was addressed almost always by reviewers; the scientific merit of the proposed research was addressed in only 80 percent of reviews, utility and relevance of the research in only 40 percent, and infrastructure in only about 33 percent. Furthermore, a 1995 electronic survey examining reviewer responsiveness indicated that NSF program officers have difficulty obtaining useful input from reviewers with respect to the utility and infrastructure criteria.

In response, the Merit Review Task Force released a Discussion Report in November in which it proposed changes designed to address several problems uncovered in the current criteria: (1) The lack of clarity in some of the criteria encourages the use of “unwritten” criteria; (2) Reviewers and program officers do not apply the criteria uniformly; (3) The criteria do not facilitate the incorporation of non-research activities; (4) The criteria do not track well with the 1995 NSF Strategic Plan; and (5) There is considerable variation in the use of the criteria across NSF programs.

The Merit Review Task Force recommended to the NSB that the current four-part review criteria, originally adopted in 1981, “be simplified and that the language be harmonized with the [1994] NSF strategic plan.” Accordingly, the task force issued new two-part criteria to replace the current four-part criteria (see the accompanying boxes below).

In a Nutshell

“Our expectation is that the new criteria will provide better guidance to the program officer [and] encourage the reviewers to comment on more aspects of the proposal than they were doing under the older criteria,” commented NSF Director Neal Lane at a press briefing on the proposed changes. “It is not a matter of changing how the reviews are scored, and it certainly is not our intention to develop a numerical basis for the scoring,” Lane stated.

Are the criteria really new, or are they just a reorganization of the old evaluative standards? In a nutshell, the latter: the primary aim appears to be to ensure consistency in how reviewers apply the review criteria. At the same time, NSF wants to preserve the flexibility that reviewers and program officers have in evaluating proposals.

Although NSF’s current criteria have been in use for 15 years, most agree that they remain an effective means of determining the optimal allocation of NSF’s valuable and increasingly scarce resources. But according to the NSF task force, “from time to time, it is nevertheless prudent to examine the review criteria in the spirit of improving an already outstanding system.”

Why Now?

A number of factors have converged to prompt the NSB to initiate an assessment of these long-standing review criteria. First, NSF’s 1994 strategic plan established long-range goals and core strategies for the agency, and the revised criteria are designed to align with those goals and strategies. Second, studies suggested there is room for improvement in NSF’s system of merit review. Third, seminal events over these 15 years, notably the end of the Cold War and the rise of global economic competition, have altered the context for public support of research and education, according to NSF. Finally, NSF maintains that it is now more important than ever to understand the returns that society enjoys from NSF’s investments in research and education.

Flexibility Preserved

NSF maintains that continuing flexibility in a reviewer’s discretion in applying the criteria “may be as important as the criteria themselves.” Most reviewers will address only those elements of a proposal they feel competent to evaluate. NSF also does not pre-assign weights to the criteria; given the variation across NSF’s many different programs, any such one-size-fits-all approach would be counterproductive, according to NSF documentation.

Furthermore, NSF will continue to employ special criteria when proposals are expected to respond to the specific objectives of certain programs and activities; examples include teacher training projects and the development of large research facilities. The task force’s discussion report indicates that, with its release, NSF and the NSB intend to stimulate discussion within and outside the foundation, and they seek input and comments from all interested persons, especially current and potential grant applicants and reviewers. The report claims that, among other advantages, the new criteria will be clearer to evaluators and proposers because they explicitly recognize the importance of both intellectual quality and broader impacts, and that feedback to proposers will be more informative about proposal decisions.

How Do I Comment?

The recommendations of the merit review task force are available on the World Wide Web (at http://www.nsf.gov/nsf/homepage/proprev/meritcom.htm), where an automatic feedback form can be found. Comments can also be sent via email to meritrev@nsf.gov. In the spring of 1997, the NSB will consider the task force’s recommendations, which will incorporate an analysis of the public comments on the proposed changes; if the Board approves the recommendations, they will go into effect over the course of a year.

Science Community Input Sought

In particular, NSF seeks feedback on whether the proposed criteria are clear and whether they would be easier to use than the current criteria. It also wants to know whether the criteria are likely to elicit useful input and comments from reviewers and whether they would improve NSF’s ability to foster linkages, especially across disciplines, between research and education, and between academe and industry. The task force also seeks ideas on further improvements to the criteria.

Members of the NSB and NSF Staff Task Force on Merit Review include Warren M. Washington, Chair (National Center for Atmospheric Research), Shirley M. Malcolm (American Association for the Advancement of Science), and NSF staff Mary E. Clutter, John B. Hunt, and Paul J. Herer. Stay tuned to the Observer for a report on the final decision of the NSB, which is expected this summer.

Current NSF Review Criteria Ask Reviewers to Determine:

(1) Research performer competence. This criterion relates to the capability of the investigators, the technical soundness of the proposed approach, and the adequacy of the institutional resources available.

(2) Intrinsic merit of the research. This criterion is used to assess the likelihood that the research will lead to new discoveries or fundamental advances within its field of science or engineering, or have substantial impact on progress in that field or in other science and engineering fields.

(3) Utility or relevance of the research. This criterion is used to assess the likelihood that the research can contribute to the achievement of a goal that is extrinsic to, or in addition to, that of the research itself, and can thereby serve as the basis for new or improved technology or assist in the solution of societal problems.

(4) Effect on the infrastructure of science and engineering. This criterion relates to the potential of the proposed research to contribute to better understanding or improvement of the quality, distribution, or effectiveness of the nation’s scientific and engineering research, education, and manpower base.

Newly Proposed NSF Review Criteria Ask Reviewers to Determine:

(1) The intellectual merit and quality of the proposed activity. The following are suggested questions to consider in assessing how well the proposal meets this criterion: What is the likelihood that the project will significantly advance the knowledge base within and/or across different fields? Does the proposed activity suggest and explore new lines of inquiry? To what degree does the proposer’s documented expertise and record of achievement increase the probability of success? Is the project conceptually well designed? Is the plan for organizing and managing the project credible and well conceived? And, is there sufficient access to resources?

(2) The broader impacts of the proposed activity. The following are suggested questions to consider in assessing how well the proposal meets this criterion: How well does the activity advance discovery and understanding while concurrently promoting teaching, training, and learning? Will it create/enhance facilities, instrumentation, information bases, networks, partnerships, and/or other infrastructure? How well does the activity broaden the diversity of participants? Does the activity enhance scientific and technological literacy? And, what is the potential impact on meeting societal needs?

