NIH-Wide Policy Doubles Down on Scientific Rigor and Reproducibility

The US National Institutes of Health (NIH) is now assessing all research grant submissions based on the rigor and transparency of the proposed research plans.1 Previously, efforts to strengthen scientific practices had been undertaken by individual institutes, beginning in 2011 with the National Institute on Aging, which partnered with APS and the NIH Office of Behavioral and Social Science Research to begin a conversation about improving reproducibility across science.2 These early efforts were noted and encouraged by Congress. Now, the entire agency has committed to this important goal: NIH’s 2016–2020 strategic plan announces, “NIH will take the lead in promoting new approaches toward enhancing the rigor of experimental design, analysis, and reporting.”3

“This is another sign that increased attention toward rigor and transparency has become science-wide,” says APS Executive Director Sarah Brookhart. “Psychological science has pioneered the development of these practices and continues to be a model in promoting methods and incentives that encourage replication and open science.”

Emphasis on Design

The NIH policy highlights four areas central to enhancing rigor and transparency. The first area — attention to rigorous experimental design — may have the widest scope. According to NIH, scientific rigor is “the strict application of the scientific method to ensure robust and unbiased experimental design, methodology, analysis, interpretation, and reporting of results. This includes full transparency in reporting experimental details so that others may reproduce and extend the findings.”

NIH acknowledges that what constitutes robust and unbiased methods may vary from discipline to discipline.

“It is important to keep in mind that each scientific field may have its own set of best practices or standards to achieve scientific rigor,” wrote Michael Lauer, NIH Deputy Director for Extramural Research, in a blog post.4

Given these differences, a practical issue in the months to come will be the interpretation of NIH’s new policies and how they will or should guide new regulations. Researchers will have to stay tuned regarding what these policies mean for their own work. In the meantime, psychological scientists may wish to consult the NIH site “Principles and Guidelines for Reporting Preclinical Research” for guidance (in this context, “preclinical” means something roughly similar to “basic,” describing the kind of research that many psychological scientists conduct).5

The guidelines are comparable to those that authors encounter when preparing submissions to an APS journal. They recommend full reporting of statistical analyses using up-to-date methods, appropriate use of sound experimental techniques such as randomization and blinding, and inclusion of details about how sample size was determined. Each of NIH’s institutes and centers has renewed its emphasis on these matters in different ways. For instance, the National Institute of Mental Health (NIMH) released specifications in a document titled “Enhancing the Reliability of NIMH-Supported Research through Rigorous Study Design and Reporting.”6

The NIH guidelines also recommend that data and materials be made publicly available online. This coincides with APS’s Open Practice Badges program, which recognizes journal authors who make their data or materials available online with an icon that appears on the published paper. (In case you missed it, a recent analysis showed that this program dramatically increased rates of data sharing.)7

Another aspect of experimental design now under scrutiny: NIH expects researchers to consider relevant biological variables such as sex when conducting research. The main idea here is that consideration of sex may be critical to the interpretation, validation, and generalizability of research findings. For instance, a study conducted on only male subjects (historically a common practice in animal research) may be limited in generalizability. NIH also recommends consideration of other factors such as age, weight, and underlying health conditions.

(Learn more about NIH’s interest in sex as a biological variable on the website of NIH’s Office of Research on Women’s Health, which led the NIH-wide focus on this issue.)8

NIH also specifies that researchers should authenticate key biological and chemical resources when conducting proposed research. This focus comes from notable cases in which researchers believed they were using a particular resource (e.g., chemical compound, strain of mouse, etc.) but actually weren’t. This is relevant for many psychological scientists working in a variety of areas, but even those who aren’t frequent users of biological or chemical resources still can take a lesson: Consider carefully the different manipulations and methods used in a study and ensure that what’s being measured is what is intended.

Solid Grounding

NIH’s final point is that the scientific premise forming the basis of research should be sound. This involves the question, “Does the proposed research build on work that you already have reason to believe is rigorous and transparent?” A proposal grounded in previous research that used weak or unconvincing practices rests on a questionable foundation and increases the potential for spurious results.

Given that NIH is concerned about rigor and transparency, what can researchers expect when submitting grant applications? Changes have been made throughout the grant application process. The Significance and Approach sections of the Research Strategy portion of applications now ask applicants to detail the scientific premise of the project and describe how the methods proposed will achieve robust and unbiased results. This same section will ask applicants to explain how sex is factored into the research design.

In the grant review process, reviewers will be asked to indicate whether there is a strong scientific premise for the proposed research and whether the investigators have presented strategies to ensure a robust and unbiased approach. And once an application is funded, subsequent progress reports will require investigators to document the rigor of the approaches taken to ensure accurate, reliable results.

According to NIH, the increased emphasis on rigor and transparency reflects NIH’s mission to promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science. To further this mission, NIH also has announced that in 2017 it will begin evaluating institutional training grants, institutional career development awards, and individual fellowships using similar criteria.9 After all, a large part of improving research practices lies in the training of early-career researchers.

A Renewed Focus on Replicability

With these changes, NIH joins other organizations in leading a drive toward improved replicability in scientific research. APS has helped focus attention on these issues in psychological science and beyond — for more, read “APS and Open Science: Music to Our Ears” by APS Executive Director Emeritus Alan G. Kraut.10 In addition, the Social, Behavioral, and Economic Sciences division of the National Science Foundation published a report in 2015 on encouraging robust, reliable science. This report was coauthored by APS Past President John T. Cacioppo, APS Fellow Jon A. Krosnick, and others.11

Another initiative from APS is the Registered Replication Report, a type of study developed by past Editor in Chief of Perspectives on Psychological Science Barbara A. Spellman and Special Associate Editors Alex O. Holcombe and Daniel J. Simons.12 These reports are multilab replication attempts of important experiments in psychological science, often paired with comments by the authors of the original studies.

Also supporting the important goals of rigor and reproducibility are APS’s journal policies. In 2014, then Editor in Chief of Psychological Science Eric Eich established a new set of guidelines to ensure that scientific claims made by authors were justified by the methods used.13 And current Editor in Chief D. Stephen Lindsay has taken further steps to strengthen scientific practices at that journal, such as building a team of statistical advisors to provide additional statistical and methodological expertise where needed.

“I want to shout from the rooftops that Psychological Science is committed to scientific rigor,” said Lindsay in an interview.14

More recently, incoming editor of Clinical Psychological Science Scott O. Lilienfeld has affirmed that journal’s commitment to robust scientific practices. “I perceive the fact that psychological science is striving to improve itself by using the very methodological tools that psychological science has helped to create as a most welcome development.”15

In line with this commitment, Clinical Psychological Science has recently begun awarding Open Practice badges to recognize authors for making their data or materials open or preregistering their research.16

Additionally, the efforts of the Center for Open Science (COS), cofounded and directed by APS Fellow Brian A. Nosek, have been instrumental in building an infrastructure to support reproducible science.17 COS’s Open Science Framework provides storage for data, materials, and registrations of experiments. COS also has helped develop the Transparency and Openness Promotion Guidelines, of which APS is an original signatory.18 These guidelines outline ways that academic journals can encourage adherence to good research practices.

Even the US Congress has taken an interest in this critical topic, recognizing the importance of rigor and reproducibility in scientific research. In 2012, the Senate Appropriations subcommittee that funds NIH noted, “The Committee supports NIH’s effort to develop a consensus on the issues of false-positive research results.”19 More recently, Congressional language has observed, “The gold standard of good science is the ability of a researcher or research lab to reproduce a published method and finding.”20 And this past July, Congress noted that it expects an update on progress made within the scientific community on reproducibility issues in 2017 and beyond.21

NIH’s notice and new requirements suggest that scientific methodology will come under an extra level of scrutiny moving forward.

Further Reading

“Rigor and Reproducibility”

“Rigor and Reproducibility in NIH Applications Resource Chart”


1    “Implementing Rigor and Transparency in NIH & AHRQ Research Grant Applications”

2    “A Science We Can Believe In”

3    “NIH-Wide Strategic Plan”; “NIH unveils FY2016-2020 Strategic Plan”

4    “Scientific Rigor in NIH Grant Applications”

5    “Principles and Guidelines for Reporting Preclinical Research”

6    “Enhancing the Reliability of NIMH-Supported Research through Rigorous Study Design and Reporting”

7    “Psychological Science Badge Program Encourages Open Practices, Study Shows”

8    “Considering Sex as a Biological Variable: In the NIH Guide”

9    “Advanced Notice of Coming Requirements for … NIH and AHRQ Institutional Training Grants, Institutional Career Development Awards, and Individual Fellowships”

10   “APS and Open Science: Music to our Ears”

11   “Social, Behavioral, and Economic Sciences Perspectives on Robust and Reliable Science”

12   “An Introduction to Registered Replication Reports at Perspectives on Psychological Science”

13   “Business Not as Usual”

14   “Lindsay Talks Plans for Psychological Science”

15   “Lilienfeld Plans New Features for Clinical Psychological Science”

16   “Clinical Psychological Science Begins Awarding Open Practices Badges”

17   “Center for Open Science”

18   “The Transparency and Openness Promotion Guidelines”

19   “Departments of Labor, Health and Human Services, and Education, and Related Agencies Appropriation Bill, 2013”

20   “America COMPETES Reauthorization Act of 2015”

21   “Departments of Labor, Health and Human Services, and Education, and Related Agencies Appropriations Bill, 2017”