Using an Automated Wizard to Process Minimal-Risk Research

Issues of mission creep and excess administrative burden are abundant in the area of human-subjects research protections (e.g., Fost & Levine, 2007; Grady, 2010; Gunsalus et al., 2006, 2007; Joffe, 2012). Although there is no question that protecting research participants is essential, the procedures for providing this assurance are often cumbersome and time-consuming yet contribute little to the intended goal. This is especially problematic for the class of minimal-risk research that qualifies for exempt status on the basis of categories described in the Office of Human Research Protections (OHRP) regulations, where the cost of extensive review far exceeds the benefit given the low level of risk. The Notice of Proposed Rulemaking published in the Federal Register on September 8, 2015, stated that “a web-based decision tool … will provide determination of whether or not a study is exempt” (p. 53936). Although such a web-based tool, or “wizard,” was not mandated in the final rule (partly because of the lack of a widely used tool), the regulatory environment is now supportive of using this kind of wizard to determine which research proposals should be exempt.

In 2014, the Federal Demonstration Partnership, a cooperative initiative among 10 federal agencies and 154 institutional recipients of federal funds, launched a pilot study to provide a proof of concept that an automated wizard could allow investigators to accurately self-determine exempt status. The wizard was created on the basis of OHRP guidelines, adhering closely to decision flowcharts that the OHRP made available on its website. The pilot, which was completed in 2017, included 542 case studies from 10 volunteer universities. Each case study was processed using the wizard and was also independently reviewed by the university’s institutional review board (IRB). On average, investigators required less than 15 minutes to complete the wizard questions and receive a decision, suggesting the potential for a vast savings of investigator and staff time if the wizard’s decisions agreed with university IRB determinations.

The results of the pilot study (Schneider & McCutcheon, 2018) were informative and quite promising. For 81% of the 264 studies that were fully processed through the wizard, the wizard’s determination agreed with that of the IRB. A case-by-case review of these studies suggested that the agreement might have been even higher if it were not for institution-based criteria requiring stricter review than the regulations demand (at least 10 cases) or misunderstandings by investigators regarding the OHRP exempt categories (up to 23 cases). With these adjustments, agreement might have been as high as 94%.

The wizard was built with a mechanism to identify cases that might not be amenable to automated review. Roughly 30% of the case studies were flagged by the wizard as involving potentially vulnerable populations (e.g., children or prisoners), possible conflicting researcher-participant relationships (e.g., instructor-student or provider-client), or other concerns that might require a more detailed review. Thus, the wizard can also serve as an effective screening tool to quickly and easily identify cases that may benefit from additional review. Another 120 cases were flagged because of anomalies suggesting that investigators were having difficulties in interpreting the OHRP definitions of “research” and “human subjects.” This suggests a pervasive need to clarify these nuanced definitions and develop more user-friendly explanations.
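The screening mechanism described above can be thought of as a set of simple rules that route a case to human review whenever a risk indicator is present. The sketch below is purely illustrative: the field names, categories, and relationship pairs are invented for this example and are not the wizard's actual schema.

```python
# Hypothetical sketch of the wizard's screening step: flag cases that
# may need detailed human review. All names here are illustrative.

VULNERABLE_POPULATIONS = {"children", "prisoners"}
CONFLICTING_RELATIONSHIPS = {("instructor", "student"), ("provider", "client")}

def screening_flags(study):
    """Return a list of reasons this study may need additional IRB review."""
    flags = []
    # Flag any overlap with potentially vulnerable populations.
    if VULNERABLE_POPULATIONS & set(study.get("populations", [])):
        flags.append("potentially vulnerable population")
    # Flag researcher-participant relationships that could be coercive.
    pair = (study.get("researcher_role"), study.get("participant_role"))
    if pair in CONFLICTING_RELATIONSHIPS:
        flags.append("conflicting researcher-participant relationship")
    return flags
```

A case with an empty flag list proceeds through automated review; any nonempty list routes the case to a human reviewer.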

The review of pilot results also suggested a few areas for improvement. Most importantly, the wizard did not adequately screen for sensitive information (e.g., potential reports of criminal activity or substance abuse) when potentially identifiable data were being collected. The wizard also seemed best suited, and most commonly needed, for research using surveys, interviews, or questionnaires; research in classroom settings; and research using identified secondary data.

We have revised the wizard on the basis of these findings. First, we have moved to the beginning of the process any questions that would exclude a study from wizard review. In this way, investigators whose projects are not amenable to review by the current automated tool will spend very little time entering project information into the wizard. In a live demonstration at the May Federal Demonstration Partnership meeting, it took less than 4 minutes for the investigator to be notified that the project was not eligible for wizard review.
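The reordering described above is a standard early-exit pattern: ask the disqualifying questions first so that ineligible projects exit in minutes, before any detailed project information is entered. A minimal sketch, with invented question names, might look like this:

```python
# Illustrative early-exit ordering: exclusionary questions run first,
# and the session ends as soon as any exclusion applies. The question
# names and return strings are invented for this example.

def run_wizard(answers, exclusion_questions):
    """Ask exclusionary questions first; exit as soon as one applies."""
    for question in exclusion_questions:
        if answers.get(question, False):   # True = the exclusion applies
            return f"not eligible for wizard review ({question})"
    # Only projects that pass every exclusion reach the full
    # exempt-category determination (not sketched here).
    return "proceed to exempt-category questions"
```

Because the loop returns on the first exclusion it encounters, an ineligible investigator answers at most a handful of questions rather than completing the full form.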

On the basis of the original pilot, we have also expanded the exclusionary criteria to increase the efficiency and effectiveness of the wizard. The current version is limited to exemption categories for research using surveys, questionnaires, interviews, classroom-based research, and secondary-use data. This simplification also reduces the potential for investigator errors in recognizing the appropriate exempt category. The remaining exempt categories are being ruled out using exclusionary criteria. Over time, additional modifications may be created to accommodate some or all of the remaining exemption categories.

Second, we added exclusionary questions to determine whether projects will contain potentially sensitive information when identifiers are collected. Sensitive topics include drug and alcohol use, explicit sexual behavior, mental illness, criminal behavior, immigration status, and personal financial information. These cases are being referred for additional IRB review.
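The referral rule described here is a conjunction: a case is referred only when a sensitive topic co-occurs with the collection of identifiers. A small sketch, assuming an invented topic list and function name, makes the logic explicit:

```python
# Hypothetical referral rule: sensitive topics alone are not enough;
# the combination with identifiable data triggers human IRB review.
# Topic strings and the function name are illustrative.

SENSITIVE_TOPICS = {
    "drug or alcohol use", "explicit sexual behavior", "mental illness",
    "criminal behavior", "immigration status", "personal financial information",
}

def needs_irb_referral(topics, collects_identifiers):
    """Refer for IRB review when sensitive topics co-occur with identifiers."""
    return collects_identifiers and bool(SENSITIVE_TOPICS & set(topics))
```

Under this rule, an anonymous survey on criminal behavior would not be referred, but the same survey collecting names or other identifiers would be.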

Finally, we also added clarification of the definitions of “research” and “human subjects” on the basis of OHRP guidance to help investigators ensure that they provide accurate responses. This included the addition of contrasting explanations of terms associated with OHRP’s definitions of research and human subjects, such as observation versus intervention or interaction, identifiable versus private information, and information that is or is not about the person. We also added itemization of what is excluded from the definition of research, such as single case reports of individualized observations; data collection not aimed at generalizing knowledge; and use of public databases, death records, or unidentifiable preexisting data.

A demonstration comparing the current version of the wizard with human review is in progress. This conservative approach, excluding projects that might be better served by a human review, combined with the clarifying questions to reduce confusion about terminology, provides a solid platform for self-review. Moreover, the wizard’s tracking capabilities allow IRBs to monitor self-reviewed studies, providing a mechanism for oversight without requiring valuable reviewer time for minimal-risk reviews. We are optimistic that this will save large amounts of time for both investigators and IRB staff and board members. Thus, a wizard could provide a huge reduction in administrative burden while providing as much or more documentation of the protection of human subjects in minimal-risk research.


Fost, N., & Levine, R. J. (2007). The dysregulation of human subjects research. Journal of the American Medical Association, 298, 2196–2198.

Grady, C. D. (2010). Do IRBs protect human research participants? Journal of the American Medical Association, 304, 1122–1123.

Gunsalus, C. K., Bruner, E. M., Burbules, N. C., Dash, L., Finkin, M., Goldberg, J. P., … Aronson, D. (2007). The Illinois White Paper—Improving the system for protecting human subjects: Counteracting IRB “mission creep.” Qualitative Inquiry, 13, 617–649.

Gunsalus, C. K., Bruner, E. M., Burbules, N. C., Dash, L., Finkin, M., Goldberg, J. P., … Pratt, M. G. (2006). Mission creep in the IRB world. Science, 312, 1441.

Joffe, S. (2012). Revolution or reform in human subjects research oversight. Journal of Law, Medicine, & Ethics, 40, 922–929.

Schneider, S. L., & McCutcheon, J. A. (2018). Proof of concept: Use of a wizard for self-determination of IRB exempt status. Washington, DC: Federal Demonstration Partnership. Retrieved from

Notice of proposed rulemaking, 80 Fed. Reg. 53933 (September 8, 2015). Retrieved from
