
Letters

IRBs Must Understand Psychological Science
In a recent letter to the Observer (“IRBs: Ethics, Yes – Epistemology, No,” Observer, February 2002), John Furedy states that Institutional Review Boards (IRBs) have the right to consider the ethical issues of proposed research, “how subjects are treated,” but they do not have the right to consider epistemological issues, “how well designed the study is.” These two issues are not separable.

One of the responsibilities assigned to IRBs, as outlined in the Declaration of Helsinki, is to ensure that the potential benefits from proposed research outweigh the costs associated with research participation. Reducing the participation risks helps to achieve this goal, but IRBs are obligated to look at the potential benefits of research as well. Studies that are poorly designed or examine trivial issues offer few benefits, and cannot be approved unless there are no associated risks. Thus, IRBs cannot fulfill their mandated responsibilities unless they consider epistemology.

Furedy suggests that epistemological judgments are best deferred “to grant proposal-evaluating committees and to editors of … high-quality journals.” This suggestion is untenable for several reasons. Much research in psychology – perhaps even the vast majority – is not funded. Journal editors and referees cannot act as IRB reviewers, because IRB scrutiny must be applied before a study is conducted, not afterwards. Finally, at many institutions, IRB approval extends liability coverage to the researcher(s) conducting the approved study. Few institutions would be willing to permit outside agencies to make judgments that ultimately affect their own liability.

Furedy is correct in noting that IRBs were originally conceptualized for medical studies; documents such as the Declaration of Helsinki are clearly written for physicians. However, the solution to this problem is not to remove psychological studies from epistemological oversight by IRBs. A more appropriate response is to ensure that IRBs have an understanding of psychological science and how it is conducted. My university, for example, has several different IRBs, each charged with reviewing different types of research proposals. The IRB that reviews psychological research is composed almost entirely of faculty with PhDs in psychology.

– Harold Stanislaw
California State University

A Contradiction in Civility or a Missed Point?
In the letter William Vaughan, Jr. sent to the APS Observer (“A Contradiction in Civility?”) in response to Robert Sternberg’s recent article (“On Civility in Reviewing”), Vaughan appears to have missed the major point: anonymous savagery in reviews of unpublished works. Just before Sternberg’s article came out, I felt compelled to respond to a reviewer who had crossed the line of civility, writing that ‘anonymity is not a license for rudeness.’ Like most of the readers who responded to the article, I think Sternberg’s statements were absolutely correct.

On the other hand, when you criticize a published work in a published book review, your comments are not anonymous – they are part of the public record, with your name right on them. This is a completely different situation, despite Vaughan’s assertion to the contrary. Whether you agree or disagree with it, Sternberg was not hiding behind the cloak of anonymity when he published his review. I see no contradiction.

– James R. Lewis
IBM

Limitations of IRB Expertise
Hansen Responds to Furedy’s Letter

I would like to make a few comments in response to John Furedy’s recent letter (“IRBs: Ethics, Yes – Epistemology, No,” Observer, February 2002). In it, he disagrees with my position that an IRB needs to include evaluation of the experimental design in its review of ethical issues (“Regulatory Changes Affecting IRBs and Researchers,” Observer, September 2001). Instead, Dr. Furedy argues that IRBs should concentrate on evaluating the ethical issues of treatments and leave evaluation of research design to journal editors and review committees from granting agencies. He cites his own MA and PhD research as examples of studies that would have been too cutting-edge (or, perhaps, grounded in fundamentals known to too few others) for an IRB to competently evaluate the research design.

Certainly, he makes an interesting point. I am familiar with Dr. Furedy’s research program from the time he was a guest lecturer at an NSF-sponsored summer program in psychophysiology that I attended in 1987, and I agree completely that there might be instances (like the cases he mentioned) in which an IRB, no matter how diverse its membership, would lack the expertise to review the quality of an experimental design. This scenario is certainly possible (and perhaps more likely in institutions whose researchers are as innovative as John Furedy), but, nationwide, lack of IRB expertise to evaluate psychological research designs is probably not a major problem. It is hard to imagine an IRB that could function without one or more members who are generally skilled in evaluating methodology. Even so, when and if RCR (Responsible Conduct of Research) guidelines are implemented nationally, training of IRB members (as well as investigators) in research design will become a requirement for reviewing (and conducting) research that comes within IRB purview.

An analysis of the risks and benefits of a proposed research study is one of the primary duties of an IRB, and an analysis of the benefits of proposed research includes an evaluation of the research design (National Bioethics Advisory Commission, 2001; OPRR, 1993). The more potential risk there is to participants, the more the research design comes under scrutiny by an IRB. For the minimal-risk studies that are typical of much (but certainly not all) psychological research, an IRB is really looking to see whether the study is designed well enough that its findings will make a contribution to scientific understanding. This is an important ethical premise for involving human beings (as well as animals) in research. If the research design contains obvious flaws that would make the results suspect or unreliable, it is certainly not ethical to subject participants to risk, and it is probably not ethical even to inconvenience them by asking them to participate.

The level of potential risk to participants will always be an important factor in determining how closely a research design is examined by an IRB. Having said that, let me attempt to clarify how I believe most IRBs would (and should) handle the general problem of reviewing proposals whose science is beyond the expertise of their members (Levine, 1986). Clearly, understanding the limits of its own expertise is critical to an IRB’s performance of its role. On the one hand, when a proposed project with esoteric methods is being peer-reviewed by a granting agency (and the research is not expected to take place without agency funding), an IRB would feel less responsibility to arbitrate the science beyond looking for obvious methodological or procedural flaws that might put subjects at risk. In these cases, the IRB would probably be more willing to accept the evaluation of expert review panels, provided the reviews are made available to the IRB. As Dr. Furedy pointed out from his own experiences, however, this strategy is not foolproof. On the other hand, for proposed projects that do not undergo this type of rigorous external review, the IRB has the responsibility to perform this evaluation itself, and it may have to bring in outside consultants to do so. Furthermore, the IRB may need to ask investigators for additional information and references to clarify the research problem, so that it and its consultants can make a reasonable judgment about the quality of the methodology. As noted in Protecting Human Research Subjects: Institutional Review Board Guidebook (OPRR, 1993, 4-1), “they [IRBs] should understand the basic features of experimental design, and they should not hesitate to consult experts when aspects of research design seem to pose a significant problem.” Chapter 4 of the IRB guidebook contains a review of basic experimental and non-experimental designs.

– Christine Hansen
Oakland University

REFERENCES
Levine, R. J. (1986). Ethics and regulation of clinical research (2nd ed.). Baltimore: Urban and Schwarzenberg.
National Bioethics Advisory Commission (2001, August). Ethical and policy issues in research involving human participants (Vol. 1). Bethesda, MD.
OPRR/NIH (1993). Protecting human research subjects: Institutional review board guidebook. Washington, DC: U.S. Government Printing Office.
