One undesirable side effect of the mental hygiene movement and the overall tradition of dynamic psychiatry has been the development among educated persons of what I call the “spun-glass theory of the mind.” This is the doctrine that the human organism, adult or child, is constituted of such frail material, is of such exquisite psychological delicacy, that rather minor, garden-variety frustrations, deprivations, criticisms, rejections, or failure experiences are likely to result in major traumas.
As young psychologists in our first tenure-track positions, we were eager to begin our research careers. While we both imagined that there would be hurdles along the way, neither of us imagined that getting our research approved by our local institutional review board would be one of the biggest and most persistent. What we quickly learned, however, was that our IRB was gravely concerned that our undergraduates would experience harm or extreme distress as a result of participating in our research. Our work did focus on sexual behavior, including traumatic events such as date rape, yet our questionnaires were commonly used by other researchers, and many were “gold standard” measures in our field, validated by numerous studies over many years. When our IRB balked at our use of the questionnaires with our undergraduates, apparently buying into Meehl’s “spun-glass theory” of their emotional fragility, we directed the board to data from previous work showing that participants were unharmed by participating in such research, and we offered many reasons why the benefit/cost ratio of our work was high. Nonetheless, the IRB remained steadfast in its position. Its basic assumption was that our trauma and sex questions were obviously “above minimal risk,” especially compared to standard measures such as cognitive tasks or IQ tests, and therefore required special protection of participants and a full rather than expedited review. These assumptions, although unsupported by data, delayed or derailed many research projects at our university and dissuaded researchers and students from asking questions about “sensitive” topics. As our midprobationary and tenure reviews loomed, we worried that IRB obstructionism was becoming the single biggest obstacle to our academic careers.
As we looked for solutions to our dilemma, we found that other psychologists were having similar problems at their universities. Indeed, we found a growing literature on the largely benign effects of “sensitive”-topics research on participants across college, clinical, and community settings. This burgeoning literature signaled to us a growing concern over the increasingly procrustean regulatory practices of IRBs, a process referred to by sociologist Kevin Haggerty as “ethics creep.” Armed with these new references, we gave our IRB stronger data showing that trauma and sex surveys are fairly low risk and have high potential for both scientific and social benefits. In response, our IRB questioned the generalizability of these studies, suggesting that our New Mexico students were unusually vulnerable due to many coming from poor, rural communities, being Hispanic or Native American, or learning English as a second language. At this point, it was apparent to us that the only way to appease our IRB would be to conduct our own research, on our own students, about the effects of our own “sensitive” questions about trauma and sex. So that’s just what we did.
A key assumption of our IRB was that questionnaires asking about “sensitive” topics, such as trauma and sex, are above “minimal risk,” defined federally as the level of risk “encountered in daily life or during routine physical or psychological examinations,” such as standard cognitive tasks or IQ tests. We tested this assumption by assigning a large sample of undergraduates (over 500) to either a “trauma/sex” condition, which included an extensive battery of surveys on topics such as rape, childhood sexual abuse, casual sex, menstruation, and masturbation (assumed to be well above minimal risk by most IRBs), or a “cognitive” condition, which included many IQ-type tests of vocabulary, sequence completion, and abstract reasoning (defined to be minimal risk by federal authorities overseeing IRBs). We recruited University of New Mexico undergraduates for three reasons: to overcome our IRB’s generalizability worry that our students were unusually vulnerable, to complement previous research focused on participant reactions to similar surveys among clinical or community samples, and to model the undergraduate subject pools common in today’s psychology research. We assessed key areas of IRB concern: specifically, participants’ positive and negative emotional reactions, their perceived benefits and mental costs of participating, and their positive and negative moods before and after the study. We also asked participants to compare their distress from the study to the distress that would result from specific, normal life stressors (e.g., getting a cavity filled, losing money, failing an exam, getting fired from a job). Finally, we assessed whether negative reactions to our trauma/sex condition were stronger among those who presumably would be the most vulnerable to becoming distressed (i.e., women who had been sexually victimized).
Our findings revealed that IRB concerns about trauma and sex surveys, while well intentioned, are misguided. Although we asked hundreds of “sensitive,” personal, and even arguably outrageous questions in the trauma/sex condition, the vast majority of our participants — including women with a history of sexual victimization — reported not being distressed. Moreover, participants’ post-study negative affect was lower than their prestudy baseline. Notably, our trauma/sex participants, relative to the cognitive participants, rated the study as producing more positive affect, offering greater benefits (e.g., engendering more insight), and imposing fewer mental costs (e.g., being less mentally exhausting).
While our trauma/sex participants did report slightly higher negative emotions after the survey than did cognitive participants, trauma/sex participants’ mean scores on our negative-emotion scale showed that they were, in fact, not distressed. Importantly, less than 4% of our participants reported that they felt negative emotions above our scale’s midpoint, and no participants indicated that they experienced a high level of distress. Finally, and most relevant to the “minimal risk” issue, participants in both conditions rated each of the 15 normal life stressors as more distressing, on average, than participating in the study.
For example, the great majority of students reported that our trauma/sex condition was less distressing than losing a $20 bill or waiting in line at a bank for 20 minutes. These results suggested that most trauma and sex surveys should be considered minimal risk by IRBs. They were no more distressing than the “minimal risk” cognitive tests, were less distressing than many normal life stressors, and were experienced as reasonably benign, even for participants assumed to be particularly vulnerable, such as victims of sexual violence.
Future Directions and Conclusions
We realize that some participants may be upset by some questions about some traumatic and sexual experiences. But the assumption that most participants will be upset by most of these questions seems wrong, as does the assumption that studies including such questions are above minimal risk and require full IRB review. Indeed, our findings, in conjunction with the results of similar studies that preceded ours, suggest that many IRB committees have systematically fallen prey to the “spun-glass theory of the mind” described by Meehl and have underestimated the resilience of adult research participants, including college students.
Certainly, more work is needed in this area. We collected a large amount of individual-differences data in our study (e.g., measures of psychopathology, personality traits, religiosity) and are currently identifying which, if any, of these variables predict greater distress among research participants. Such studies can help guide more effective inclusion and exclusion criteria for recruiting participants for “sensitive”-topic studies. We hope that these kinds of participant-reaction studies can help researchers push back against the IRB “mission creep” happening in many of our research institutions. We suggest that IRB decisions about research protocols should be based not on gut reactions and moralistic assumptions about what today’s students might find offensive, but on the best current science about how participants actually respond to studies. We also encourage IRBs to take seriously the costs to science, society, and victims of failing to ask about traumatic experiences. Becker-Blease and Freyd (2006) made this argument:
In particular, psychologists have largely ignored the costs of not asking about abuse. As a result, there is the possibility that the social forces that keep so many people silent about abuse play out in the institution, research labs, and IRBs. To the extent that silence is part of the problem — silence impedes scientific discovery, helps abusers, and hurts victims — then this is no trivial matter.
It also may be important, when assessing risk, to remember the experience of today’s US college student. The average undergraduate is about age 20 and was born in the early 1990s. This generation grew up with South Park and Grand Theft Auto. They watched Oprah and Dr. Phil routinely discuss child sexual abuse, rape, incest, drug addiction, sexuality, and mental illness. Many students share intimate details of their lives on Facebook and YouTube, and they are exposed to TV shows depicting graphic levels of sex, violence, and trauma (e.g., Game of Thrones, Sons of Anarchy, Breaking Bad). Thus, previous studies, our study, and this cultural-historical context suggest that current undergraduates will not be harmed by answering questions that older members of IRB committees assume to be “sensitive” and “above minimal risk.”
Becker-Blease, K. A., & Freyd, J. J. (2006). Research participants telling the truth about their lives: The ethics of asking and not asking about abuse. American Psychologist, 61, 218–226.
Haggerty, K. D. (2004). Ethics creep: Governing social science research in the name of ethics. Qualitative Sociology, 27, 391–414.
Meehl, P. E. (1973). Psychodiagnosis: Selected papers (pp. 225–302). Minneapolis: University of Minnesota Press.
Yeater, E. A., Miller, G. F., Rinehart, J. K., & Nason, E. E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards. Psychological Science, 23, 780–787.