Let me say at the outset that institutional review boards (IRBs) serve an important function and generally do a good job of it. To be sure, everyone engaged in research has their IRB stories: being told that a recruitment poster must put the payment amount in smaller font, or remove it altogether, so that it won’t be “coercive”; being asked to tape-record the verbal assent of illiterate Tzeltal Maya adults; having the very same consent form that has sailed through review many times suddenly rejected; and so on. But these are minor issues that can be remedied with a little patience and understanding.
The situation could be far more challenging. What if IRBs were to decide that undergraduates could no longer be recruited for research studies because social pressure made it very awkward for students to decline to participate? (From my earlier column on field research, you might suspect I would think this is a good thing, but you would be very much mistaken.) What if they determined that developmental research with children must end because parental consent impinges on the child’s autonomy, while at the same time deciding that children are not mature enough to give informed consent themselves? I suspect that either decision would lead to an uproar in psychology.
The point of these dire examples is that IRBs are constantly making value judgments. If we agree with these judgments, there is no problem, and we are scarcely aware of them. But the more our field strives for real-world relevance, the more challenging these value judgments may become. The remainder of this column contains a provocative commentary by my collaborator and close friend, Scott Atran, whose research often bears on policy questions. You likely won’t agree with everything he says, but I’m betting you’ll concede that he’s raising important issues. Scott is Research Director in Anthropology at the National Center for Scientific Research in Paris, Visiting Professor of Psychology and Public Policy at the University of Michigan, and Presidential Scholar in Sociology at the John Jay College of Criminal Justice in New York City. Take it away, Scott.
Science and (IRB) Policy: The Case of Terrorism Research. Many bemoan the academic community’s lack of input and influence in shaping society’s understanding of, and actions regarding, one of the most pressing issues of our time: terrorism. In an article in The National Interest, “Thinking Outside the Tank,” senior Rand analyst Steve Simon and counter-terrorism expert Jonathan Stevenson (2004) surmised that “scholars are now farther than ever from furnishing creative analytical support to policymakers,” and they recommended that academics be left to their irrelevance. In some instances, the irrelevance may be intentional: Social scientists in academia may fear that their intellectual interests, integrity, and independence would be corrupted by close ties to policy and policymakers, and so prefer to stay above the fray.
But one increasingly strong impediment to academic involvement in understanding where terrorism comes from and what to do about it may not be from any lack of will or interest, but from a bureaucratic institution that rigidly regulates university research: the IRB. In the social and behavioral sciences, IRB standards are geared to protecting American university students, or children usually living within a few miles of a university — that is, populations vastly overrepresented in research but not in the real world. One consequence, I believe, is a dampening effect on policy-relevant research in the social sciences.
There are very few scholars who directly talk to terrorists or those who inspire and care for them, although there’s no end to elaborate theories and voluminous books on the subject. But I’m an anthropologist who believes in the principle spelled out by Isaac Newton in a letter to Nathaniel Hawes: “If, instead of sending the observations of able seamen to able mathematicians on land, the land would send able mathematicians to sea, it would signify much more to the improvement of navigation and the safety of men’s lives and estates on that element.”
On the basis of this principle, joined to solid research proposals, the National Science Foundation and Department of Defense independently granted considerable U.S. taxpayer money to our cross-disciplinary, multi-university, international team to interview violent extremists in different settings and run experiments with them on a range of theoretical issues, including how group dynamics can trump individual personality in motivating suicide bombers and how sacred values can limit rational choice with cultural taboos that block tradeoffs and attempts at compromise and negotiation. We ourselves have sacred values: We don’t sell our children, and we don’t put a price tag on our freedoms. But what is sacred and nonnegotiable for violent extremists, and what policy implications follow?
In my experience, the IRB has been a significant obstacle to addressing these questions. In what follows, I’ll describe my experiences with the University of Michigan (UM) IRB, not to unduly single it out but rather to illustrate the IRB tendency toward conservatism. First of all, the IRB decided that you can’t interview failed suicide bombers or their sponsors in prison, because prisoners cannot, in principle, freely give informed consent. This decision holds even for lawfully convicted mass killers who publicly tout their crimes, such as the 2002 Bali bombers. The IRB held to this conclusion even though the prisoners, and the organizations that originally sponsored their actions, willingly gave consent because they were eager to get their ideas out.
So what about interviewing freely operating jihadis and would-be suicide bombers? Initially, the IRB decided that federal funds could not be used, despite well-accepted guarantees of complete anonymity, because subjects might inadvertently reveal operational plans that could put them in jeopardy. Although any statement of consent would expressly instruct subjects not to talk about operations, the IRB’s argument was that government intelligence services, or others, might find out about the interviews and use them to identify and act against subjects. I do believe it’s reasonable for the IRB to bar a researcher from asking about current or future terrorist operations, because such information could put the interviewer in an impossible ethical bind over whether to inform authorities so that innocent lives might be spared. But it seems unreasonable to prevent research in any situation where the avoidance of ethical dilemmas is not foolproof.
As it turns out, in August 2005, I did inadvertently find out about the formation of a rogue Jemaah Islamiyah suicide squad, thoifah moqatilah (fighting group), and vague plans to attack Western targets, possibly tourist spots in Bali again. And I did report this to the U.S. Senate’s Committee on Foreign Relations at a briefing on my research in September 2005, shortly before the October Bali bombing. If the suicide bombers had been stopped because of the information I inadvertently obtained, then by UM’s moral logic, I would have been ethically remiss by disrespectfully violating the bombers’ wishes in helping to save their lives and the lives of their intended victims.
I also apparently failed to make a sufficient case for the “costs and benefits” of the research to the university and society, though I pointed out that my interviews with radical Islamist leaders resulted in fruitful contacts during a crucial Middle East ceasefire negotiation and that any lives saved should count as a net benefit of the research. More generally, helping to understand why someone would want to blow up Manhattan, London, Tel Aviv, or Jakarta could help to prevent these cities from getting blown up, and that, too, would be a pretty good benefit. But that argument was rejected.
After many months, the IRB decided to release emergency National Science Foundation funds awarded for “high risk research” to do pilot interviews with freely operating jihadis. Then a new UM IRB (IRBs are not required to have institutional memory) withdrew permission to carry out research on matters that had previously been approved and against which no new objections had been raised. Although our initial results were tentative, as in almost any research project, our preliminary findings were published in reputable scientific outlets, including Science and Nature (Atran & Axelrod, 2010; Atran & Ginges, 2009).
Worse still, the IRB decided that any proposal for the analysis of secondary data must be treated as a new proposal, because the implications of previously collected data or findings related to human subjects may be different from those originally foreseen. This is a chilling constraint that has the potential to stop a research program dead in its tracks, in the face of any politically correct wind, no matter how advanced that research or how distant from any people living or dead. And so I gave up on the UM IRB.
I think the major problem lies with the IRB as an institution. Perhaps one remedy is that certain kinds of IRB approvals should be handled at some interuniversity or even national level that guarantees greater institutional memory and uniformity of procedure (UM told me that Harvard’s approval of our protocols was irrelevant). Of course, a broader board would have to be protected from the political riptides of the electoral cycle. The advantage of a national board is that its sponsors could be government agencies whose interests focus more on their mission (such as national security) than on the protection of undergraduate students. A national board (or boards) could then use guidelines that would differ from those designed to protect the interests of typical subjects. There are various ways to define the domain of a national board, such as “prisoners and those hiding from the law.” Alternatively, it could be defined as research involving subjects relevant to national security (under a very narrow formulation). One would have to decide whether, for example, studies of urban gangs in the United States should or should not be covered by a national board.
In testimony before the House Science Committee, M. R. C. Greenwood (2002), Chancellor of the University of California, Santa Cruz, argued that “balancing the perceived risks of open access with the risks to the health and vitality of the research community is exactly the kind of issue that calls for a new partnership between the research community and the government.” That partnership is lacking when it comes to dealing with terrorism, in part because universities and the government have chained themselves to an institution that never imagined dealing with suicide bombers and which lacks the flexibility and imagination to face the problem. Yet suicide bombers are here; they’ve burst upon the world and, along with their sponsors and supporters, are changing how societies seek security and interact.
So IRBs, let the scholars go out to sea — not just to ply the current waves of violent extremism and political upheaval in unfamiliar places, but wherever research in the field could plausibly challenge standard beliefs. Without such challenges, research findings may remain superficial or plain wrong, no matter how much they are replicated close to home.
References and Further Reading:
Atran, S., & Axelrod, R. (2010, June 30). Why we talk to terrorists. The New York Times.
Atran, S., & Ginges, J. (2009, January 25). How words could end a war. The New York Times.
Greenwood, M. R. C. (2002, October 10). Testimony before the House Science Committee: “Conducting
Simon, S., & Stevenson, J. (2004, December). Thinking outside the tank. The National Interest.