We are watching Big Brother watching us. Whatever one thinks of Edward Snowden, hero or traitor or something in between, his revelations about sweeping NSA surveillance have gotten America’s attention. His whistle-blowing has raised important questions about the balance of liberty and safety, and will heighten suspicion and scrutiny of the nation’s intelligence agencies for some time to come.
We hire and train intelligence agents to weigh risks and make judgments, and most of us want to believe that these assessments are sound. But how rational are the individual men and women who are making the life-and-death decisions that influence national security?
A new study raises some serious questions about our usual view of rationality, and how it applies to intelligence agents’ judgments about risk. Cornell University psychological scientist Valerie Reyna, one of the nation’s experts on risk assessment and decision making, persuaded a federal intelligence agency to let her study agents’ thinking. She found a pattern of irrational judgments about risk. In fact, college students were better than intelligence agents at weighing danger in a technical, logical way.
Reyna actually predicted that she would see these results. She is the originator of what’s called “fuzzy trace theory,” which posits that decision makers simultaneously confront problems in two very different ways. We deliberately and painstakingly calculate risk based on the quantitative information available—like solving a math problem—but we also process, very rapidly, the simple but meaningful “gist” of the situation. Since calculation is so taxing, in time and cognitive energy, gist thinking is often the best option, especially for decisions under pressure.
Gist thinking is paradoxical. Study after study has shown that children tend to employ slow and deliberate calculation, and that as we get older, we rely more and more on rapid, impressionistic gist thinking. Similarly, the more experienced experts in fields like finance and emergency medicine become, the more they rely on intuitive gist thinking. This developmental “reversal” is well documented but counterintuitive, since we expect maturity and experience to improve all cognitive performance.
Based on this body of evidence, Reyna predicted such a cognitive reversal in intelligence analysts as well. She recruited volunteers from an unnamed federal intelligence agency, mostly special agents with an average of seven years with the agency. For comparison, she also recruited a group of college students and another group of post-college adults. She tested all the volunteers on a series of what are called framing problems, which assess the tendency to make risky choices. Here’s an example:
A dread disease is threatening a town of 600, and you have the authority to make choices. Do you: Save 200 people for sure, or choose the option with 1/3 probability that 600 will be saved and a 2/3 probability that none will be saved? Or, alternatively, do you pick the option where 400 will surely die, or instead a 2/3 probability that all 600 will die and a 1/3 probability that nobody dies?
These two choice scenarios are identical, except that one is framed in terms of gain, the other in terms of loss. A fundamental tenet of decision making theory is that rational people are consistent in their choices, regardless of whether the odds are framed as gain or loss. But many people switch in this scenario, avoiding risk when the options are framed as gains and seeking risk when they are framed as losses. Fuzzy trace theory says that this is the result of focusing on the gist of each frame: in the gain frame, saving some people for sure beats gambling on saving none, while in the loss frame, the chance that nobody dies beats the certainty that some will. Even explained this way, however, it’s nevertheless a cognitively biased form of decision making, and not what one would expect from a professional intelligence agent.
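The equivalence of the two frames is easy to verify with a little arithmetic. The sketch below (illustrative code, not from the study) computes the expected number of lives saved for each of the four options; all four work out to the same 200 lives.

```python
# Verify that the gain and loss framings of the "dread disease" problem
# are numerically identical in expected lives saved.

POPULATION = 600

def expected_saved(outcomes):
    """Expected lives saved, given (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

# Gain frame: "save 200 for sure" vs. "1/3 chance all 600 are saved."
gain_sure   = expected_saved([(1.0, 200)])
gain_gamble = expected_saved([(1/3, 600), (2/3, 0)])

# Loss frame: "400 die for sure" vs. "2/3 chance all 600 die,"
# restated here as lives saved (population minus lives lost).
loss_sure   = expected_saved([(1.0, POPULATION - 400)])
loss_gamble = expected_saved([(2/3, POPULATION - 600), (1/3, POPULATION)])

print(gain_sure, gain_gamble, loss_sure, loss_gamble)  # 200.0 200.0 200.0 200.0
```

A fully consistent decision maker would therefore treat all four options as interchangeable; the framing effect is precisely a preference shift between options with identical expected outcomes.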
But that’s precisely what Reyna found in her experiment, described in a forthcoming article in the journal Psychological Science. Based on 30 gain-loss framing decisions, not only did the federal agents exhibit larger framing biases than college students, they were also more confident in their judgments. The post-college adults occupied an interesting middle ground between the students and agents: They were as flawed in their choices as the students—sometimes more so—but less cognitively biased than the intelligence agents.
These results show that experienced intelligence agents think irrationally about risk and loss, even when human lives are at stake. If it’s any comfort, Reyna concludes that this distorted judgment is the ironic consequence of a cognitively advanced style of thinking, an intuitive style perhaps more suitable for finding meaning in the murky world of spies and counterspies.
Selections from Wray Herbert’s blogs—“We’re Only Human” and “Full Frontal Psychology”—appear regularly in The Huffington Post and elsewhere.