Cover Story

The Mechanics of Choice

Hardly a minute goes by in our lives when we don’t make them. Decisions can be as small as our choices of words or what to have for lunch, and they can be as big as how to plan for retirement or what treatment to choose for a disease. They can balance certainties against risks. They can balance short-term gratification against long-term benefits. They can clearly be right or wrong — but often enough, they involve likelihoods and possibilities that are uncertain, even in the light of all available information.

Psychological scientists have been interested in how people make decisions for several decades, but philosophers and economists have been studying decision making for centuries. The most famous scholarly consideration of making a decision in cases when all the facts aren’t on hand is that of Blaise Pascal. In 1670, in his Pensées, the French philosopher articulated what was, in his time, a pretty profound dilemma for rational people: to believe or not believe in the existence of God. Pascal reasoned it out this way: If God exists, belief in Him will mean eternal salvation. If He doesn’t exist, Pascal said, one loses nothing by believing. So the choice was clear: Believing is the safest bet. And if you don’t believe, you should pretend to believe, because in so doing you might come around to genuine belief in time.

Pascal’s famous wager is the first formulation of what in the study of decisions came to be known as the theory of expected value: When faced with a choice between uncertain alternatives, you should determine the positive or negative value of every possible outcome, along with each outcome’s probability, multiply each value by its probability, add up the results for each alternative, and choose the alternative that produces the highest number.
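
To make the calculation concrete, here is a minimal sketch (in Python, with invented stakes and probabilities rather than anything from the article) of how expected value ranks two uncertain options:

```python
# Expected value: sum of (outcome value x its probability) for each option.
# The gambles below are purely illustrative.

def expected_value(outcomes):
    """outcomes is a list of (value, probability) pairs."""
    return sum(value * prob for value, prob in outcomes)

safe_bet  = [(50, 1.0)]                # $50 for sure
risky_bet = [(200, 0.3), (0, 0.7)]     # 30% chance of $200, otherwise nothing

print(expected_value(safe_bet))    # 50.0
print(expected_value(risky_bet))   # 60.0 -- the higher number, so the "rational" pick
```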

It sounds simple, but choices in the real world are seldom that cut-and-dried. Expected value was given more nuance by Daniel Bernoulli in 1738 with his theory of expected utility. Along with the values and probabilities of different uncertain outcomes, the Dutch-Swiss mathematician noted, there are two individual factors that would also be taken into account by any rational decision maker — his or her comfort with or aversion to risk, and the utility of a given payoff depending on his or her preferences or needs. Value, in other words, isn’t an absolute. For example, a small monetary gain would be of greater utility to a poor person than to a rich person, and thus their decisions in a gamble could be entirely different but equally rational.
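
A small illustration of Bernoulli’s point that value isn’t absolute: assuming a logarithmic utility of total wealth (a standard textbook stand-in for diminishing marginal utility, not a detail given in the article), the same $100 gain is worth far more, subjectively, to a poor decision maker than to a rich one.

```python
import math

def utility(wealth):
    # Logarithmic utility: each additional dollar matters less as wealth grows.
    return math.log(wealth)

gain = 100
for current_wealth in (1_000, 1_000_000):
    added = utility(current_wealth + gain) - utility(current_wealth)
    print(current_wealth, round(added, 6))

# 1000     0.09531   -> the gain is felt strongly by the poorer person
# 1000000  0.0001    -> and barely registers for the wealthy one
```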

From Economics to Psychological Science

Because predicting social behavior depends heavily on how people make decisions about resources and wealth, the science of decision making was historically the province of economists. And the basic assumption of economists was always that, when it comes to money, people are essentially rational. It was largely inconceivable that people would make decisions against their own interests. Although successive refinements of expected-utility theory made room for individual differences in how probabilities were estimated, the seemingly irrational economic behavior of groups and individuals could always be forced to fit some rigid, rational calculation.

The problem is — and everything from fluctuations in the stock market to the choice between saving for retirement and buying a lottery ticket or a shirt off the sale rack shows it — people just aren’t rational. They systematically make choices that go against what an economist would predict or advocate.

Daniel Kahneman

Enter a pair of psychological scientists — Daniel Kahneman (currently a professor emeritus at Princeton) and Amos Tversky — who in the 1970s turned the economists’ rational theories on their heads. Kahneman and Tversky’s research on heuristics and biases, and their Nobel Prize-winning contribution, prospect theory, poured real, irrational, only-human behavior into the calculations, enabling much more powerful predictions of how individuals really choose between risky options.

One keystone of prospect theory is loss aversion, or the discovery (based on numerous experiments reported in a classic article in the journal Econometrica1) that winning $100 is only about half as appealing as losing $100 is unappealing. The idea that the relationship between value and losses/gains is nonlinear — or put more simply, that “losses loom larger than gains” — is important for decisions involving risks, and opens the door for framing effects, in which the context and phrasing of a problem can influence a person’s choice.
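
One common way to write down this asymmetry is a prospect-theory-style value function, in which losses are scaled up by a loss-aversion coefficient of roughly two. The sketch below uses illustrative textbook-style parameters; they are not figures reported in the article.

```python
def prospect_value(x, alpha=0.88, loss_aversion=2.25):
    """Kahneman-Tversky-style value function (illustrative parameters)."""
    if x >= 0:
        return x ** alpha                      # gains: concave curve
    return -loss_aversion * ((-x) ** alpha)    # losses: steeper, so they "loom larger"

print(prospect_value(100))    # ~57.5    subjective value of winning $100
print(prospect_value(-100))   # ~-129.5  losing $100 hurts more than twice as much
```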

Something as simple as whether a problem is phrased in terms of winning or losing can radically affect our decisions. In one of their studies, Kahneman and Tversky presented two groups of participants with a choice involving hypothetical medical treatments for a deadly disease. The first group was told that if one treatment was given to 600 people with the disease, 200 of them would be saved for sure; if the patients were given another, riskier treatment, there was a 1/3 chance that all 600 would be saved and a 2/3 chance that no one would be. The second group was given exactly the same choice, but framed in terms of lives lost instead of lives gained: The sure option meant 400 people would certainly die; the risky treatment meant a 1/3 chance that no one would die and a 2/3 chance that all 600 would die. The majority of the first group chose the certain option, saving 200 people. The majority of the second group chose the risky option, gambling on preventing all the deaths even though it was only a one-in-three shot.2
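
The striking part is that the two frames describe numerically identical options; a quick check of expected lives saved (a worked calculation, not data from the study itself) makes the equivalence plain:

```python
# Gain frame: 200 saved for sure  vs.  1/3 chance all 600 are saved
certain_saved = 200
risky_saved = (1/3) * 600 + (2/3) * 0        # = 200.0 expected lives saved

# Loss frame: 400 die for sure  vs.  2/3 chance all 600 die
certain_survivors = 600 - 400                # = 200 saved
risky_survivors = 600 - ((2/3) * 600)        # = 200.0 expected survivors

print(certain_saved, risky_saved, certain_survivors, risky_survivors)  # all 200
```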

In short, the terms you use to present a problem strongly affect how people will choose between options when there are risks involved. As shown in the medical treatment problem, people may seek a sure solution if a problem is phrased in terms of gains, but will accept risk when a problem is phrased in terms of (potentially) averting a loss.

Taking Shortcuts

Real-world problems are often complicated — it’s tough to think objectively about all the variables, and often enough we just don’t know what the odds of different outcomes are. Our brains are naturally wired to reduce that complexity by using mental shortcuts called heuristics. Kahneman, Tversky, and other researchers have identified numerous ways humans simplify decisions via heuristics, as well as the biases such shortcut thinking can produce.

One important heuristic is known as representativeness — the tendency to ignore statistics and focus instead on stereotypes. For example, Steve is described by a former neighbor as a helpful but shy and withdrawn soul who loves structure and detail and takes little interest in people or the real world. When faced with a list of possible occupations that includes farmer, salesman, pilot, doctor, and librarian, people tend to predict Steve is a librarian because he fits a commonly held stereotype. They ignore what ought to be an obvious fact — that there are many, many more farmers in the world than there are librarians. Ignoring base rates, as well as other statistical blind spots like not paying attention to sample sizes and simple misconceptions concerning chance, can lead to serious errors in judgment.3

Another heuristic, known as anchoring, is people’s tendency to stay close to a starting point when making an estimate — even when they know that the starting point could be way off the mark. In one experiment, a wheel was spun in front of participants, yielding a number from 1 to 100; participants were then asked to estimate what percentage of U.N. countries were in Africa by moving upward or downward from the random number. The median estimate for a group whose starting number was 10 was that 25 percent of U.N. countries were African; the median estimate for the group whose starting number was 65 was nearly twice that: 45 percent. (The correct answer is 28 percent.) The starting number significantly biased the estimate, even though participants knew that the number was purely arbitrary.4

Valerie Reyna

People also estimate the likelihood of an event based on the ease with which it comes to mind, or its availability. For example, a clinician sees a depressed patient who says he is tired of life. Recalling other cases of depression he has seen, the clinician may remember one salient event: a depressed patient who committed suicide. He may thus estimate the current patient’s probability of committing suicide as relatively high, even though the majority of depressed patients do not attempt suicide. The relative availability of that one suicide in the doctor’s memory, in other words, biases him to overestimate the likelihood of such an outcome in the present case and perhaps treat the patient accordingly.5

Another important heuristic is the affect heuristic — the tendency of people to assess probabilities based on how they feel toward particular options. According to decision researchers Paul Slovic (Decision Research, Oregon) and Ellen Peters (Univ. of Oregon), people judge an option as less risky if their feelings toward it are favorable, and they consider an option more risky if their feelings toward it are less positive. Those feelings may not correspond to real-world risks. For example, people tend to fear radiation from nuclear power plants more than they fear radiation from medical X-rays, yet it is actually X-rays that pose a greater risk to health.6

Fast and Slow

Those who study how people make decisions often draw a distinction between two types of mental processing. A fast, unconscious, often emotion-driven system that draws on personal experience is contrasted with a slower, more deliberative and analytical system that rationally weighs benefits against costs across all available information. The fast, gut-level way of deciding is thought to have evolved earlier and to be the system that relies most on heuristics. It is this system that produces biases.

Univ. of Toronto psychologist Keith E. Stanovich and James Madison Univ. psychologist Richard F. West refer to these experiential and analytical modes as “System 1” and “System 2,” respectively. Both systems may be involved in making any particular choice — the second system may monitor the quality of the snap System-1 judgment and adjust the decision accordingly.7 But System 1 will win out when the decider is under time pressure or when his or her System-2 processes are already taxed.

This is not to entirely disparage System-1 thinking, however. Rules of thumb are handy, after all, and for experts in high-stakes domains, it may be the quicker form of risk processing that leads to better real-world choices. In a study by Cornell University psychologist Valerie Reyna and Mayo Clinic physician Farrell J. Lloyd, expert cardiologists took less of the relevant information into account than younger doctors and medical students did when deciding whether to admit patients with chest pain to the hospital. The experts also tended to process that information in an all-or-none fashion (a patient was either at risk of a heart attack or not) rather than expending time and effort on shades of gray. In other words, the more expertise a doctor had, the more he or she relied on an intuitive sense of the gist of the situation as a guide.8

In Reyna’s variant of the dual-system account, fuzzy-trace theory, the quick-decision system focuses on the gist or overall meaning of a problem instead of rationally deliberating on facts and odds of alternative outcomes.9 Because it relies on the late-developing ventromedial and dorsolateral parts of the frontal lobe, this intuitive (but informed) system is the more mature of the two systems used to make decisions involving risks.

A 2004 study by Vassar biopsychologist Abigail A. Baird and Univ. of Waterloo cognitive psychologist Jonathan A. Fugelsang showed that this gist-based system matures later than do other systems. People of different ages were asked to respond quickly to easy, risk-related questions such as “Is it a good idea to set your hair on fire?”, “Is it a good idea to drink Drano?”, and “Is it a good idea to swim with sharks?” They found that young people took about a sixth of a second longer than adults to arrive at the obvious answers (it’s “no” in all three cases, in case you were having trouble deciding).10 The fact that our gist-processing centers don’t fully mature until the 20s in most people may help explain the poor, risky choices younger, less experienced decision makers commonly make.

Adolescents decide to drive fast, have unprotected sex, use drugs, drink, or smoke not simply on impulse but also because their young brains get bogged down in calculating odds. Youth are bombarded by warning statistics intended to set them straight, yet risks of undesirable outcomes from risky activities remain objectively small — smaller than teens may have initially estimated, even — and this may actually encourage young people to take those risks rather than avoid them. Adults, in contrast, make their choices more like expert doctors: going with their guts and making an immediate black/white judgment. They just say no to risky activities because, however objectively unlikely the risks are, there’s too much at stake to warrant even considering them.11

Making Better Choices

The gist of the matter is, though, that none of us, no matter how grown up our frontal lobes, make optimal decisions; if we did, the world would be a better place. So the future of decision science is to take what we’ve learned about heuristics, biases, and System-1 versus System-2 thinking and apply it to the problem of actually improving people’s real-world choices.

One obvious approach is to get people to increase their use of System 2 to temper their emotional, snap judgments. Giving people more time to make decisions and reducing taxing demands on deliberative processing are obvious ways of bringing System 2 more into the act. Katherine L. Milkman (U. Penn.), Dolly Chugh (NYU), and Max H. Bazerman (Harvard) identify several other ways of facilitating System-2 thinking.12 One example is encouraging decision makers to replace their intuitions with formal analysis — taking into account data on all known variables, providing weights to variables, and quantifying the different choices. This method has been shown to significantly improve decisions in contexts like school admissions and hiring.
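
As a rough sketch of what replacing intuition with formal analysis can look like in practice, consider a simple weighted linear score over quantified attributes; the attributes, weights, and candidates below are invented for illustration, not taken from the studies cited.

```python
# Hypothetical hiring example: rank candidates by a weighted sum of
# quantified attributes instead of an interviewer's gut impression.

weights = {"test_score": 0.5, "years_experience": 0.3, "structured_interview": 0.2}

candidates = {
    "A": {"test_score": 82, "years_experience": 4, "structured_interview": 70},
    "B": {"test_score": 74, "years_experience": 9, "structured_interview": 85},
}

def linear_score(attributes):
    return sum(weights[name] * value for name, value in attributes.items())

ranking = sorted(candidates, key=lambda c: linear_score(candidates[c]), reverse=True)
print(ranking)  # candidates ordered by their formal score, e.g. ['B', 'A'] here
```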

Having decision makers take an outsider’s perspective on a decision can reduce overconfidence in their knowledge, in their odds of success, and in their time to complete tasks. Encouraging decision makers to consider the opposite of their preferred choice can reduce judgment errors and biases, as can training them in statistical reasoning. Considering multiple options simultaneously rather than separately can optimize outcomes and increase an individual’s willpower in carrying out a choice. Analogical reasoning can reduce System-1 errors by highlighting how a particular task shares underlying principles with another unrelated one, thereby helping people to see past distracting surface details to more fully understand a problem. And decision making by committee rather than individually can improve decisions in group contexts, as can making individuals more accountable for their decisions.13

In some domains, however, a better approach may be to work with, rather than against, our tendency to make decisions based on visceral reactions. In the health arena, this may involve appealing to people’s gist-based thinking. Doctors and the media bombard health consumers with numerical facts and data, yet according to Reyna, patients — like teenagers — tend initially to overestimate their risks; when they learn their risk for a particular disease is actually objectively lower than they thought, they become more complacent — for instance, by forgoing screening. Instead, communicating the gist (“You’re at some risk; you should get screened because screening detects disease early”) may be a more powerful motivator of the right decision than the raw numbers. And when statistics are presented, doing so in easy-to-grasp graphic formats rather than numerically can help patients (as well as physicians, who can be as statistically challenged as most laypeople) extract their own gists from the facts.14

Elke Weber and the Dalai Lama

Complacency is a problem when decisions involve issues that feel more remote from our daily lives — problems like global warming. The biggest obstacle to changing people’s individual behavior, and to collectively changing environmental policy, according to Columbia University decision scientist Elke Weber, is that people just aren’t scared of climate change. Being bombarded with facts and data about perils to come is not the same as experiencing those perils directly and immediately; in the absence of direct personal experience, our visceral decision system does not kick in to spur us toward better environmental choices, such as buying more fuel-efficient vehicles.15

How can scientists and policymakers make climate change more immediate to people? Partly, it involves shifting from facts and data to experiential button-pressing. Powerful images of global warming and its effects can help. Unfortunately, according to research conducted by Yale environmental scientist Anthony A. Leiserowitz, the dominant images of global warming in Americans’ current consciousness are of melting ice and effects on nonhuman nature, not consequences that hit closer to home; as a result, people still think of global warming as only a moderate concern.16

Reframing options in terms that connect tangibly with people’s more immediate priorities, such as the social rules and norms they want to follow, is a way to encourage environmentally sound choices even in the absence of fear.17 For example, a study by Noah J. Goldstein (Univ. of Chicago), Robert B. Cialdini (Arizona State), and Vladas Griskevicius (Univ. of Minnesota) compared the effectiveness of different types of messages in getting hotel guests to reuse their towels rather than send them to the laundry. Messages framed in terms of social norms — “the majority of guests in this room reuse their towels” — were more effective than messages simply emphasizing the environmental benefits of reuse.18

Yet another approach to getting us to make the most beneficial decisions is to appeal to our natural laziness. If there is a default option, most people will accept it because doing so is easiest — and because they may assume the default is the best choice. University of Chicago economist Richard H. Thaler suggests using policy changes to shift default choices in areas like retirement planning. Because it is framed as the normal choice, most people begin claiming their Social Security benefits as soon as they are eligible, in their early to mid-60s — a symbolic retirement age, but not the age at which most people these days are actually retiring. Moving the “normal” retirement age up to 70 — a higher anchor — would encourage people to let their money grow untouched for longer.19

A starker example of the power of defaults is provided by psychologists Eric J. Johnson (Columbia Univ.) and Daniel Goldstein (Yahoo! Research). In many European countries, individuals are automatically organ donors unless they opt not to be — organ donation is the default choice. In most of these countries, fewer than 1 percent of citizens opt out. The opposite is true in the United States. Although about 85 percent of Americans say they approve of organ donation, only 28 percent give their consent to be donors by signing a donor card. The difference means that far more people in the United States die awaiting transplants.20

In a world ever more awash in choices, people become constant deciders. And the stakes of our decisions — and the consequences of errors — are growing. As Milkman, Chugh, and Bazerman put it, “Errors induced by biases in judgment lead decision makers to undersave for retirement, engage in needless conflict, marry the wrong partners, accept the wrong jobs, and wrongly invade countries.” They go on:

In a knowledge-based economy . . . a knowledge worker’s primary deliverable is a good decision. In addition, more and more people are being tasked with making decisions that are likely to be biased because of the presence of too much information, time pressure, simultaneous choice, or some other constraint. [And] as the economy becomes increasingly global, each biased decision is likely to have implications for a broader swath of society.21

In such a world, understanding and improving decision making will decidedly become a greater and greater priority for psychological scientists of all stripes.
