The philosopher John Stuart Mill famously proposed that moral decisions are made according to a principle of utilitarianism: Moral decision makers perform a sort of cost-benefit analysis in an attempt to maximize benefits and minimize harm. According to this view, it’s okay to kill one person if, by doing so, you save the lives of several other people. By contrast, Immanuel Kant believed that morality is not about costs and benefits, but rather about our duty to moral principles and lines that must not be crossed: Harming one person to save five others will not seem morally acceptable if your gut tells you that harming people is wrong.
In his invited address at the 23rd APS Annual Convention, Joshua Greene discussed how this classic philosophical debate reflects competing neural and cognitive systems in the brain that are only now coming to light. The human brain has no distinctive moral faculty; instead, moral judgments involve a confluence of systems that are not dedicated to moral thinking. Greene and his colleagues propose a dual-process model of morality, with two systems that reflect evolved responses to different kinds of situations. When people consider only how bad an action will make them feel, their intuitive emotional response is governed by the amygdala. However, when people weigh both their emotional assessment and a utilitarian calculation, processing occurs in the ventromedial prefrontal cortex (VMPFC), an area of the brain responsible for calculating risk and making decisions. This kind of consideration apparently causes the affective response in the amygdala to be processed by the VMPFC before it translates into behavior, resulting in more utilitarian moral decisions.
Our judgments about the expected moral value of our actions appear to be governed by the ventral striatum, a central part of our reward circuitry, which developed over the course of human evolution in response to things like food and sex. Prehistoric humans naturally assigned a higher value to the first piece of fruit they ate than the fifth. The first piece was valuable because it would sate their appetite, but that value would decrease with each successive piece. Unfortunately, this same reward system is involved in the way we value the lives of people we might help or harm. A rural visitor to a city might be profoundly affected by the sight of a homeless person, whereas a seasoned city-dweller will be inured to the plight of homeless people after passing them every day.
Proximity is another key factor in moral decision making: People who feel compelled to help a person suffering right in front of them are less inclined to help people suffering on the other side of the world. The more distance there is between you and a suffering person, the less acutely you will feel you need to help them. Similarly, people feel better about harming someone if they don’t have to employ physical force and if the victim can’t see them.
These adaptive, intuitive responses helped our ancestors to survive over the course of evolutionary history, but nowadays they can act as stumbling blocks to utilitarian social aims. Greene provided one example of a group of people who have inadvertently trained themselves to make more utilitarian decisions — Buddhist monks. In a small study, he found that the utilitarian moral decisions made by monks increased as a function of the number of hours they spent meditating each day. Similarly, he found that people in public health — who tended to focus on increasing the health of many people — were more likely to make utilitarian decisions than were doctors, who tended to focus on the more immediate well-being of individual patients. In one study, Greene and his colleagues Joe Paxton and Leo Ungar used cognitive reflection tests — tests whose questions invite intuitive but incorrect answers — to prime people before they made moral judgments. People who overrode their intuitions and gave more correct answers on these tests also went on to make more utilitarian moral decisions. These findings indicate that it's possible to adjust a sense of moral intuition that evolved in response only to immediate concerns, and thus to extend the benefits of our actions to reach a wider group of people.