The argument of many of these columns to date has been that psychological science needs to figure more prominently in the formulation of public policy. But I have found myself, a psychologist whose career has been devoted to experiment and theory, strikingly unsure of what to say that would help shape public policy. I assume I am not alone in this predicament.
One thing I do know: We need concrete examples of how we enter the policy-making process more than we need general exhortations to do so. That is, we need examples that make clear the barriers involved and the transformations we would have to put our knowledge through to have impact on policy. And we need stories from people who have successfully breached those barriers. To get such a story, I contacted one of my most adventurous former students. His interesting account follows.
As President of The Hastings Center, Tom Murray heads the premier institute that carries out analyses of ethical issues in a way that bridges the gap between "what is" and "what is possible" – between how we actually think about issues and more "philosophical" ethical concerns. At Hastings, social science and ethics are made to speak to one another, offering us a model of psychology in the public policy arena that is valuable in its own right and may also provide a model for us to think about as we connect with other disciplinary perspectives in addressing policy decisions. This world of ethical policy formulation has found it appropriate to be led by a psychologist: a quite informative fact. – John Darley, APS President
I went in search of Psychology and discovered Ethics. Then I went in search of Ethics and rediscovered Psychology. Or, more precisely, I discovered that I had never left Psychology behind, and that my doctoral training in Psychology provided insights into the complexities of practical moral conundrums.
At least a handful of psychologists have made the journey to prominence in bioethics, including Ruth Faden at Johns Hopkins University and Adrienne Asch at Wellesley College. My own sojourn began in graduate school in social psychology at Princeton when I was charged with running a deception study of helping behavior. The reactions of my subjects left me as shaken and disturbed as the experience frequently left them. The moral qualms I felt followed me through my first faculty positions and led eventually to two post-doctoral fellowships, one at Yale to read ethics, and the second at The Hastings Center – the world’s first and leading bioethics research institute.
In 1979, The Hastings Center was a decade old, and a steady stream of famous scholars had passed through its doors, usually as members of the various interdisciplinary research groups cobbled together to contemplate vexing moral problems in medicine and the life sciences. Philosophers, theologians, physicians, and biological scientists made frequent appearances, and – every once in a while – a social scientist would arrive, usually a political scientist, sociologist or anthropologist. There must have been the occasional psychologist in the crowd, but it is difficult to recall any.
Too often, the interactions between social scientists and philosophers were futile and frustrating exercises in mutual unintelligibility. Philosophers were trained to map the intellectual landscape, parse whatever interesting concepts they found there, and articulate and critically evaluate ethical arguments. Philosophers were, with rare exceptions, not trained to create, interpret, or critique empirical studies. Social scientists, on the other hand, understood how to frame and answer certain kinds of empirical questions – those within the purview of their field and methodologies – but they were often mystified by the forms of reasoning and argument employed by philosophers. What does a Kantian distinction between heteronomy and autonomy have to do with whether physicians should tell patients they have cancer? (In 1979, whether to tell the truth about a grave diagnosis such as cancer was still a contentious issue within medicine.)
An uneasy truce existed between scholars who traffic in normative claims – right and wrong, good and bad – and social researchers unfamiliar with philosophical methods and uncomfortable with claims that cannot be empirically tested. In part the difficulties were based on habits of the mind and heart, in part on struggles over intellectual supremacy and political and economic power. (The role of anthropologists, sociologists and medical historians in medical education was diluted by the arrival of philosophers and theologians. The resentments persist to this day.) Social scientists upbraided philosophers for failing to take into account social context and structure, culture, power, and the findings of empirical social research. Philosophers mocked social scientists for their clumsiness at formulating moral arguments and their reluctance to reach moral conclusions. The so-called “Naturalistic Fallacy,” that what “is” cannot by itself determine what ought to be, seemed an unbridgeable chasm separating the normative and the empirical.
Or so at least many people thought. The reality turns out to be much richer and vastly more interesting.
Take one of the first studies in which I became involved when I arrived at The Hastings Center. The National Science Foundation had funded a proposal to study ethical and conceptual issues in non-therapeutic drug use. Half of the inquiry focused on so-called “drugs of abuse.” My half, in contrast, looked at drugs used not to treat or ameliorate disease but rather to enhance performance. Where do people use drugs as performance enhancers? Setting aside for now the caffeine in my coffee that keeps me alert, the most widespread and best documented use of performance enhancing drugs was in sport, especially high-level sport such as the Olympics and professional leagues such as the NFL.
The conceptual problems were knotty: What distinguishes enhancement from therapy? What forms of enhancement are acceptable, even praiseworthy? What forms are unethical? What, for that matter, makes something a drug? (One scientist member of the research group offered that in her lab the operational definition of a drug was “any substance that, injected into a rat, yields a scientific paper.”) The ethical issues at first glance seemed equally difficult. Given the enormous preference our culture has for individual liberty, what could count as a sound moral reason for limiting an athlete’s freedom to use whatever means of performance enhancement she or he wants? Most opponents of performance enhancing drugs in sport appealed to paternalistic reasons: Athletes will hurt themselves if they use drugs. Unfortunately, this argument has many flaws. Not all drugs used to enhance performance are known to be harmful (although many probably are, certainly in the doses used by athletes). In some sports athletes run greater risks merely by participating – careening down a snowy mountain with a couple of thin boards strapped to your feet; colliding at full speed with 300 pound goliaths with a football tucked under your arm. Prohibiting athletes from taking uncertain risks from drugs seems, at a minimum, inconsistent. In any case, paternalism misses the very point of individual liberty: Whether to accept risks should be that person’s choice based on her or his balancing of risks and benefits, values to be pursued and evils avoided.
As a psychologist, I listened carefully to the athletes themselves, their coaches, trainers, and physicians. Athletes were not taking drugs as glorious declarations of their transcendent freedom. They took drugs because they believed their competitors were taking them. They took drugs because they had devoted their lives to achieving excellence in their sport, and they feared they would become also-rans, losing to people they could have bested had the playing field been level. Those who took performance enhancing drugs often felt they had no option, sometimes because their coaches or national federations, as in the former East Germany, ordered them to take the drugs or lied about what they were taking, or more commonly, because they suspected that their competitors were gaining an unfair and unearned advantage. Other athletes refused to use drugs. Some of those dropped out of the competition; some continued to compete. Some who persevered won, but fewer than if the playing field had been truly level.
The key insight was to recognize the contribution of social knowledge – about organizations, hierarchies, life plans, reference groups, persuasion, conformity, culture, expectations – to moral discernment. My training in social psychology enabled me to see what might otherwise have appeared as background noise to a philosopher; my later training and experience in philosophy and bioethics allowed me to incorporate those social insights into ethical analyses and arguments.
The project on performance enhancing drugs in sports may have been the first time that I saw the complementary power of psychology and moral philosophy. It would not be the last as my work wandered into other issues such as organ transplantation, genetics, reproductive technologies, and cloning. And now, after 20 years, I may finally have the opportunity to update and refine my research into the ethics of performance enhancement in sports, as a new source of possible funding has emerged. Patience, they say, is a virtue. So, I believe, is bringing psychology into intimate and sustained dialogue with bioethics.