Currently in the United States, a prostate cancer drug is being touted in a novel way: The claimed primary benefit of the drug is not that it reduces the risk of the disease, but rather that it reduces the risk of being treated for the disease. “Men are getting screened, discovering that they have cancers that may or may not be dangerous, and opting for treatments that can leave them impotent or incontinent… Preventing the cancer can prevent treatments that can be debilitating, even if the cancers were never lethal to start with” (Kolata, 2008, p. A1). Such is the state of doctor–patient communication: Most doctors and virtually all patients are unschooled in how meaningfully to compare the risks of foregoing versus undergoing treatment, and the patient’s frantic desire to “do something now” often trumps the doctor’s ancient commitment to “first, do no harm.”
This editorial appears in the Psychological Science in the Public Interest report “Helping Doctors and Patients Make Sense of Health Statistics” by Gerd Gigerenzer, Wolfgang Gaissmaier, Elke Kurz-Milcke, Lisa M. Schwartz, and Steven Woloshin (Vol. 8, No. 2).
The problem that Gigerenzer, Gaissmaier, Kurz-Milcke, Schwartz, and Woloshin address in this exemplar of interdisciplinary and international collaboration could hardly be more pressing. Marshalling one study after another, they demonstrate that, across widely varying samples of health professionals, patients, and policymakers, in all countries studied, statistical illiteracy reigns supreme — often with catastrophic consequences for individual and public health. The media function as enablers of this problem. For example, while the author of the newspaper article quoted in the previous paragraph showed unusual journalistic insight by noting the irony of inoculating people against treatment rather than against disease, she still expressed the benefit of the drug in terms of relative risk reduction (“dropping the incidence [of prostate cancer] by 30 percent”; Kolata, 2008, p. A1) rather than in the much-less-dramatic terms of absolute risk reduction recommended in these pages.
Clearer risk communication would go far in helping patients make informed and intelligent trade-offs between the costs and the benefits of various medical interventions, including the nonintervention of “let’s wait and see.” Innumeracy, as this monograph makes abundantly clear, is an enormous societal problem. Even if statistical literacy could be achieved by following the sensible and feasible recommendations of Gigerenzer and his colleagues, however, the issue of what people take to be credible “evidence” in evidence-based medicine and public health would remain. And part of the problem may have less to do with differences in numeracy than with differences in values. Here, the concept of “cultural cognition” (Kahan & Braman, 2006) may have a valuable explanatory role to play. Cultural cognition refers to a series of social and psychological mechanisms that induce individuals to conform their factual beliefs about contested practices — including contested medical practices — to their cultural evaluations of the activities that these practices promote or discourage. As a result of such processes, people with different cultural worldviews may form different empirical beliefs about which practices ameliorate medical problems and which ones compound them.
Consider the heated debate over the mandatory vaccination of school-age girls for the human papillomavirus (HPV). Some see this policy as essential to the health of young women, among whom exposure to HPV, the primary cause of cervical cancer, is widely prevalent. Others take the position that the HPV vaccine will give teenage girls a false sense of immunity that will lead to their engaging in unprotected sex and thus increase their risk of contracting HIV/AIDS. Cultural cognition theory predicts that the latter position will be more common among individualists — who are likely to view the mandate as displacing private healthcare decision making — than among communitarians; and more common among hierarchs — who are likely to understand it as evincing tolerance for the denigration of traditional sexual mores — than among egalitarians. This prediction has been borne out by recent large-scale survey and experimental research (Kahan, Braman, Slovic, Gastil, & Cohen, 2007). Distressingly, the provision of empirical information on vaccinating against HPV serves only to make cultural disparities in risk–benefit perceptions more pronounced. Whether the transparent communication of health information advocated by Gigerenzer and his colleagues can attenuate the effects of cultural predisposition on people’s receptivity to new empirical evidence remains for research to determine.
The implications of the findings reported in this monograph go far beyond the doctor–patient relationship. Statistical illiteracy is endemic in courtrooms as well as in consultation rooms. The late U.S. Supreme Court Justice Lewis Powell was merely more honest than most of his colleagues when he stated — in regard to a case in which quantitative evidence was introduced to support a claim that the death penalty was disproportionately imposed on murder defendants whose victims were White — “my understanding of statistical analysis … ranges from limited to zero” (quoted in Jeffries, 1994).
The point is not that judges and juries are as statistically illiterate as doctors and patients. That would be bad enough but potentially remediable by the creative educational and presentational strategies recommended by Gigerenzer and colleagues, both in this monograph and in Gigerenzer and Engel (2006), and by others (e.g., Hans, 2007). Rather, the point is that at its doctrinal core, tort law in many American jurisdictions not only discourages but actively penalizes physicians who practice the kind of evidence-based medicine recommended in this monograph. Normally, a defendant’s compliance with “industry customs” is one — but only one — factor for a jury to consider when determining whether the defendant was negligent. Since the late 19th century, however, American courts have treated physician-defendants quite differently from other defendants: Medical customs are not merely admissible to determine the physician’s legal standard of care; they actually define it. The custom-based standard “gives the medical profession the privilege, which is usually emphatically denied to other groups, of setting their own legal standards of conduct, merely by adopting their own practices” (Keeton, Dobbs, Keeton, & Owen, 1984). In many states, proving the standard of care means proving only what physicians customarily do under similar circumstances. Whether there is any empirical basis to support this usual care, or indeed whether the care usually given does more good than harm, is beside the point.
In this regard, the authors of the present monograph recount the sad case of Dr. Merenstein, a medical resident in Virginia who was sued because he did not automatically order a prostate-specific antigen (PSA) test for a patient. Following the evidence-based guidelines of virtually all major medical organizations, Merenstein informed the patient about the risks and benefits of PSA testing and let him make his own decision. The man declined the test and later developed an incurable form of prostate cancer. The plaintiff’s attorney called expert witnesses who plausibly claimed that, for male patients over 50, most physicians in the state routinely order a PSA test without informing the patient or obtaining his consent. The jury found Merenstein’s residency program liable for $1 million in damages. As Dr. Merenstein later stated,
It is often claimed that malpractice is a mechanism for holding physicians accountable and improving the quality of care. This case illustrates quite the opposite: punishing the translation of evidence into practice, impeding improvements to care, and ensconcing practices that hurt patients. In our system, the physicians who are slow to change are the winners. (Merenstein, 2004, p. 15)
This Alice-in-Wonderland situation is changing, however. Peters (2002, p. 913) argues that “gradually, quietly, and relentlessly, state courts are abandoning the custom-based standard of care.” In about a dozen jurisdictions, the jury decides whether the physician behaved “reasonably,” not whether he or she did things the way they’ve always been done. “Although experts still battle in the courtroom, they argue about what physicians should do, not what physicians ordinarily do… The centrality of this doctrinal shift cannot be overstated” (p. 920). As this belated move from custom-based to evidence-based liability takes place, it will become plain that what physicians should do is precisely what they now ordinarily do not do: communicate to patients the risks and benefits of alternative forms of health promotion and treatment in the transparent formats offered so clearly and defended so convincingly in this remarkable monograph. ♦
Gigerenzer, G., & Engel, C. (Eds.). (2006). Heuristics and the law. Cambridge, MA: MIT Press.
Hans, V. (2007). Judges, juries, and scientific evidence. Journal of Law and Policy, 16, 19–46.
Jeffries, J. (1994). Justice Lewis F. Powell, Jr.: A biography. New York: Scribner.
Kahan, D., & Braman, D. (2006). Cultural cognition and public policy. Yale Law and Policy Review, 24, 149–172.
Kahan, D., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2007). The second national risk and culture study: Making sense of — and making progress in — the American culture war of fact (Public Law Working Paper No. 154). New Haven, CT: Yale Law School.
Keeton, W., Dobbs, D., Keeton, R., & Owen, D. (1984). Prosser and Keeton on Torts (5th ed.). St. Paul, MN: West Publishing Company.
Kolata, G. (2008, June 15). New take on a prostate drug, and a new debate. The New York Times, p. A1.
Merenstein, D. (2004). Winners and losers. Journal of the American Medical Association, 292, 15–16.
Peters, P. (2002). The role of the jury in modern malpractice law. Iowa Law Review, 87, 909–969.
APS Fellow John Monahan is the John S. Shannon Distinguished Professor of Law and a Professor of Psychology and Psychiatric Medicine at the University of Virginia. Since 1986, he has directed two large research networks supported by the John D. and Catherine T. MacArthur Foundation in the area of mental health law.
Taking the Scary Out of Breast Cancer Stats
By Carol Tavris and Avrum Bluming
American women fear breast cancer more than heart disease, according to most studies, even though heart disease is responsible for 10 times as many female deaths every year — and heart disease deaths exceed breast cancer deaths in every decade of a woman’s life.
Of women who are diagnosed early with breast cancer, more than 90 percent will survive, and most will not need disfiguring mastectomies or even chemotherapy. But the media understand how deeply women fear breast cancer, and the result is that every study that seems to find a link between some new risk factor and the disease makes headlines everywhere, captures public attention, and stimulates the blogosphere into overdrive.
Grapefruit is the most recent culprit. According to a study in the Journal of the American Medical Association, eating a quarter of a grapefruit a day increases the risk of breast cancer by 30 percent. For many women, grapefruit immediately was toast.
To assess these studies for their real-life implications, let alone for making decisions about our own behavior, the public needs to understand the difference between absolute risk and relative risk. If we tell you that the relative risk of breast cancer is tripled in women who eat a bagel every morning — Relax! It’s not! — that sounds alarming, but it is not informative. You would need to know the absolute numbers of bagel-eating breast cancer patients. If the number shifted from one in 1,000 women to three in 1,000 women, the risk has indeed tripled, but the absolute increase (two extra cases per 1,000 women) is practically meaningless. If the risk had jumped from 100 women in 1,000 to 300, we might reasonably be concerned.
In the large epidemiological studies that generally include tens of thousands of people, it is very easy to find a small relationship that may be considered “significant” by statistical convention but that, in practical terms, means little or nothing. For example, in July 2002, the Women’s Health Initiative reported a 26 percent increase in breast cancer risk for women on hormone replacement therapy, which sounded worrisome. Even if that number were statistically significant — and it was not, by the way — this is what it translates into: The risk of breast cancer would increase within the studied population from five in 100 women to six in 100 women.
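The arithmetic behind these comparisons is simple enough to sketch in a few lines of Python. The numbers below are the text’s illustrative figures (the bagel hypothetical and the rounded WHI estimate), and the helper function is ours, not taken from any of the studies cited:

```python
# Illustrative sketch: translating a relative risk figure into absolute terms.
# The numbers are the hypothetical/rounded figures from the text, not study data.

def apply_relative_increase(baseline_cases, population, percent_increase):
    """Return (new case count, absolute increase in percentage points)."""
    new_cases = baseline_cases * (1 + percent_increase / 100)
    abs_increase_pts = (new_cases - baseline_cases) / population * 100
    return new_cases, abs_increase_pts

# Bagel hypothetical: 1 in 1,000 versus 3 in 1,000 is a relative risk of 3x,
# yet only 2 extra cases per 1,000 women (0.2 percentage points).
print((3 - 1) / 1_000 * 100)        # 0.2

# The same 3x relative risk on a higher baseline (100 vs. 300 in 1,000)
# means 200 extra cases per 1,000 women (20 percentage points).
print((300 - 100) / 1_000 * 100)    # 20.0

# WHI-style figure: a 26% relative increase on a baseline of 5 in 100
# works out to roughly one extra case per 100 women.
new_cases, abs_pts = apply_relative_increase(5, 100, 26)
print(round(new_cases, 1), round(abs_pts, 1))   # 6.3 1.3
```

The output makes the authors’ point concrete: the same relative figure can correspond to a trivial or a substantial absolute change, so a relative risk means little unless the baseline is reported alongside it.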
We now have a fat file folder of all the studies we could find that have reported an association between some purported risk factor and breast cancer. Of these, the ones that got the most attention were three Women’s Health Initiative reports. In 2002, investigators found an increased relative risk of 26 percent from using combined estrogen and progesterone; in 2003, it was 24 percent; and in 2004, the relative risk from using estrogen alone was minus 23 percent (suggesting it was protective against breast cancer).
To put those findings in perspective, consider the published studies that reported increased relative risks of breast cancer from, among other things, working as an Icelandic flight attendant, using electric blankets, and taking antibiotics.
Why was there no call for Icelandic flight attendants to quit (or transfer to Lufthansa), for black women to stop using electric blankets for more than six months a year but only for nine years, or for labeling antibiotics as carcinogens? Because these findings, which were improbable to begin with, were never replicated. In contrast, the increased relative risk of lung cancer from smoking is consistently between 2,000 percent and 3,000 percent. That’s a finding that means something.
Unfortunately, good news doesn’t travel as fast as fear does. In 2006, the Women’s Health Initiative investigators reanalyzed their data and found that the risk of breast cancer among women who had been randomly assigned to take hormone replacement therapy was no longer significant. Women assigned to take a placebo but who had used hormone replacement therapy in the past actually had a lower rate of breast cancer than women who had never taken hormones.
This reassuring but non-scary news did not make headlines. Neither did the real findings from the March 2008 Women’s Health Initiative report, which followed women in the sample who had stopped taking hormones for the previous three years. The researchers reported that the risk of cardiovascular events, malignancies, breast cancers and deaths from all causes was higher in the hormone-replacement-therapy group than in the placebo group even three years after stopping the therapy — pretty alarming. But when we read the article closely, we saw that not one of the associations between hormone replacement therapy and breast cancer, or between the therapy and mortality rates from any cause, was statistically significant. Unfortunately, this did not stop the investigators from highlighting their negative findings as meaningful and troubling, and that is what most of the media picked up.
No wonder the public, assaulted by numbers and frightening headlines, alternates between panic and cynicism. Physicist Richard Feynman once said, “If something is true, really so, if you continue observations and improve the effectiveness of the observations, the effects stand out more obviously, not less obviously.”
The association between hormone replacement therapy and breast cancer becomes less obvious with every study. We all want to understand the risk factors in breast cancer that are “really so,” but to do that, we have to give up entrenched beliefs when the data do not support them and look elsewhere. In the meantime, enjoy your grapefruit. ♦
This article was originally published in the Los Angeles Times on April 17, 2008.