Gotcha Moment

When science is taken out of context

Robert DeRubeis

In 2010, my colleagues and I published a study on the treatment of depression in a high-visibility journal. It was covered in hundreds of news outlets, including countless blogs. Many physicians told me that it was the topic of conversation and debate in the week, if not the month, after it appeared in the Journal of the American Medical Association (JAMA). What follows is a brief chronicle of our experience with the publication of the paper and its aftermath.

On November 18, 2009, while I was working in my de facto sabbatical office, the Genius Bar in an Apple store in Sydney, Australia, an email arrived from JAMA’s editor, Richard Glass. He was letting my colleagues and me know that our paper would be published if, within four days, we could submit a revised manuscript that addressed the 29 issues raised by reviewers. Was being 18 hours out of sync with then-graduate student and the paper’s first author, Jay Fournier, an advantage or a disadvantage? I did not have enough time to decide the answer, but I knew that in the succeeding days we would rely heavily on the Genius Bar’s facilitation of our Skype conversations, Dropbox-supported tag-team exchanges of drafts, and countless rounds of emails.

The findings of our meta-analysis were as simple as their context and implications were complex. In a combined sample of 758 patients, drawn from the only six extant datasets that could address our research question, we found that:

1.     Antidepressant medications show little if any advantage in their reduction of depressive symptoms, relative to pill-placebos, in patients with non-chronic “mild” or “moderate” major depressive disorder;

2.     The evidence for an advantage of medicine over placebo among those with “severe” depression was equivocal; and

3.     Consistent with the vast pharmacotherapy literature that is based primarily on studies of patients with more severe depressions, the advantage of the drugs used to treat the disorder was substantial for those who began treatment with “very severe” depression.

One month after receiving Glass’s email, I was back in my Penn office to meet with Jay, who that morning had departed Pittsburgh for the five-hour drive to Philadelphia. We were to participate in the production of video materials that would accompany JAMA’s press release, as ours had been selected as the featured paper in the January 6 edition. (It was becoming clear that the first months of 2010 would bring a great deal more attention from the press and the public than anything I had experienced before.) My hope was that the media would use our findings as an opportunity to shed light, with a minimum of heat, on a complex and important problem. I agreed to dozens of requests for interviews, including those from national outlets such as NPR, Newsweek, and USA Today, but also from hosts of radio call-in shows in Los Angeles and Pittsburgh. My aim in accepting those invitations was to learn from the experience as well as to disseminate our findings. And learn I did.

Many years ago, the message we received from reviewers of an NIMH grant application I had submitted with my colleague Steven Hollon was loud and clear: Our plan to conduct a large randomized, placebo-controlled comparison of antidepressants vs. cognitive therapy in the treatment of major depressive disorder should, as is common practice in pharmaceutical trials, focus on the more severely depressed patients, and exclude those with mild-to-moderate symptoms. Our research team was frustrated and puzzled. We knew that most patients diagnosed with major depressive disorder, and therefore the majority of those to whom the medicines are marketed, are not severely depressed. How could treatment research continue to neglect such a large subset of the depressed population?

A Gotcha Moment
The handling of our findings by the media varied widely, even within a given outlet. Alarming teasers bombarded viewers of CNN Tonight for what turned out to be, later in the hour, a responsible accounting of our work and its implications, including an interview of me by the host. Although a few of her questions might have been designed to lure me into saying more than the research could support, my psychiatrist colleagues who co-authored the JAMA paper tell me the story, and the interview, represented our findings faithfully. But my colleagues were appalled by the TV teasers, as was Judith Warner, who led off with a finger-wagging at CNN in her January 9, 2010, New York Times op-ed, “The Wrong Story About Depression.”

A few days later, the Science section of the Times featured an impassioned warning about our work, entitled “Before You Quit Antidepressants…” In it, Cornell psychiatrist Richard Friedman urged readers to dismiss our “provocative claim…,” describing it as “confusing, if not alarming…,” as it had “contradicted literally hundreds of well-designed trials, not to mention considerable clinical experience, showing antidepressants to be effective for a wide array of depressed patients.” He encouraged readers to be suspicious of our “so-called meta-analysis,” as such analyses “can be tricky.” The remainder of the piece listed various quibbles, all of which we had encountered in the JAMA review process, which was the most rigorous of any such vetting that my research has ever undergone. We had, of course, addressed most of Friedman’s complaints in the paper itself.

The “gotcha” moment came when, during a phone conversation with Friedman in which I had hoped to help him write a great article, I stated, responsibly if naively, that “…we can’t know that [the results] generalize” to medications other than those included in our analyses. Only a portion of my statement was quoted, out of context, and although I may have been made to look foolish to some readers, most, I suspect, took into account the evident bias in Friedman’s piece. What readers never learned was that Friedman remained silent when I asked him if he (or any expert he knew) would have any reason to expect that a different pattern would be evident with other antidepressants. The Times printed our rejoinder a week later, but I cannot help thinking that most readers were left confused, not edified. A theme in Warner’s and Friedman’s articles (and in several others) was the concern that our work would lead some patients not to start, or to stop, taking medicines that would have helped them. It would not surprise decision scientists to hear that none of these pieces acknowledged that some, indeed perhaps far more, patients who would otherwise have been given medicines unnecessarily would be spared the expense and the side effects. And a subtle but crucial point was also lost: Many of those helped by medications miss out on the opportunity to learn how to manage their affective disorder behaviorally and cognitively, a lesson that can be invaluable over the course of the illness.

The implicit message of these negative media reports was clear: If even one person is adversely affected by our findings, then the publication and publicizing of our paper was a mistake. Such messages were present, though generally less heavy-handed, in pharmaceutical-company-sponsored websites, such as Medscape. I am often asked if I have received “unfriendly” messages from pharmaceutical companies, or if I have been their target in media reports. In fact, the industry has known for decades what Friedman acknowledged in his piece: that “hundreds of well-designed trials” have mostly “focus[ed] on severely ill patients.” I have assumed that, in part for this reason, the pharmaceutical industry has not raised strong public objections to our findings.

However, every cloud has a silver lining. Throughout this year I have addressed groups that probably would not have asked me to present our work if our study hadn’t received such publicity. The Congressional Biomedical Research Caucus invited me to Capitol Hill in mid-March, and I taped a “1:2:1” podcast for the Stanford University School of Medicine website. (Previous guests had included Francis Collins and Maria Shriver, so my decision to be on the podcast was rather an easy one.) When I presented to a group of OB/GYN specialists at the Reading (PA) Hospital, they shared their concern about frequent requests for antidepressant medications from their patients, many of whom were self-diagnosed, and some of whom would leave their request as a voicemail. These doctors yearned for better information, for themselves and for their patients, about how to address depression other than by taking medications.

This is a crucial challenge for our health care system. Depression is a major public health problem. Policymakers, treatment providers, and patients need unbiased research to inform treatment questions; responsible dissemination of this information by the press is critical. We will make progress only if we question current practices, fear no data, and resolve to address important questions with rigorous, clinically informative research.

Fournier, J. C., DeRubeis, R. J., Hollon, S. D., Amsterdam, J. D., Shelton, R. C., & Fawcett, J. (2010). Antidepressant drug effects and depression severity: A patient-level meta-analysis. Journal of the American Medical Association, 303, 47-53.

Observer, Vol. 24, No. 1, January 2011
