Why Psychotherapy Appears To Work (Even When It Doesn’t)


One of the classic papers in the history of psychology is Hans Eysenck’s “The Effects of Psychotherapy: An Evaluation,” published in 1952. The London-based psychologist examined 19 studies of treatment effectiveness, dealing with both psychoanalytic and eclectic types of therapy in more than 7000 cases. His overall conclusion was damning: The studies, he wrote, “fail to prove that psychotherapy, Freudian or otherwise, facilitates the recovery of neurotic patients. They show that roughly two-thirds of a group of neurotic patients will recover or improve to a marked extent within about two years of the onset of their illness, whether they are treated by means of psychotherapy or not.”

Eysenck noted, somewhat wryly, that these findings are encouraging for the neurotic patient—but not so welcome from the point of view of the psychotherapist. He also predicted that therapists would react emotionally to his proof, based on their strong feelings and beliefs in their effectiveness, concluding: “In the absence of agreement between fact and belief, there is urgent need for a decrease in the strength of belief, and for an increase in the number of facts available.”

He was right about the emotional reaction, although it probably would surprise him to know that it persists even today. The number of available facts about scientifically validated treatments has increased dramatically in the 62 years since Eysenck’s evaluation, yet many therapists still insist that their informal clinical observations and intuitions are proof enough of therapy’s power.

Eysenck did not attempt to explain why therapists’ beliefs are so resistant to proof; that was beyond the scope of his analysis. But now a group of psychological scientists is attempting to do just that. Emory University’s Scott Lilienfeld, working with colleagues at five other universities, argues that therapists are subject to the same cognitive biases that skew all human thinking. Rigorous scientific thinking does not come naturally, so these biases lead therapists to infer, and believe in, outcomes that have no real evidentiary support.

Consider those two-thirds of neurotic patients who, Eysenck found, get well on their own. Lilienfeld and colleagues believe that Eysenck overestimated the rate of spontaneous remission, but the fact is that a fair number of people with psychological problems do get better on their own, for a variety of reasons. People mature, or tonic life events occur outside therapy—or people feel better for no apparent reason. This is all good for the patient—as Eysenck noted—but the spontaneity is rarely seen as such by therapists. Instead, they claim (and truly believe) that any improvement must be the consequence of something they did in the consulting room.

Misinterpreting spontaneous remission is one of what Lilienfeld and the others call “causes of spurious therapeutic effectiveness,” or CSTEs. Writing in the current issue of the journal Perspectives on Psychological Science, the scientists provide a taxonomy of 26 CSTEs, only some of which involve taking credit for changes unrelated to therapy. Others may be misinterpretations of real changes that result from incidental features of treatment. For example, patients may improve simply because they are excited about being in therapy, the so-called “novelty effect.” Indeed, it’s said that 15 percent of patients improve between the initial phone call and the first session.

Still other CSTEs are misperceptions of change where none really occurs: For example, patients learn to verbalize their problems in richer detail, which may seem like improvement, yet the problems still persist and cause distress. Alternatively, patients may tell therapists what they think the therapists want to hear, leading to the (false) perception of therapeutic improvement.

The authors attribute all of these misinterpretations of effectiveness to four broad cognitive biases, well known in the social cognition literature:

Naïve realism. This is the ubiquitous assumption that the world is precisely as we see it. This heuristic, or mental shortcut, leads us to focus on what is most obvious and to ignore other, subtler facts.

Confirmation bias. This is the common, deeply ingrained tendency to seek out evidence consistent with one’s hypothesis, and to ignore or distort any evidence to the contrary. So a therapist may use a particular intervention, and mentally note only the sessions in which the patient showed improvement—forgetting those where the patient did worse.

Illusory causation. This is the powerful propensity to see cause-and-effect where none exists. This is what causes therapists (and patients) to misconstrue spontaneous remission as therapeutic effectiveness.

Illusion of control. This related cognitive bias is the tendency to overestimate one’s ability to shape events. It predisposes therapists to believe they possess more power than they do.

These four overarching cognitive biases lead to all sorts of irrational thinking, including the 26 specific CSTEs in the authors’ taxonomy. Taken together, they underscore the need for rigorous scientific design and controlled research, and less intuition, in clinical decision making. The often lamented gap between science and practice is in essence a clash over these beliefs. The authors believe that the reluctance of some therapists to adopt evidence-based practices does not reflect low intelligence or willful disregard of the evidence. Rather, it stems from an erroneous belief that evidence from clinical observation is as trustworthy as evidence from controlled scientific study: the same tension between belief and fact that Eysenck described more than six decades ago.

Follow Wray Herbert’s reporting on psychological science in The Huffington Post and on Twitter at @wrayherbert.



I perceive two prominent problems with this analysis. First, the article does not mention regression to the mean, which accounts for a substantial (but difficult to quantify) proportion of apparent “improvement.” For any disease, from arthritis to depression, people seek help when their symptoms are at a peak; in effect, the different trajectories of illness are temporarily aligned at a high level of symptoms. Over time, however, the individual trajectories disaggregate again. Thus, even if the trajectories before and after help-seeking are unchanged, the mean symptom level will regress from its peak value at presentation toward the long-term mean, which is lower, or “better.”
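The regression-to-the-mean point is easy to demonstrate numerically. Below is a minimal sketch, assuming (purely for illustration) that each person’s symptom score fluctuates as Gaussian noise around a stable personal baseline; the function name, threshold, and parameters are hypothetical, not taken from any study. People “seek help” on a day their score spikes above a threshold, and a later re-measurement is lower on average even though nothing was done:

```python
import random

random.seed(42)

def simulate_regression_to_mean(n_people=10000, threshold=1.5):
    """Each person has a stable baseline severity; daily scores are
    baseline plus noise. Selecting people on a high-symptom day and
    re-measuring later shows apparent 'improvement' with no treatment."""
    at_presentation, later = [], []
    for _ in range(n_people):
        baseline = random.gauss(0, 1)               # long-term severity
        day_score = baseline + random.gauss(0, 1)   # today's noisy reading
        if day_score > threshold:                   # symptoms at a peak -> seeks help
            at_presentation.append(day_score)
            # re-measure later: same baseline, fresh noise, no intervention
            later.append(baseline + random.gauss(0, 1))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(at_presentation), mean(later)

peak, followup = simulate_regression_to_mean()
print(f"mean score at help-seeking: {peak:.2f}")
print(f"mean score at follow-up:    {followup:.2f}")  # lower, though untreated
```

Because selection happens on a noisy peak, the follow-up mean sits closer to the population’s long-term mean, which is exactly the “regression of the peak mean value at presentation” described above.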
Second, the biggest problem with most so-called EBPs is their focus on technique rather than context. The causes of the large effect sizes seen with most psychotherapies are largely unexplained and, with a few prominent exceptions, do not appear to be related to specific techniques. The “contextual” interpretation of psychotherapy, à la Bruce Wampold, is that a multitude of contextual factors likely account for change. One of these is the strength of the therapist’s belief in what she does. This stands in stark contrast to recommendations to de-emphasize the human interaction in favor of technique.
Academic psychologists have pursued the technical approach in an attempt to replicate pharmaceutical trials, make a name for themselves, and legitimize psychotherapy. That attempt, in my view, has been an almost utter failure (except for the making a name for themselves part). Unfortunately, because of their own cognitive biases, they are of course convinced of the correctness of their assertions. Perhaps they should start first with examining their assumptions. But then, humility was never a prominent characteristic of academics.
