Closing the Science-Practice Gap

This article is part of a series commemorating APS’s 25th anniversary in 2013.

The Association for Psychological Science is an organization of which I am proud to be a member, in no small measure because it has played a vital role in narrowing the science-practice gap — the sharp divide between the research literature concerning clinical interventions and its application to clients. [1]  In the following paragraphs, I offer some thoughts regarding the past, present, and future of clinical science, and focus on challenges that confront us when attempting to bridge the schism between science and practice. Because I incline toward candor, my remarks will be blunt and at times provocative, but I hope they will be constructive.

The Science-Practice Gap: Grounds for Optimism

At times, those of us who are advocates of clinical science have harped on the negative, and I have occasionally been guilty of this tendency myself. Certainly, there are ample grounds for frustration, as the science-practice gap remains too wide. Among other things, survey data indicate that:

  • large proportions of clinicians who treat patients with anxiety disorders do not use exposure-based methods;
  • many or most children with autism spectrum disorders receive scientifically unsupported interventions;
  • most individuals with major depressive disorder receive suboptimal treatment; and
  • sizeable pluralities of clinicians continue to administer suggestive techniques, such as hypnosis and guided imagery, to recover purportedly repressed memories of abuse (Baker, McFall, & Shoham, 2008; Lilienfeld, Ritschel, Lynn, Cautin, & Latzman, in press).

At the same time, things are looking up. Propelled largely by the wave of evidence-based practice (EBP) in medicine, clinical psychology has been dragged — at times kicking and screaming — into EBP, which is traditionally regarded as a three-legged approach that incorporates research evidence, clinical expertise, and client preferences and values (Spring, 2007). The most influential operationalization of EBP’s research prong is its instantiation as empirically supported therapies (ESTs), treatments found to be efficacious for a given disorder in independently replicated controlled trials.

ESTs are controversial and disliked by many (e.g., Bohart, 2002). Some criticisms of ESTs, such as the observations that their criteria rely on statistical significance at the expense of clinical significance and that they artificially dichotomize treatments as either empirically supported or unsupported, have been constructive (e.g., Herbert, 2003; Westen & Bradley, 2005). Others, such as the objection that ESTs unduly restrict freedom in clinical decision making, strike me as less compelling: constraining our inferences in light of evidence is precisely how science reduces uncertainty (McFall & Treat, 1999).

In essence, science tells us that we cannot simply believe or practice whatever we wish; our views must be constrained by evidence. In this respect, ESTs are a decided improvement over what we had before, which was little more than a vague exhortation to clinicians to “practice scientifically.” Still, they remain a crude operationalization of EBP’s research prong: therapeutic efficacy falls along a continuum, and some treatments will surely prove more successful for certain features of specific disorders than for others. In the coming decade, I hope that our field capitalizes on the momentum generated by the EST movement to develop more nuanced and quantitatively sophisticated approaches to operationalizing therapeutic efficacy (e.g., Miller & Wilbourne, 2002).

Another reason for optimism is the heightened emphasis on accountability in health care (Porter, 2010). Regardless of one’s views on the Patient Protection and Affordable Care Act (colloquially called Obamacare), we can all agree that in the coming decades, psychologists will experience mounting pressure to demonstrate the effectiveness of their treatments (Barlow, 2004). Such forces will compel us to adopt evidence-based interventions, whether we like it or not. In this regard, I hope that quality-improvement technologies, which aim to enhance treatment by monitoring errors in practice and correcting them, will come to play a more central role in mental health care (O’Donohue, Ammirati, & Lilienfeld, 2011).

Challenges to Clinical Science

Still, closing the science-practice gap will be difficult. This divide has been with us at least since the late 19th century (Cautin, 2009) and has manifested itself in a host of fields in addition to clinical psychology, including social work, counseling, and psychiatry. This lengthy history suggests that the science-practice divide originates in deep-seated attitudinal differences. Although the divide is often framed as stemming from one side valuing evidence more than the other, that framing oversimplifies matters. It in fact reflects fundamental differences in how individuals conceptualize “evidence” to begin with: proponents on one side regard rigorous, controlled data as the most valid source of evidence for clinical claims, whereas proponents on the other regard subjective observations and intuition as the most valid source (see also McHugh, 1994). In the coming years, bridging the science-practice divide should be a major priority for APS and for our field at large. I envision three sets of potential impediments confronting us along the way: scientific challenges, communication challenges, and training challenges.

Scientific Challenges

The EST movement presumes the existence of clear-cut differences in the efficacy of treatments for specific disorders. Yet some researchers argue that overall differences in efficacy across psychotherapies are minimal (Wampold, 2001), a conclusion dubbed the Dodo Bird verdict after the Dodo in Alice in Wonderland, who declared following a race that “everybody has won, and all must have prizes.” The evidence demonstrates that the Dodo Bird verdict, at least in its strict form as an assertion of the null hypothesis for all treatment-by-disorder interactions, is false. For example, exposure therapies are especially efficacious for anxiety disorders (Chambless & Ollendick, 2000; Tolin, 2010).

Nevertheless, it is unwise to dismiss the critics’ counterclaims too cavalierly, as our field has sometimes done. They have a point: Finding evidence for treatment specificity for specific disorders has not been easy, and nonspecific factors, such as the therapeutic alliance, account for nontrivial amounts of variance in therapy outcome (but see Feeley, DeRubeis, & Gelfand, 1999). It will be essential for advocates of clinical science to delineate the boundary conditions under which specific therapeutic techniques (e.g., exposure, behavioral activation) matter and those under which they do not. For some disorders, treatment generality will probably prove to be the rule; for others, treatment specificity will almost surely prevail. For example, depression is largely a disorder of demoralization and may respond to a host of therapies that activate the brain’s reward circuitry (see Blood et al., 2010); in contrast, obsessive-compulsive disorder and anxiety disorders may respond primarily to exposure treatments.

Communication Challenges

Those of us who are proponents of clinical science need to realize that our ultimate goal is to persuade. But frankly, we have a communication problem. At times, some of us in academia, myself included, have come across as overly strident or dismissive of individuals on the clinical side of the science-practice divide. In other cases, we have been reluctant to take seriously the concerns of clinicians regarding the pragmatic challenges of transporting ESTs to real-world settings (Lilienfeld et al., in press; Shafran et al., 2009). By all means, we should be assertive in our advocacy of evidence-based interventions; but we should also be respectful of those who do not share our perspectives and mindful of the fact that we can learn from them, especially with regard to the difficulties of implementing evidence-based treatments on the front lines of practice.

More broadly, we should recall that scientific thinking does not come naturally to any of us (Lilienfeld, 2010; McCauley, 2011). As a consequence, opposition to evidence-based approaches is to be expected, even from thoughtful students and colleagues. The best antidote to such resistance is not scorn or derision, but patient yet forceful persuasion. Science is a prescription for humility (McFall, 1996), and we should try to model that humility for our colleagues and students. In communicating with those on the other side of the science-practice schism, we would be well advised to heed Spinoza’s dictum: “I have made a ceaseless effort not to ridicule, not to bewail, not to scorn human actions, but to understand them” (Shermer, 1997, p. 61).

Training Challenges

All faculty members in clinical psychology graduate programs are aware of a dirty little secret: Although our students are very bright, some are simply not turned on by conducting research. We have often forsaken these students, which I regard as a serious mistake. We sorely need more practitioners who can deliver evidence-based treatments and model scientific thinking for their colleagues. Hence, reaching out to would-be practitioners who are not researchers at heart should become a priority for clinical science instructors. We need to make clear to these students that we welcome and embrace them if they operate as scientists in the clinical setting, delivering assessment and therapeutic techniques based on research evidence and striving to reduce errors in clinical inference.

The new Psychological Clinical Science Accreditation System (PCSAS) has much to recommend it: By recognizing outstanding clinical graduate programs that train high-quality researchers, it should help to ratchet up our field’s disconcertingly low training standards. Yet I have one concern: PCSAS recognizes only research-oriented programs, neglecting practitioner-oriented programs that do a solid job of training their students to think and practice scientifically. As the PCSAS initiative progresses, we must remain cognizant of the pressing need to train clinicians to think and operate scientifically.

Finally, I worry about several recent trends in graduate training that risk undermining the scientific basis of our profession. Increasing numbers of clinical graduate programs are de-emphasizing broad-based coursework and instead emphasizing the acquisition of specialized research skills, such as brain imaging techniques. These tools are immensely valuable for some purposes. Yet, we must be wary of hyper-specialization. We should aim to produce clinical scientists who not only possess specialized expertise in at least one substantive domain, but also possess sufficiently broad knowledge to conceptualize research and clinical problems in proper perspective and to collaborate fruitfully with investigators in allied areas.

Let us also not forget about our past (Benjamin & Baker, 2009). Few graduate programs in clinical psychology teach their students much about the history of psychology anymore. (Over my futile objections, my own department at Emory University recently voted to eliminate the history of psychology requirement.) Some instructors seem to believe that anything published prior to about 1980 is out of date. Yet, students must appreciate psychology’s grand traditions and key animating debates so that they do not perpetually reinvent the wheel. And to minimize their risk of errors in research and clinical practice, they must learn about our discipline’s previous mistakes and understand how we came to correct them.

Reaching the Skeptics

The science-practice gap remains problematic, but there are reasons to be sanguine. Recent demands for health-care accountability will necessitate a heightened emphasis on evidence-based practice in clinical psychology and allied fields. To close the science-practice gap, however, we must redouble our efforts to reach individuals who are skeptical of a scientific approach to clinical practice and to persuade them that science, although imperfect, is ultimately our best safeguard against human fallibility. APS is well-positioned to play a leading role in that effort.

Footnote

[1] Many authors, myself included, have frequently used the phrase “scientist-practitioner gap,” but I have come to regard this terminology as misguided. This term implies erroneously that the roles of the scientist and practitioner are mutually exclusive. My central thesis is that one should always be operating as a scientist regardless of the setting in which one works; that is, one should be attempting to compensate for biases by drawing on systematic safeguards against them (see also McFall, 1991).

