One of the major changes in clinical psychology during the past decade has been the growing call for — and the rise of — empirically supported treatments (ESTs). With this change has come a need to re-evaluate the standards for empirical support and the way we appraise treatments.
At this year’s Clinical Science Forum, researchers David Tolin (The Institute of Living), Evan Forman (Drexel University), APS Fellow Dean McKay (Fordham University), and Brett Thombs (McGill University, Canada) discussed the future of ESTs.
In the early 1990s, a task force spearheaded by the Society of Clinical Psychology created a widely used set of guidelines for determining whether a treatment is empirically supported. The task force also compiled a list of treatments considered empirically supported. Housed on the Society of Clinical Psychology’s website, the list has since expanded to include additional treatments that meet the task force’s standard of evidence; however, according to the panelists, the current criteria for empirical support are too weak. Another problem is the lack of a clear way to remove a treatment from the list if future evidence calls its empirical support into question.
The panel pointed out that, under the current standards, a typical patient is unlikely to receive evidence-based treatment. Speakers indicated a need to create a bridge between the research community and practitioners to identify and promote efficacious treatments.
Beyond the desire to provide the best care, clinicians have other reasons to worry about the use — or lack thereof — of ESTs. More and more people, especially those outside the psychological science community, see clinical psychology as a health-care field, which means it will be expected to meet the same types of regulations as other medical disciplines. Those disciplines face increasing regulation, and clinical psychology may be next to come under scrutiny. The panel expressed concern that an inability to support treatments with solid science will cause problems when clinical science is put under the regulatory microscope.
These concerns have led several groups to consider making improvements to the ways interventions are studied and assessed for clinical efficacy. The NIH’s Research Domain Criteria (RDoC) project represents a new way to conceptualize psychopathology research. The RDoC matrix organizes research related to psychopathology and treatment. It also helps scientists identify areas where more research is needed. The American Psychological Association (APA) has proposed its own solution, suggesting massive systematic literature reviews examining the efficacy of individual treatments. Panels of experts would then review and discuss the studies — and their findings — and make recommendations to practitioners about the usefulness of the treatments.
Panel members criticized the type of review proposed by APA for being painfully slow. “The field can’t very well wait decades for this to come out,” said Symposium Chair Tolin. He noted that psychological science needs a rubric to guide it through current challenges: “Is there some way we can bridge the gap between what we currently have and what we aspire to have?”
Bridging this gap will require new ways to conceptualize and study treatments’ effectiveness. The panel questioned how to use and interpret data from long-term studies currently in progress or from studies where multiple comorbidities are present. Because comorbid diagnoses are increasingly common, scientists are working to understand the extent to which treatments for one disorder transfer to other disorders and how — or whether — that transferability should be taken into account when determining efficacy.
The panel also discussed how clinical psychologists measure improvement, noting that many trials evaluate only symptom reduction. Measuring functional improvement in patients could be another useful standard by which to evaluate a treatment’s efficacy. Also of interest to the panel was how researchers set the benchmarks that signify a treatment is indeed empirically supported. Many of the speakers argued that a treatment should be judged against other currently available options: the practical question is not simply whether a treatment works, but whether it is a good choice given the alternatives.
Speakers went on to address multicomponent treatments, especially those situations where some treatment components work and others do not. The panel was not certain whether these types of “treatment packages” should be considered as a whole or whether their individual treatment components should be evaluated separately. The panel suggested that individual assessment of components may be useful if the goal is to advance science or develop a new treatment, but may not be useful if the goal is to develop treatment guidelines.
This type of uncertainty may sum up the current state of empirically supported psychological treatments. The field is at a crossroads: Clinicians know that they must move forward, but they are not yet sure in what direction to move. A consensus on the direction clinicians take in setting the standard for — and evaluating — ESTs will require the input of researchers and practitioners alike. This symposium — and the discussion it generated — is a small step toward re-envisioning the empirical standards of clinical science.