Persistence and Fade-Out of Educational Intervention Effects: Mechanisms and Potential Solutions

Psychological Science in the Public Interest (Volume 21, Number 2)

Time-limited experiences such as educational interventions can have long-lasting effects and alter a person’s life trajectory, but in some instances their effects are short-lived. Understanding the factors that contribute to the persistence and fade-out of intervention effects can improve theories of human development and help researchers create meaningful interventions, with implications for practice and policy.

In this issue of Psychological Science in the Public Interest (Volume 21, Issue 2), Drew H. Bailey, Greg J. Duncan, Flávio Cunha, Barbara R. Foorman, and David S. Yeager review the evidence for persistence and fade-out of interventions, with a focus on educational interventions. They conclude that, despite limited data on the long-term effects of interventions, fade-out appears to be widespread, and persistence appears to depend on several factors. The authors present a model of skill building that might be used to make predictions about persistence and fade-out.

Persistence and Fade-out: Definition and Prevalence

Intervention effects are usually defined as differences in outcomes between a group of people who received an intervention and a group of people who did not. Bailey and colleagues explain that fade-out refers to “a variety of associated but distinct phenomena related to the time course of effects after the completion of an intervention.” Persistence refers to the durability of intervention effects: whether outcomes still differ for those who received the intervention long after it has ended. Here, the researchers treat fade-out, in contrast to persistence, as a pattern of diminishing impacts after an intervention ends. They note that fade-out can coexist with persistence; for example, students who take a class in a new language will experience some forgetting and setbacks, but they are still expected to perform better (showing some persistence) than students who do not take the class.

Bailey and colleagues note a pattern of declining intervention effects in many interventions designed to boost children’s academic performance by targeting specific skills. They do not find any meta-analyses supporting the existence of full persistence (i.e., interventions whose benefits did not fade out at all). However, some interventions that have targeted a broader set of skills or capacities appear to have generated substantial benefits with some degree of persistence. Interestingly, some of the best-known instances of the persistence of educational interventions showed fade-out followed by the emergence of long-term benefits.

Explanations for Fade-out and Persistence

Fade-out does not appear to be an artifact of measurement or publication bias. It does not appear to be caused by misleading effect-size reporting or by baseline or posttreatment imbalances (i.e., “unhappy” randomization in randomized controlled trials, in which groups’ outcomes diverge or converge because of variables other than the intervention). Nor does fade-out appear to result only from a lack of correspondence between the constructs influenced by or measured long after the intervention’s end and the constructs measured immediately after it. Some explanations of fade-out instead point to psychological processes that occur after the intervention, such as forgetting or failing to transfer learning; that is, the benefits of learning one skill may not transfer to other broadly related skills or tasks.

Bailey and colleagues propose that several factors can sustain the benefits of interventions, leading to persistence. These include targeting malleable, fundamental skills that would not have developed in the absence of the intervention. If an intervention successfully builds such skills, sustaining environments are then required to maintain those skill advantages. Persistence thus also depends on institutional constraints and opportunities within the social context, along with the overlap between what interventions build and what the environment offers individuals.

A Model of Skill Building and Its Implications

Building on the factors that promote the persistence of educational interventions, Bailey and colleagues propose a formal model of skill building. This model predicts that interventions will have long-term effects if they target certain skills, including advanced academic and vocational skills and social-cognitive factors, such as children’s implicit theories of intelligence. Other targets might be nutrition, toxic stress, parenting, and basic problem-solving skills. The model also predicts that intensive interventions that affect an individual’s contexts may be the most likely to provide persistent benefits. However, such interventions are usually very expensive. Accordingly, identifying sustaining environments would be quite useful for policymaking because it would allow for a combination of intensive targeted interventions and continued investment in environmental quality that would improve the persistence of interventions’ benefits.

Bailey and colleagues recommend longer follow-up periods in research on the long-term consequences of interventions. Their review shows that only a few intervention studies have assessed impacts after the intervention ended, preventing researchers from getting a clear view of fade-out and persistence patterns. The authors also suggest testing for overalignment between intervention goals and outcome measures (akin to the “teaching to the test” effect) and then reducing such overalignment, so that interventions address gaps in the environments that foster fundamental skills rather than gaps only in the specific skills that measurement scales capture. Other research recommendations include testing interactions between interventions and context and developing models to forecast the long-term impacts of interventions. Regarding policy, Bailey and colleagues recommend that policymakers conduct cost-benefit analyses instead of relying on standardized effect sizes, which provide incomplete guidance.

Interventions to Do Real-World Good: Generalization and Persistence

By C. Shawn Green, University of Wisconsin-Madison


In an accompanying commentary, C. Shawn Green (Department of Psychology, University of Wisconsin-Madison) discusses two criteria that interventions must meet to “do real-world good”: their impact should generalize reasonably broadly, and it should endure (i.e., persist). Green argues that although it is relatively easy to create interventions that improve learning of specific skills, such interventions do not produce real-world good because their effects do not generalize. Assessing persistence, in turn, requires long-term follow-up, as Bailey and colleagues also propose. Green suggests that (a) thinking in terms of postintervention trajectories can inform designs for persistence and (b) it is important to leverage methodology to better understand the mechanisms of fade-out.

What We Are Learning About Fade-Out of Intervention Effects: A Commentary

By Barbara Schneider and Lydia Bradford, Michigan State University


In another commentary, Barbara Schneider (College of Education and Department of Sociology, Michigan State University) and Lydia Bradford (Department of Counseling, Educational Psychology, and Special Education, Michigan State University) argue that the psychological perspective on educational interventions might lead researchers to overlook study designs in which fade-out can be adjusted for more easily (e.g., quasi-experimental designs with longitudinal samples) or naturally occurring treatments (e.g., the use of online instruction during a pandemic). The authors detail the problems that can arise when intervention designs rely on randomized controlled trials, including postintervention measurement issues. Going beyond Bailey and colleagues’ recommendations, they suggest that researchers also identify and measure mediators that can help to better pinpoint the causes of interventions’ effects.


