Placebo Effect Largely Ignored in Psychological Intervention Studies

Many brain-training companies tout the scientific backing of their products — the laboratory studies that reveal how their programs improve your brainpower. But according to a new report, most intervention studies like these have a critical flaw: They do not adequately account for the placebo effect.

The new analysis appears in Perspectives on Psychological Science, a journal of the Association for Psychological Science.

The results of psychological interventions, like medical ones, must be compared to improvements in a control condition, says University of Illinois psychology professor Daniel Simons, who co-wrote the article with Walter Boot, Cary Stothart and Cassie Stutts, of Florida State University. In a clinical trial for a new drug, some participants receive a pill with the critical ingredients, and others receive an identical-looking pill that is inert — a placebo. Because participants cannot tell which they received, people in each condition should be equally likely to expect improvements.

In contrast, for most psychology interventions, participants know what’s in their “pill,” Simons explains.

“It’s not possible to use a brain-training program for 10 hours without knowing the type of training you received,” he says. “People can form expectations for what will improve based on their experiences with the training tasks, and the existence of differences in expectations between people in treatment and control groups potentially undermines any claim that improvements were due to the treatment itself. Not one of the studies cited by the brain-training companies looks at differing expectations between the groups.”

Merely having an “active control group,” one that does something for the same amount of time as the treatment group, does not protect against the placebo effect, Simons says. A treatment group that completes an intensive memory-training regimen might expect improved performance on other cognitive tasks assessing memory. A control group that does crossword puzzles or watches DVDs for the same amount of time likely won’t expect the same amount of improvement on the same tasks, he explains.

“These problems are not limited to brain-training studies,” Simons says. “They hold true for almost all intervention studies.”

To illustrate the pervasiveness of this problem, the researchers examined expectations for improvement in studies of the effect of playing action video games on measures of perception and attention.

“Such studies find greater improvements in performance on attention and perception tasks after training with action video games than after training with non-action games for the same amount of time,” Boot explains. “However, even with this sort of active control condition, these interventions still are at risk for differential placebo effects.”

The researchers measured expectations in two survey studies involving 200 participants each. Participants watched either a short video of an action game (“Unreal Tournament”) or one of the games commonly used as controls in these studies (“Tetris” or “The Sims”). They then read descriptions of the cognitive tests used in the studies, watched short videos of the tests, and answered questions about whether they thought their performance on the tests would improve as a result of training on the video game they had viewed.

The results showed that expectations for improvement were greater for the action game than for the control games on exactly the same tests that showed bigger improvements for action-game training in the intervention studies. In fact, the researchers found, the pattern of expected improvements exactly matched the actual improvements seen in those studies.

“If expectations for improvement align perfectly with the actual improvements, then any claim that the treatment was effective is premature,” Simons says. “Researchers must first eliminate differences in expectations across conditions.”

“Even though participants in psychology interventions typically know the nature of their intervention — you can’t play a video game without knowing the game you’re playing — there are steps researchers can take to ensure that the advantages of the treatment group are not due to expectations,” Boot explains.

For example, researchers can mislead participants as to the expected benefits of a particular intervention, giving those in the control group higher expectations for improvement than those in the treatment group. If the treatment group still shows greater gains despite lower expectations, those gains cannot be attributed to the placebo effect.

Researchers also can assess the expectations generated by each condition in a separate sample of participants, to ensure that expectations do not differ between the intervention and control conditions.
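The article describes this as a design check rather than a specific statistical procedure, but as a minimal, hypothetical sketch, such an expectation comparison might look like the following Python example. The Likert-style ratings, sample sizes, and choice of Welch's t-test are illustrative assumptions, not details reported in the study.

```python
# Hypothetical sketch: comparing self-reported expectation ratings between
# a treatment condition and a control condition in a separate sample,
# before running the full intervention. Assumes 1-5 Likert responses to
# a question like "How much do you expect this training to improve your
# performance on this test?"
from scipy import stats

# Illustrative data (not from the study): one rating per participant.
treatment_expectations = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
control_expectations = [4, 4, 3, 4, 5, 3, 4, 4, 4, 3]

# Welch's independent-samples t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(
    treatment_expectations, control_expectations, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Caveat: a non-significant difference alone does not establish that
# expectations are equivalent; an equivalence test (e.g., TOST) or an
# adequately large sample would be needed to support that conclusion.
```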

“Although placebo effects can be helpful as well, we need to know what causes improvements in an intervention,” Simons says. “We don’t want to recommend new therapies, change school curricula, or encourage the elderly to buy brain-training games if the benefits are just due to expectations for improvement. Only by using better active controls that equate for expectations can we draw definitive conclusions about the effectiveness of any intervention.”

