PSPI Reports: Effective Study Techniques, Power of Misinformation

Elizabeth J. Marsh, Duke University, shares her team’s research showing that some widely used study techniques are ineffective.

While effective learning strategies are integral to improving student outcomes, many students’ favored learning techniques flunk the test. That was the verdict from Elizabeth J. Marsh of Duke University, as she presented her research team’s findings at the fifth annual Psychological Science in the Public Interest (PSPI) Symposium at this year’s Annual Convention. The event was hosted by James McKeen Cattell Fellow Award recipient and PSPI editor Elaine F. Walker.

Another recent PSPI author presenting at the symposium was Stephan Lewandowsky of the University of Western Australia and the University of Bristol in the United Kingdom. He delved into the issue of misinformation and people’s responses to it.

Marsh was part of the team that authored the high-profile report released earlier this year, “Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology.” She described the questions her team used to evaluate 10 learning techniques and discussed some of the evidence behind two techniques the team recommends and one it found less useful than many people believe. Students aren’t the only ones using questionable techniques: of the practices educators are taught, most require extensive training, work only with certain types of material, or simply are not backed by much evidence. Only two stood up to the team’s scrutiny: practice testing and distributed practice.

As Marsh elaborated on the evidence behind practice testing, she shared work from her own lab on multiple-choice tests, which have the drawback of exposing students to wrong answers (more on the effect of that kind of misinformation below). After taking an initial multiple-choice test, half of the participating second-graders received feedback on their answers and half did not. On a later test that mixed previously tested questions with new ones, students reproduced more of the wrong answers they had seen earlier, except when they had received feedback after the initial test. The point, Marsh noted, is that there are no simple answers when it comes to learning strategies; we have to think about when and how to use them.

Even highlighting — one of those debunked student favorites — merits a more thoughtful approach than outright dismissal. Although the bulk of the literature shows no benefit, it is awfully hard to convince students to put down their neon markers. Lead author John Dunlosky recommends that students use highlighting in combination with other strategies known to be helpful, Marsh noted.

Stephan Lewandowsky, University of Bristol, United Kingdom, and University of Western Australia, has examined why people continue to believe information that turns out to be false.

As for whether the recommended techniques work in a “noisy environment,” Marsh and her colleagues have recently studied practice testing and distributed practice in college engineering classes. The improvement shown by students in the experimental condition speaks to the success of these techniques even with highly complex material.

These simple, easy, and cheap changes in study techniques have small but real effect sizes, Marsh said, and a bigger effect on achievement than some popular targets of reform, such as class size. We need to think about improving techniques that are already widely used, changing them a little to make them better. We also need to consider students’ and educators’ beliefs when making recommendations, as well as who will implement the strategies: students, instructors, test publishers, or online learning developers. And as scientists, we need to evaluate learning strategies in studies that use complex materials and measure whether learning transfers.

Lewandowsky gave an overview of his team’s work on “Misinformation and Its Correction: Continued Influences and Successful Debiasing.” His team looked at misinformation in society, its sources and scope, and whether it actually matters. They also examined misinformation at the level of basic cognition: why people believe things, why they continue to believe things that turn out to be false, and how to help people give up beliefs that are not in line with the facts. In his presentation, Lewandowsky discussed prime examples of misinformation that drew considerable buy-in from certain segments of the public despite scientific evidence to the contrary: the widespread beliefs that vaccinations are linked to autism, that Iraq possessed weapons of mass destruction (WMDs), and that scientists still lack consensus on climate change.


Elaine F. Walker, Emory University, editor of Psychological Science in the Public Interest, chaired the PSPI symposium.

“Misinformation is widespread,” said Lewandowsky. “We can sometimes quantify it. It can have a discernible effect, and relatively subtle variables such as mismatching headlines and blog comments can have a measurable effect on people’s comprehension.”

In the most striking example, Lewandowsky presented data from Sweden, where, during the growth of a vocal anti-vaccination movement, uptake of the vaccine for whooping cough decreased dramatically and incidence of the disease increased. At the same time, the incidence decreased in Norway, where there was no such movement. The huge gap between perception and scientific reality, said Lewandowsky, “translates into dead children.”

Lewandowsky concluded that people cling to misinformation even when they remember a correction, and he added that corrections can backfire. When people are presented with a “facts vs. myths” flyer about vaccines, for example, they may come to believe the myths even more strongly after some time. Or consider beliefs that are split along partisan lines, such as the WMD myth: a threat to one’s worldview can cause people to dig in their heels, counterargue, and reject the correction.

So how can misinformation be corrected? Research suggests that debiasing is possible. One must provide a correct alternative to fill the gap left by the debunked information, encourage skepticism by offering an alternative explanation, and affirm people’s basic values to lessen the threat to their worldviews. And when trying to correct a false belief, one should state the facts affirmatively rather than repeating the myth.

Comments

I found giving my undergrad statistics students (mostly Psych majors) a practice test very effective. I started this approach with the first test because I knew many of the students had not taken a math test in years, and I wanted them to get over that initial anxiety before the real test, not during it. I soon found that the practice test helped them realize what they didn’t know (even though I personally graded their homework for every class). Giving a short, 30-minute test with easier problems than the real one, and going over the material afterward, helped the students in many ways. They didn’t all get A’s or B’s (about a third were given the opportunity to drop), but the practice test was a straightforward way of letting them know what I expected them to know.

