Members in the Media
From: Education Week

Understand Uncertainty in Program Effects

Education Week:

In education research, there's a drive to cut to the chase: What's the effect on the classroom? How much better will students perform on the state math test using this curriculum? How many months of classroom time can students progress by using that tutoring system? Education watchers usually answer those questions from a study's reported effect size, together with the p-value that signals whether the result is statistically significant. Yet at the annual conference of the Association for Psychological Science here this weekend, statistics professor Geoff Cumming of La Trobe University in Melbourne, Australia made a thoughtful, and pretty persuasive, argument that a single reported effect size is not as certain as it is often taken to be, and that showing the uncertainty around it gives a more accurate view of how interventions really work.
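The distinction between the two numbers is worth spelling out. Below is a minimal sketch, not drawn from the article, that computes both quantities from the same simulated test-score data; the group sizes, score scale, and assumed 30-point gain are invented purely for illustration.

```python
# A minimal sketch (not from the article) showing that effect size and
# p-value are different quantities computed from the same data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated test scores: hypothetical control and intervention groups.
control = rng.normal(loc=500, scale=100, size=60)    # assumed state-test scale
treatment = rng.normal(loc=530, scale=100, size=60)  # assumed 30-point true gain

# Effect size: standardized mean difference (Cohen's d).
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# p-value: two-sample t-test on the same data.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Cohen's d (effect size): {cohens_d:.2f}")
print(f"p-value (significance test): {p_value:.4f}")
```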

A p-value in statistics represents the probability of obtaining a result at least as extreme as the one observed if there were in fact no true effect, and generally the smaller it is, the stronger the evidence: a p-value of .05 or less is the conventional threshold for calling a result statistically significant. As Cumming explains in this demonstration, the effect measured in a given experiment can vary considerably from one replication to the next, and a single reported effect size doesn't usually convey that uncertainty.
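To see what that replication-to-replication variation looks like, here is a minimal sketch in the spirit of that demonstration; the true effect, sample sizes, and number of replications are assumed values, not Cumming's. The same two-group experiment is simulated repeatedly, and the observed effect size and p-value are printed for each run: some runs clear the .05 threshold and some do not, even though the underlying effect never changes.

```python
# A minimal sketch of repeated runs of the "same" experiment. All parameters
# below are assumed for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.4     # assumed true standardized effect
N_PER_GROUP = 32      # assumed sample size per group
N_REPLICATIONS = 20   # assumed number of replications

for i in range(N_REPLICATIONS):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)

    # Observed effect size and p-value for this replication.
    t_stat, p_value = stats.ttest_ind(treatment, control)
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    observed_d = (treatment.mean() - control.mean()) / pooled_sd

    print(f"replication {i + 1:2d}: observed d = {observed_d:+.2f}, p = {p_value:.3f}")
```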

Read the whole story: Education Week


