Understand Uncertainty in Program Effects

Education Week:

In education research, there’s a drive to cut to the chase: What’s the effect on the classroom? How much better will students perform on the state math test using this curriculum? How many months of classroom time can students progress by using that tutoring system? Usually education watchers make that interpretation based on a study’s effect size and the p-value that accompanies it, two figures that are often conflated. Yet at the annual conference of the Association for Psychological Science here this weekend, statistics professor Geoff Cumming of La Trobe University in Melbourne, Australia, made a thoughtful, and pretty persuasive, argument that a single reported effect size is not as certain as it is often taken to be, and that considering more complexity can give us a more accurate view of how interventions really work.
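To make that distinction concrete, here is a minimal Python sketch, using simulated test-score data with invented group sizes, means, and standard deviations, that computes both an effect size (Cohen's d) and a p-value from the same two groups. The two numbers answer different questions: one describes how big the difference is, the other how surprising it would be under no effect.

```python
# Illustration only: the group sizes, means, and standard deviations below
# are assumptions, not values from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated state-test scores: the treated group scores ~3 points higher on average.
control = rng.normal(loc=500, scale=30, size=60)
treated = rng.normal(loc=503, scale=30, size=60)

# Effect size (Cohen's d): the size of the difference in standard-deviation units.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# p-value: how surprising this difference would be if there were truly no effect.
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"Cohen's d = {cohens_d:.2f}, p-value = {p_value:.3f}")
```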

A p-value in statistics represents the probability of obtaining a result at least as extreme as the one observed if the intervention actually had no effect, and generally the smaller it is, the better: a p-value of .05 or less is usually required for a result to be considered statistically significant. The effect size, by contrast, describes how large the measured difference is. As Cumming explains in this demonstration, the results of a given experiment, including both the estimated effect and its p-value, can vary substantially with every repetition, and the single “official” figure reported in a study doesn’t usually show that uncertainty.
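Cumming's point about replication can be shown with a short simulation, a minimal sketch assuming a fixed true effect, invented sample sizes, and an independent-samples t-test: running the identical experiment over and over produces p-values that swing widely, even though nothing about the underlying intervention changes.

```python
# Sketch of the "dance of the p-values": the true effect, sample size, and
# number of replications are assumed values chosen for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.4    # fixed true standardized effect (Cohen's d)
n_per_group = 40
replications = 20

p_values = []
for _ in range(replications):
    # The "same" experiment, repeated: only sampling variation differs.
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    p_values.append(p)

# The identical intervention yields p-values ranging from "highly significant"
# to clearly non-significant, purely through chance in who was sampled.
print([round(p, 3) for p in p_values])
print(f"fraction below .05: {np.mean(np.array(p_values) < 0.05):.2f}")
```

Running this shows why a single study's p-value, on its own, is a shaky basis for declaring how well an intervention "really" works.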

Read the whole story: Education Week
