Observation

Statistics Organization Speaks Out on P-Values

As psychological scientists continue efforts to improve statistical and methodological practices, they can turn to a new resource for guidance. The American Statistical Association (ASA) has released a new statement on the use of p-values in science. The statement suggests researchers should be wary of statistical claims based on p-values alone.

According to the ASA, there is an over-reliance on the p-value in scientific reasoning. Many students of psychological science are taught that obtaining a significant result, p < .05, is a “golden ticket” to publication. Likewise, the scientific community too frequently rewards studies with significant p-values without considering the validity of other aspects of those studies. These common attitudes may be partly to blame for issues of replicability in science.

“We hoped that a statement from the world’s largest professional association of statisticians would open a fresh discussion and draw renewed and vigorous attention to changing the practice of science with regards to the use of statistical inference,” said ASA Executive Director Ron Wasserstein, who organized the statement.

The statement — the first position on statistical practice ever taken by the association — provides a list of six principles that producers and consumers of scientific research should consider when evaluating p-values. These six principles are:

  1. P-values can indicate how incompatible the data are with a specified statistical model.
  2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
  3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
  4. Proper inference requires full reporting and transparency.
  5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
  6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.

Taken together, these principles mean that statistical support for a theory or model requires evidence beyond a single metric like the p-value. It follows that whether a research study succeeded or failed does not hinge on a single p-value; likewise, a journal editor’s decision to accept or reject a manuscript should not rest solely on the reported p-value. The ASA statement suggests that scientists, funders, journalists, and others should evaluate the persuasiveness of the statistical argument as a whole.
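
To make principle 5 concrete, the short simulation below (an illustrative sketch, not part of the ASA statement) compares a negligible true group difference tested with a very large sample against the same difference tested with a small sample. The sample sizes, true difference, and random seed are arbitrary choices for the demonstration: with enough data, even a trivially small effect crosses p < .05 while the standardized effect size stays near zero.

```python
# Illustrative sketch of principle 5 (not drawn from the ASA statement itself):
# a p-value does not measure the size of an effect. The sample sizes,
# true difference, and seed below are hypothetical choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_and_effect_size(true_diff, n):
    """Simulate two groups with a given true mean difference and per-group
    sample size; return the two-sample t-test p-value and Cohen's d."""
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=true_diff, scale=1.0, size=n)
    res = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd  # standardized effect size
    return res.pvalue, d

# A negligible true difference with a huge sample: p falls far below .05,
# yet Cohen's d remains near zero.
p_big, d_big = p_and_effect_size(true_diff=0.02, n=200_000)

# The same negligible difference with a modest sample: p is typically
# nowhere near significance, although the underlying effect is identical.
p_small, d_small = p_and_effect_size(true_diff=0.02, n=200)

print(f"n = 200,000 per group: p = {p_big:.2e}, Cohen's d = {d_big:.3f}")
print(f"n = 200 per group:     p = {p_small:.2f}, Cohen's d = {d_small:.3f}")
```

The contrast illustrates the ASA’s point: a small p-value here reflects the sample size, not a meaningful effect, which is why statistical arguments need more than a single number.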

“The p-value was never intended to be a substitute for scientific reasoning. Well-reasoned statistical arguments contain much more than the value of a single number and whether that number exceeds an arbitrary threshold. The ASA statement is intended to steer research into a ‘post p < 0.05 era,’” said Wasserstein.

The ASA statement coincides with the announcement of APS’s new committee of statistical advisors, which is helping Psychological Science editors evaluate statistics and methods in journal submissions.

Interested readers can view the complete ASA statement, along with a short explanation of each of the six principles, in the full article published in the ASA journal The American Statistician.

 

