When the fateful time for analysis arrives, we frequently turn to t-tests, ANOVAs, or Pearson product-moment correlations. These parametric statistics are ubiquitous in the behavioral sciences. In contrast, their nonparametric equivalents, such as the Mann-Whitney test, the Kruskal-Wallis test, and the Spearman correlation, are applied far less often (Cohen, 2008). This is unfortunate, because nonparametric statistics offer practical and statistical advantages over parametric approaches for many of the variables studied in psychological science.
What Are Nonparametric Statistics?
Nonparametric statistical analyses are used to investigate research questions in which the dependent variable is ranked or categorical rather than measured on a true numeric scale. Traditional parametric statistics require a number of assumptions about the characteristics (i.e., parameters) of the underlying population. Nonparametric statistics do not require the same assumptions, which makes them more flexible and, in some ways, more appropriate for broad application. Common psychological science variables are often non-normally distributed or consist of ranked, non-numerical responses (e.g., “somewhat true” versus “very true”), so the relaxed requirements of nonparametric statistics make them an important alternative to parametric methods. Beyond this flexibility, nonparametric statistics offer several concrete advantages.
Nonparametric Statistics Aren’t Bound By Pesky Assumptions
Among the primary assumptions of parametric statistics is that the data are normally distributed (for those who might need a refresher on their statistics terms, see www.statsoft.com/textbook/). Many researchers don’t explicitly check the assumptions of parametric tests. Moreover, most tests used to check assumptions (1) lack adequate power to detect deviations from normality or homogeneity of variance (Jaccard & Guilamo-Ramos, 2002) and (2) themselves rest on normality or homogeneity-of-variance assumptions, so their results are hard to interpret precisely when those assumptions are violated (Erceg-Hurn & Mirosevich, 2008). Issues such as these have led some statisticians to recommend against using these assumption-checking tests because of their high error rates (e.g., Glass & Hopkins, 1996). When these tests are used, researchers should interpret their results with caution.
Issues surrounding normality assumptions alone are quite complex. Some common psychological science variables, such as reaction times, tend to be positively skewed and therefore non-normal (Heiman, 2006). There are complications even for variables that are more likely to be normal. For instance, in small samples, there is no way to be sure that the normality assumption has been met (Hill & Lewicki, 2007). Even in large samples, in which a variable’s distribution may seem normal, the true population distribution may not be. And while many investigators believe that parametric statistics are robust to violations of their assumptions, research demonstrates that this holds in only a narrow range of situations (Erceg-Hurn & Mirosevich, 2008).
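The small-sample problem above can be illustrated with a short sketch (using Python’s scipy.stats, an assumption of this example; the sample data are simulated, not real). Even when a sample is drawn from a clearly skewed population, a normality test such as Shapiro-Wilk often lacks the power to flag it when n is small:

```python
# Minimal sketch: a normality test on a small sample drawn from a
# skewed (exponential) population. The data are simulated for
# illustration; scipy.stats is assumed to be installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
small_skewed = rng.exponential(scale=1.0, size=15)  # positively skewed, n = 15

stat, p = stats.shapiro(small_skewed)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")
# With n this small, p often exceeds .05 even though the population
# is exponential -- the test simply cannot detect the non-normality.
```

A non-significant result here means “not enough evidence of non-normality,” not “the data are normal,” which is why a passing assumption check offers weaker reassurance than it appears to.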
While parametric statistics require strict assumptions about underlying variable distributions, nonparametric statistics carry far fewer (Siegel, 1957). The main assumptions of most nonparametric tests are independent random sampling and, for some tests, a continuous underlying dependent variable; crucially, they do not require normality or homogeneity of variance.
Nonparametric Statistics, Transformations, And Power
When parametric statistics are appropriate to use, they have greater power than nonparametric statistics. However, researchers using parametric statistics frequently apply data transformations (e.g., logarithmic transformations) to try to make a skewed distribution approach normality. While these transformations can make variables more normally distributed, they can also diminish or alter experimental effects, which can reduce power. And even though parametric tests can withstand some deviation from their inherent assumptions, there is no consensus on what degree of violation is acceptable. When the data violate the assumptions of a parametric test, nonparametric tests become the more powerful analytic technique (Siegel, 1957). Finally, even when parametric assumptions are fully met, nonparametric statistics can often attain comparable power with only a modest increase in sample size (Siegel, 1957).
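A brief sketch of the transformation issue discussed above (simulated reaction-time data, scipy.stats assumed available): a log transform does pull in the right tail of a skewed distribution, but the analysis then operates on log units, so any subsequent comparison is of log-scale (geometric-mean-like) effects rather than the raw effects of interest.

```python
# Illustration with simulated, lognormally distributed "reaction
# times" (not real data): the log transform removes most of the
# skew, but changes the scale on which effects are measured.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rt = rng.lognormal(mean=6.0, sigma=0.5, size=200)  # skewed RTs, ms-like scale

raw_skew = stats.skew(rt)          # clearly positive
log_skew = stats.skew(np.log(rt))  # near zero after transform
print(f"skewness before transform: {raw_skew:.2f}, after: {log_skew:.2f}")
```

A rank-based test sidesteps this trade-off entirely: ranks are unchanged by any monotonic transformation, so no transformation decision is needed at all.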
Nonparametric Statistics Can Be Used For Ordinal-level Data
The level of measurement of a variable (nominal, ordinal, interval, or ratio) determines which statistical procedures are appropriate for analysis. In the behavioral sciences, variables of interest are often ordinal in nature. This is problematic because parametric statistics require variables to reach at least the interval level (Siegel, 1957). In contrast, nonparametric statistics can be used to analyze data at all levels of measurement. For example, Likert scales, a favorite tool in psychological research, are regularly analyzed as interval-level data with parametric tests. Likert scales are not interval-level data; they are ordinal scales, because participants rank-order their responses and the intervals between response options cannot be assumed equal. For instance, it is unclear whether the difference between “somewhat false” and “somewhat true” on a four-point response scale is the same as the difference between “somewhat true” and “very true.” The application of parametric analyses to ordinal data such as Likert scales is so pervasive that it has been called the first of “the seven deadly sins of statistical analyses” (Kuzon, Urbanchek, & McCabe, 1996).
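For ordinal data like the Likert responses above, a rank-based correlation is the natural fit. The sketch below (hypothetical four-point Likert responses coded 1-4, analyzed with scipy.stats) uses Spearman’s rho, which depends only on the rank order of responses and so never assumes the categories are equally spaced:

```python
# Hypothetical responses to two Likert items, coded
# 1 = "very false" ... 4 = "very true" (invented data for illustration).
from scipy import stats

item_a = [1, 2, 2, 3, 4, 4, 3, 1, 2, 4]
item_b = [1, 1, 2, 3, 4, 3, 4, 2, 2, 4]

# Spearman's rho ranks the responses first, so recoding the scale with
# any order-preserving relabeling would leave the result unchanged.
rho, p = stats.spearmanr(item_a, item_b)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

Pearson’s r on the same codes would implicitly treat the step from 1 to 2 as identical to the step from 3 to 4, which is exactly the interval-level assumption the passage above questions.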
A Variety Of Nonparametric Tests Can Be Used
There are many different types of nonparametric tests that can be used to analyze data. For two independent samples, the Mann-Whitney U test (equivalently, the Wilcoxon rank-sum test) can be applied. When two samples are matched, or a participant is assessed twice, the Wilcoxon signed-ranks test can be performed. With three or more groups, the Kruskal-Wallis test is appropriate for independent samples, while Friedman’s test is appropriate for repeated measures or randomized blocks. To conduct post-hoc analyses for either of these nonparametric ANOVAs (the Kruskal-Wallis and Friedman tests), the Mann-Whitney U test and Wilcoxon signed-ranks test can be used, respectively. The correlation test for ordinal data is the Spearman rank-order correlation and, for nominal data, it is the contingency coefficient (Cohen, 2008; Siegel, 1957). These tests are the nonparametric counterparts of the independent-samples t-test, matched t-test, one-way independent ANOVA, repeated-measures ANOVA, protected t-test for post-hoc analyses, and Pearson product-moment correlation, respectively.
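Every test named above is available in Python’s scipy.stats under a predictable name, so the whole menu can be sketched in a few lines (the three groups below are simulated placeholder data, not a real study):

```python
# Mapping the tests above to scipy.stats functions:
#   Mann-Whitney U            -> mannwhitneyu
#   Wilcoxon signed-ranks     -> wilcoxon
#   Kruskal-Wallis            -> kruskal
#   Friedman's test           -> friedmanchisquare
#   Spearman rank correlation -> spearmanr
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1, g2, g3 = (rng.normal(loc=m, size=20) for m in (0.0, 0.5, 1.0))

u, p_u = stats.mannwhitneyu(g1, g2)             # two independent samples
w, p_w = stats.wilcoxon(g1, g2)                 # two matched / paired samples
h, p_h = stats.kruskal(g1, g2, g3)              # 3+ independent groups
chi, p_f = stats.friedmanchisquare(g1, g2, g3)  # 3+ repeated measures
rho, p_r = stats.spearmanr(g1, g2)              # ordinal correlation

print(f"Mann-Whitney U = {u:.1f} (p = {p_u:.3f}); Kruskal-Wallis H = {h:.2f} (p = {p_h:.3f})")
```

Note that `wilcoxon` and `friedmanchisquare` require equal-length samples, matching their role as repeated-measures tests.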
When Should Nonparametric Statistics Be Used?
After conducting a study, the question ultimately arises: should you use a parametric or a nonparametric analysis? Keep the following questions in mind when deciding which is appropriate for your data: Are your variables normally distributed? Is there homogeneity of variance? Are the response options on your survey actually the same distance apart? If you answered no to any of these, your data would be best analyzed with a nonparametric technique. Likewise, if you answer yes to the following, a nonparametric test should be used: Are participants assessing stimuli or manipulations using a Likert scale? Do the response options range from strongly disagree to strongly agree?
In summary, there is much to gain from a more widespread application of nonparametric statistics in psychological science. Nonparametric statistics are useful when the assumptions of parametric tests are violated, when transformations would otherwise be needed, and when variables are measured at the ordinal level or below. Broader use of these nonparametric tools can help ensure proper data analysis.
References and Further Reading:
Cohen, B.H. (2008). Explaining psychological statistics (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Conover, W.J., & Iman, R.L. (1981). Rank transformations as a bridge between parametric and nonparametric statistics. The American Statistician, 35, 124–129.
Erceg-Hurn, D.M., & Mirosevich, V.M. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist, 63, 591–601.
Glass, G.V., & Hopkins, K.D. (1996). Statistical methods in education and psychology (3rd ed.). Boston, MA: Allyn & Bacon.
Heiman, G.W. (2006). Basic statistics for the behavioral sciences (5th ed.). Boston, MA: Houghton Mifflin.
Hill, T., & Lewicki, P. (2007). Statistics methods and applications. Tulsa, OK: StatSoft.
Jaccard, J., & Guilamo-Ramos, V. (2002). Analysis of variance frameworks in clinical child and adolescent psychology: Advanced issues and recommendations. Journal of Clinical Child Psychology, 31, 278–294.
Kuzon, W.M., Jr., Urbanchek, M.G., & McCabe, S. (1996). The seven deadly sins of statistical analyses. Annals of Plastic Surgery, 37, 265–272.
Siegel, S. (1957). Nonparametric statistics. The American Statistician, 11, 13–19.