# Can the Weak Link in Psychological Research be Fixed?

My impression of psychological research is that it is conducted by bright, well-trained individuals armed with millions of dollars in research funds, and that their work is producing massive amounts of data relevant to a wide range of important and interesting issues. There is, however, a component of this process that has become, over the years, the weak link in our effort to understand psychological phenomena: data analysis.

A half century ago, there was a reasonably small gap between cutting-edge methods for analyzing data and the methods used in psychological research. For a variety of reasons, however, this gap has widened tremendously, particularly during the last twenty years. To put it simply, all of the hypothesis-testing methods taught in a typical introductory statistics course, and routinely used by applied researchers, are obsolete; there are no exceptions. Hundreds of journal articles and several books point this out, and no published paper has offered a counterargument as to why we should continue to be satisfied with standard statistical techniques. These standard methods include Student's t for means, Student's t for making inferences about Pearson's correlation, and the ANOVA F, among others.

My comments are not meant as an indictment of all published studies. If groups of participants do not differ in any way, meaning that they have identical distributions (so in particular they have equal means, variances, and skewness), standard methods are just fine; that is, they control the probability of a Type I error. And when studying associations among a collection of variables, standard methods perform well when the variables are independent.

It is when groups differ in some manner, or variables are dependent, that extremely serious practical problems arise. If the goal is to collect data without discovering true differences or true associations, or to characterize poorly how groups differ and how variables are related, then by all means stick with standard statistical techniques.
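The contrast can be made concrete with a small simulation. The sketch below is my own illustration, not from the article (it assumes Python with numpy and scipy): it estimates the actual Type I error rate of the classic pooled-variance Student's t in two scenarios — identical normal distributions, and equal means but unequal variances.

```python
# Illustrative simulation: Student's two-sample t controls the Type I error
# rate when the two groups have identical distributions, but not when the
# groups differ only in variance (means equal, so the null about means holds).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reps, alpha = 10_000, 0.05

def type1_rate(sd2, n1=40, n2=10):
    """Proportion of rejections when the null hypothesis (equal means) is true."""
    rejections = 0
    for _ in range(reps):
        g1 = rng.normal(0, 1, n1)
        g2 = rng.normal(0, sd2, n2)      # same mean; spread controlled by sd2
        _, p = stats.ttest_ind(g1, g2)   # classic pooled-variance Student's t
        rejections += p < alpha
    return rejections / reps

print("identical distributions:", type1_rate(sd2=1))  # close to the nominal .05
print("unequal variances:     ", type1_rate(sd2=4))   # well above .05
```

With the smaller group drawn from the more variable population, the pooled-variance test rejects far more often than the nominal 5 percent, even though the population means are exactly equal.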

It is not my intention here to explain why standard methods fail in applied psychological research; nontechnical explanations can be found in Wilcox (2001), and there are now many modern methods aimed at addressing these concerns (e.g., Wilcox, 1997, in press). Rather, my goal is to point out some roadblocks to achieving change, in the hope that more influential individuals might be able to address these problems in an effective manner.

## Main Concerns

A serious problem with conventional techniques is that, under general circumstances, they use the wrong standard error. This can result in poor power, even under normality, and some problems persist no matter how large the sample sizes might be (e.g., Cressie & Whitford, 1986). Switching to conventional nonparametric methods does not avoid this problem, but again, there are effective ways of dealing with this issue (e.g., Brunner, Domhof & Langer, 2002; Cliff, 1996; Wilcox, in press).
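To see what "wrong standard error" means in the simplest case, consider the pooled estimate that Student's t uses for the standard error of a difference between two means. The sketch below is my own illustration (assuming Python with numpy): under unequal variances and unequal sample sizes, it compares the pooled estimate with the actual variability of the mean difference, and with the unpooled (Welch-style) estimate.

```python
# Illustrative simulation: with unequal variances and unequal sample sizes,
# the pooled standard error behind Student's t misestimates the true standard
# error of the mean difference; the unpooled (Welch) estimate does not.
import numpy as np

rng = np.random.default_rng(1)
n1, n2, sd1, sd2, reps = 40, 10, 1.0, 4.0, 20_000

diffs, pooled_se, welch_se = [], [], []
for _ in range(reps):
    g1 = rng.normal(0, sd1, n1)
    g2 = rng.normal(0, sd2, n2)
    diffs.append(g1.mean() - g2.mean())
    # pooled-variance estimate used by the classic two-sample t
    sp2 = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    pooled_se.append(np.sqrt(sp2 * (1 / n1 + 1 / n2)))
    # unpooled (Welch) estimate
    welch_se.append(np.sqrt(g1.var(ddof=1) / n1 + g2.var(ddof=1) / n2))

print("true SE of the difference:", np.std(diffs))      # about sqrt(1/40 + 16/10)
print("average pooled SE:       ", np.mean(pooled_se))  # substantially too small
print("average Welch SE:        ", np.mean(welch_se))   # close to the true value
```

Because the pooled estimate is roughly half the true standard error here, the classic t statistic is systematically too large, which is one route to the inflated Type I error rates noted above.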

If modern technology has so much to offer, why is it that most applied researchers don’t take advantage of it? The remainder of this article outlines my own observations relevant to this question in the hope that raising these issues helps our profession as a whole.

**Commercial software.** Several factors, taken together, make it difficult to change the status quo. The first is popular commercial software, which is both a blessing and a curse. It is a blessing for obvious reasons, but it is a curse because applied research is limited by the reluctance of commercial enterprises to modernize the point-and-click methods they provide. Without easy-to-use software, modern methods are inaccessible to most applied researchers, meaning that antiquated methods are often the only options for many psychologists. Years ago, one of my students asked a representative from a well-known software company why they do not add modern methods. His response: 'We are aware of the problem and have no plans to correct it.'

The reason for this attitude is unclear, but a guess is that the explosion in the number of modern methods may be too daunting for commercial companies with an eye on the bottom line, so we pay the price in our inability to get the most out of our data. It seems we must address this problem ourselves, meaning we need to take steps to provide relatively easy-to-use software.

**Standard intro course.** The second general problem we face stems from what has become the standard introductory course. There are, of course, basic principles that must be covered, and conventional methods need to be taught, one reason being that they continue to be used. The problem is that the vast majority of introductory books (and some advanced books too) ignore the advances and insights from the last half century. The result is an unmistakable, albeit implied, message that important advances in statistical methods ceased circa 1955. Of course, this is incorrect. Moreover, textbooks reinforce the impression that the methods in commercial software perform well. Under the circumstances, why would any applied researcher bother to check with a statistician or a quantitative psychologist?

I hasten to add that I do not intend to be overly critical of authors of standard textbooks. I am aware that some of these authors are cognizant of the problem we face and why standard statistical methods fail, but there are pressures that hinder change. To elaborate a bit, some years ago I thought it might be possible to improve data analysis by writing an undergraduate introductory book that at least touched on modern insights. I submitted two chapters for review, one of which described why serious practical problems arise when using Student’s T. I then described one of the simpler attempts at correcting these problems and indicated that there are even better techniques but that they are best left for a more advanced course. One referee was very positive and enthusiastic, but another argued vehemently that the book should not be published because it would confuse the instructor. A third referee was less negative but essentially echoed this view.

**‘Anyone can teach stats.’** This brings me to the third barrier to achieving change: There seems to be a common attitude that almost anyone can teach a statistics course. The instructors I know are intelligent and very capable, but it is clear that many of them are too busy with their own area of expertise to keep up with advances in statistics. The reality is that keeping up with advances demands a fair amount of effort, and we should not expect most non-quantitative psychologists to address this problem any more than we would expect a pediatrician to be a neurosurgeon in her spare time.

A few instructors I know are aware of advances in statistics but cannot imagine saying anything about them to students, in the belief that the students wouldn’t understand anyway. But we must face the problem of improving instruction if we want psychology to take advantage of modern technology. I know instructors who deal with this problem instead of avoiding it. They show that it can be done, but bringing about meaningful change seems to be just beyond our reach, at least for the moment.

**Disciplinary attitudes.** Finally, attitudes I’ve encountered from non-psychologists, primarily statisticians, seem rather telling. One of the individuals who reviewed my book published in 2001 was clearly a statistician. His reaction was that a nontechnical book that tries to explain modern insights related to fundamental principles is a waste of time; in essence, applied researchers are generally hopeless. Under this logic, statisticians are willing to use modern methods, but applied researchers will never avail themselves of what these methods have to offer. I refuse to accept this, which is one motivation for this article. Also, it seems to me that the lines of communication have virtually broken down between statistics and psychology, partly because each is absorbed in its own disciplinary enterprise, and partly because each views the other as isolationist.

We spend millions of dollars collecting data. Surely a reasonable policy is to get the most out of what the data might tell us. So can the weak link in psychological research be fixed? I would argue that the answer has to be yes. In fact, we have quantitative journals aimed at bridging the gap, plus editors of some prestigious applied journals are aware of this issue and have an interest in doing something about it. So there is hope that at least some subsets of psychological research are moving ahead. But it seems that more needs to be done. Specifically, we need to develop comprehensive strategies focused on eliminating the existing technical and “cultural” barriers that hinder the adoption of modern statistical techniques in applied research in psychology.

## References

Brunner, E., Domhof, S., & Langer, F. (2002). Nonparametric Analysis of Longitudinal Data in Factorial Experiments. New York: Wiley.

Cliff, N. (1996). Ordinal Methods for Behavioral Data Analysis. Mahwah, NJ: Erlbaum.

Cressie, N. A. C., & Whitford, H. J. (1986). How to use the two-sample t-test. Biometrical Journal, 28, 131-148.

Tukey, J. W. (1960). A survey of sampling from contaminated distributions. In I. Olkin et al. (Eds.), Contributions to Probability and Statistics. Stanford, CA: Stanford University Press.

Westfall, P. H., & Young, S. S. (1993). Resampling-Based Multiple Testing. New York: Wiley.

Wilcox, R. R. (1997). Introduction to Robust Estimation and Hypothesis Testing. San Diego, CA: Academic Press.

Wilcox, R. R. (2001). Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy. New York: Springer.

Wilcox, R. R. (in press). Applying Contemporary Statistical Techniques. San Diego, CA: Academic Press.

*Observer*, Vol. 15, No. 4, April 2002
