Presidential Column

What’s Fair to Compare?

Who is Number 1? We are fascinated with this question. What is the best pizza, the best movie, the best TV show? In sports, winning and losing are part of the game, and in most sports a Number 1 team is usually unmistakable. For universities and psychology departments, the answer is not so clear, but still we have endless single-score rankings that set up a sports-like winner-loser competition. Many of these rankings are spurious, based on nothing but personal opinion or on samples of opinion so small as to be ludicrous. Others claim to be based on data; most notable among these are the U.S. News & World Report rankings and the National Research Council's rankings of graduate programs. But even these have significant problems. U.S. News changes its criteria and weightings every year, the cynics say, so that the rankings change and the magazines sell. The National Research Council includes only some programs and updates its rankings infrequently.

Unlike the multi-factor journal rankings on Page 1 of this issue, which do make sense, single-score ranking is the wrong approach for universities and psychology departments. Quality has many dimensions, and no single number can reflect everything a university or department does. An alternative is to compare universities on several dimensions at once. At the University of Florida, we tried this method using nine measures.

Recognizing that research is a critical dimension of quality, we chose research volume as one indicator, reflected in total research-and-development expenditures and federal research dollars as reported by the National Science Foundation. As indicators of faculty quality, we used the number of arts and humanities awards (Guggenheims, Fulbrights, etc.) and the number of National Academy of Sciences members. To indicate the quality of the undergraduate student body, we counted National Merit Scholars. We chose the number of PhDs awarded and the number of postdoctoral students to indicate strength in post-baccalaureate education. Size of endowment and volume of annual giving reflect private support.

On these basic dimensions, more than 50 public universities rank in the top 25 on at least one measure, but only four rank in the top 25 on all nine (Berkeley, University of North Carolina-Chapel Hill, University of Washington, and UCLA). Nine rank in the top 25 on eight of the nine measures (Michigan, Wisconsin, Texas A&M, Minnesota, Ohio State, Illinois, Arizona, University of Texas-Austin, and Florida). Another five rank in the top 25 on seven of the measures, five more on six, and three on five, completing the group of top public universities (see http://thecenter.ufl.edu/ for the groupings).

For private universities, 13 rank in the top 25 on all measures (Johns Hopkins, MIT, Stanford, Harvard, University of Pennsylvania, Washington University, Duke, USC, Yale, Northwestern, NYU, and Chicago). Five more rank in the top 25 on eight measures, one on seven, two on six, and three on four, completing the grouping of top private universities. This method produces groupings of universities rather than a single ranking, and it provides a way to analyze a university's strengths. For example, in the second group of public institutions, Michigan is Number 1 in total research, seventh in faculty National Academy members, and 26th in National Merit and National Achievement Scholars. Even top universities do not hold top rankings on all measures, because universities differ in their academic profiles and focus.
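To make the grouping procedure concrete, here is a minimal sketch in Python. The institutions and ranks below are invented for illustration, not the actual data from thecenter.ufl.edu; the idea is simply to count, for each institution, how many measures place it in the top 25, and then to group institutions by that count.

    from collections import defaultdict

    TOP_N = 25  # a measure "counts" if the institution ranks 25th or better on it

    # Hypothetical ranks on nine measures (illustrative only, not real data).
    ranks = {
        "University A": [3, 12, 25, 8, 19, 2, 14, 22, 11],
        "University B": [30, 7, 18, 41, 9, 25, 16, 33, 5],
        "University C": [55, 60, 21, 48, 52, 39, 44, 61, 57],
    }

    # Group institutions by the number of measures on which they make the top 25.
    groups = defaultdict(list)
    for university, measure_ranks in ranks.items():
        count = sum(1 for r in measure_ranks if r <= TOP_N)
        groups[count].append(university)

    # Report groupings from strongest (top 25 on the most measures) downward.
    n_measures = len(next(iter(ranks.values())))
    for count in sorted(groups, reverse=True):
        print(f"Top 25 on {count} of {n_measures} measures: {', '.join(groups[count])}")

Note that the output is a set of tiers rather than a single ordered list, which is exactly what distinguishes this approach from a single-score ranking.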

If we were to apply a similar analysis to psychology departments, what measures could we use? Some measures for which data are available include: quality of undergraduate majors, as shown by SAT scores and high school grade point average (GPA); quality of graduate students, as reflected in undergraduate GPA and GRE scores; sponsored research expenditures; and citations of journal articles. If we collected these and other data, psychology departments would know how they compare with one another and could measure their improvement over time. Moreover, psychologists, not outside agencies, would produce the data and the groupings of departments. We could also use the same method to compare programs within psychology departments; that would be more useful within the field for graduate students and faculty, but less useful for university administrators, undergraduates, and the public.

