Assessing institutional and individual scientific research and educational productivity is a daunting task, one that recently consumed an issue of US News and World Report and 750 pages of tables and text compressed into a five-pound mammoth report by the National Research Council (NRC). (See the story on page 1 on the NRC's Research-Doctorate Programs in the United States: Continuity and Change.) One component of the NRC's massive survey of doctoral training programs was the publishing-related productivity and influence of faculty. The present article takes a closer, more detailed look at such publication and bibliographic citation data to produce a separate set of institutional rankings, based on the 25 most-cited psychology publications during the period 1990 through 1994. Decide for yourself what data you want to rely on in your own assessment of departments of psychology, or use both sets of analyses to guide your evaluation or to confirm your instincts about individual departments!
The article below analyzes the field of psychology from 1990-1994, as it was represented by articles in scholarly psychology journals indexed by the Institute for Scientific Information (ISI) during that five-year period. Publication and citation measures are used to discriminate top performers in terms of papers, institutions, and individual researchers. These are but two of the many measures that can be used to rate institutional and individual productivity. (See the September 1995 Observer for a less extensive ranking that was based only on the 100 top-cited papers rather than all psychology papers published each year during 1981-94.) – Editor
Several years ago, the Observer published a series of three brief articles titled “A Citationist Perspective on Psychology” by Eugene Garfield, founder and chairman emeritus of ISI. In that series, Garfield highlighted the most-cited papers, institutions, and authors represented in psychology journals published from 1986-1990 (see November 1992 Observer). The present article updates Garfield’s study and covers psychology papers indexed by ISI from 1990-1994 and cited through the end of 1994.
What do we mean by “psychology”? In this case, as with the 1992 study, a paper was taken to be a psychology paper if it appeared in one of some 300 journals listed in the psychology subsection of Current Contents/Social & Behavioral Sciences. These titles represent research in all fields of psychology, including applied, clinical, developmental, educational, experimental, and social, among others.
A total of 57,561 psychology papers were surveyed. Only articles, reviews, notes, and proceedings papers were examined; editorials, letters to the editor, meeting abstracts, and other miscellanies were excluded, as was the case in the 1992 study.
Very few items in any population of articles are highly cited. They are, statistically speaking, rare events. Of the 57,561 papers examined in this study, only 15 were cited 100 times or more, only 114 were cited 50 times or more, and only 740 were cited 25 times or more. (Naturally, papers published early in the five-year period 1990-1994 had more time to accrue citations than those published near the end of the period.) In all, the 57,561 papers were cited a total of 159,193 times, for a per paper citation rate of 2.76. This compares with the 1986-1990 citations-per-paper rate for psychology papers of 1.89.
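The aggregate figures above imply a simple calculation. As a minimal sketch, the article's reported totals can be plugged in directly (the numbers below are hard-coded from the text, not recomputed from the underlying data):

```python
# Aggregate figures reported in the article for the 1990-1994 window.
total_papers = 57_561
total_citations = 159_193

# Citations per paper: total citations divided by total papers.
rate = total_citations / total_papers

# The 1986-1990 rate reported for comparison.
previous_rate = 1.89

# The article reports 2.76; straight division gives about 2.77,
# so the published figure appears to be truncated rather than rounded.
print(f"{rate:.2f}")
print(rate > previous_rate)  # True: the per-paper rate rose between periods
```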
Listed in Table 1 are the 25 psychology papers, published from 1990-1994, that received 90 or more citations by the end of 1994. They are listed in descending order by number of citations. Observer readers will likely reach their own conclusions about the substance and content of these papers. But here, for each of these 25 highly cited reports, we provide the number of citations, bibliographic reference, and institutional affiliation of the author(s).
A total of 12 journals and 30 institutions are represented in these 25 papers. Only two author names (C.C. DiClemente and J.O. Prochaska) appear on more than one paper in the group.
Institutional Rank by Impact
By summarizing the publication and citation data for the entire corpus of psychology papers surveyed (57,561 papers during 1990-1994), the rankings of institutions shown in Table 2 were obtained. The table lists institutions by rank in terms of the number of psychology papers published by authors affiliated with the specified institution.
The ranking of institutions by output (number of papers) and the ranking by influence (number of citations) produce, for the most part, the same set of players, since the measure of total citations is roughly dependent on the quantity of publications being cited. That is, the more papers an institution turns out, the greater the chance its papers have of being cited. Still, a few names appear in the total-citations ranking that fail to appear in the output ranking, such as New York Univ., Carnegie Mellon Univ., Univ. of Oregon, Univ. of Arizona, and Univ. of Southern California.
These two types of rankings (output and citations) tend to produce lists of institutions familiar to scientists in a given field, since our perceptions of “top,” “best,” and “leading” are often highly colored by sheer size, whether measured by number of faculty, number of graduate students, or amount of funding received for research. The ranking by impact (citations per paper), on the other hand, often produces surprising results. In Table 4, we see the names of institutions that would never appear at all under simpler performance measures based merely on size and quantity. To produce this impact ranking, an arbitrary publication output threshold of 100 papers was used; that is, only institutions having at least 100 papers published during the stated period were included.
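The impact ranking described above is, in effect, a filter-and-sort: drop institutions below the 100-paper threshold, then rank the rest by citations per paper. A minimal sketch, with invented institution names and counts standing in for the real tabulation:

```python
# Hypothetical (paper_count, citation_count) totals per institution,
# standing in for the article's real tabulation.
institutions = {
    "Univ. A": (450, 1100),
    "Univ. B": (120, 520),
    "Univ. C": (80, 400),   # below the 100-paper threshold
}

THRESHOLD = 100  # minimum papers for inclusion, as in the article

# Keep only institutions meeting the threshold; compute citations per paper.
impact = {
    name: cites / papers
    for name, (papers, cites) in institutions.items()
    if papers >= THRESHOLD
}

# Rank by impact, highest first.
ranking = sorted(impact.items(), key=lambda item: item[1], reverse=True)

# Univ. B (about 4.33) ranks above Univ. A (about 2.44); Univ. C is excluded
# despite a high per-paper rate, because it falls under the threshold.
print(ranking)
```

Note how the threshold changes the outcome: a small institution with few but well-cited papers is deliberately screened out, which is the trade-off the article's "arbitrary" cutoff accepts.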
Just as it is worth noting which institutions are influential performers independent of mere size (i.e., influence calculated on a per-paper basis), it is also worth mentioning those that are both big and produce high-impact research papers. The institutions that appear in all three tables are: Univ. of California-Berkeley; Univ. of California-San Diego; Stanford Univ.; Univ. of Pittsburgh; Univ. of Michigan; and Univ. of Washington.
Among the lean and mean, Carnegie Mellon Univ. is the recent star performer. Others of medium size (200-400 papers published) include: Univ. of Oregon, Univ. of California-San Francisco, Cornell Univ., Univ. of Oxford, Johns Hopkins Univ., Univ. of Miami, Duke Univ., New York Univ., Vanderbilt Univ., Univ. of Waterloo, and McMaster Univ.
Plainly, no one measure can summarize all aspects of performance, so it is generally best to obtain multiple measures and build up a broad profile to round out one’s tabulation of institutional achievement.
Rank Changes Since 1992
The previous study, reported in the November 1992 Observer for the period 1986-90, ranked the top 50 institutions by impact and used a threshold of 100 published papers. Carnegie Mellon Univ. was ranked first in that study, as it is in this one. Other institutions that appeared in the top 25 previously and in this study are: Princeton Univ., Univ. of Oxford, Stanford Univ., Univ. of Pittsburgh, Univ. of Oregon, New York Univ., Univ. of California-Berkeley, Vanderbilt Univ., Univ. of Michigan, MIT, Univ. of California-San Diego, and Univ. of Washington.
The newcomers to the top 25 in 1990-1994 are: Univ. of Denver, New York State Psychiatric Institute, Univ. of California-San Francisco, Univ. of Rhode Island, Cornell Univ., Johns Hopkins Univ., Univ. of Miami, Duke Univ., Univ. of Waterloo, National Institute of Mental Health, McMaster Univ., and SUNY-Stony Brook. The institutions that dropped out of the top 25 since the November 1992 report are: Univ. of Vermont, Univ. of Toronto, Univ. of Pennsylvania, Medical Research Council (United Kingdom), Univ. of Illinois-Urbana-Champaign, Univ. of Chicago, Temple Univ., Northwestern Univ., Univ. of Rochester, Harvard Univ., Univ. of California-Los Angeles, and Indiana Univ.-Bloomington.
Author Output, Influence, Impact
The table below ranks the authors of the total 57,561 recently published psychology papers by output, influence, and impact. The data are based on all-author tabulations, which means each author listed on a paper received full credit for each paper and its subsequent citations. For the ranking by impact, only authors who published at least 10 papers during the period are listed.
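The all-author tabulation described above gives each listed author full credit for a paper and all of its citations, rather than a fractional share. A minimal sketch of that counting scheme, using made-up paper records:

```python
from collections import defaultdict

# Hypothetical records: each paper lists its authors and its citation count.
papers = [
    {"authors": ["Smith", "Jones"], "citations": 30},
    {"authors": ["Smith"], "citations": 10},
    {"authors": ["Lee", "Jones", "Smith"], "citations": 12},
]

output = defaultdict(int)     # papers credited per author
influence = defaultdict(int)  # citations credited per author

# Full credit: every author on a paper receives the whole paper
# and all of its citations, not a share divided among coauthors.
for paper in papers:
    for author in paper["authors"]:
        output[author] += 1
        influence[author] += paper["citations"]

# Impact = citations per paper, computed per author. (The article applies
# a 10-paper minimum here; this toy data set is too small to show that.)
impact = {author: influence[author] / output[author] for author in output}

print(dict(output))  # Smith appears on all 3 papers
```

Under this scheme, a paper with three authors is counted three times across the author table, which is why all-author totals exceed the 57,561 unique papers.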
Quantitative measures, such as publication and citation counts, can often provide a unique bird’s-eye perspective on a given field of research over a specific period. Such a view is not dependent on the observer’s past experience or social network, nor is it colored by research conducted years before the period in question, however vivid that earlier work may remain in memory. The collection of such measures is meant simply to enrich and better inform the work of those who must, ultimately, make subjective judgments about issues such as research strategy, the allocation of research funding, or where to attend graduate school. And, as with any measure in science, publication and citation measures have specific uses and should be interpreted by people who understand both the bases on which they are determined and their limitations.