‘Highly Cited,’ Highly Controversial

The “Highly Cited” list of psychologists that the Institute for Scientific Information posted on the Internet has unleashed a squall of controversy about the list’s relevance and, more broadly, the use of citations as an indicator of influence in the field.

Following the summer 2003 Internet publication of the list of 242 psychology and psychiatry researchers who had, according to the ISI databank, toted up the most citations from 1981 to 1999, messages began flashing across cyberspace like lightning.

APS Charter Member James Coyne, University of Pennsylvania, called attention to the list by posting a listserv message to the 525 subscribers of the Society for a Science of Clinical Psychology, or SSCP, forecasting the stormy reception. “Undoubtedly,” Coyne said, “this list is going to set off the same controversy, outrage, and consternation – and maybe a little reflection and critical thought – that the lists for other disciplines have generated.”

He outlined the contours of that controversy: “Where are the social and personality psychologists? On whose coattails did X get there? Where is Y? How did those psychiatrists who went brain-dead after their fellowships get there? How many psychiatrists can an invalid biological test drag onto the list?”

ISI has been indexing citations in scholarly articles for more than 40 years. It is now using that database to compile unranked lists of the researchers who were most frequently cited in 21 different scientific fields during the 1980s and 90s. ISI simply counted the number of times each researcher’s name appeared as an author of any cited article, then listed the top 1.5 percent of those names alphabetically. It promises to update the lists periodically, and is already working on a 1983-2002 update.
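In computational terms, the procedure ISI describes is a straight tally. The sketch below mimics that logic on toy data; the names, the data structure, and the rounding of the 1.5 percent cutoff are illustrative guesses, not ISI's actual implementation:

```python
from collections import Counter

# Each cited article is represented as its list of author names.
# The records below are invented purely for illustration.
cited_articles = [
    ["Smith", "Jones", "Lee"],
    ["Jones"],
    ["Lee", "Garcia"],
    ["Smith", "Jones"],
]

# Credit every listed author of every cited article equally,
# regardless of author position -- the policy described above.
counts = Counter(name for authors in cited_articles for name in authors)

# Keep the top 1.5 percent of names, then present them
# alphabetically and unranked, as ISI does.
cutoff = max(1, round(len(counts) * 0.015))
top_names = [name for name, _ in counts.most_common(cutoff)]
print(sorted(top_names))
```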

On its Web site, ISI calls the list a “gateway to the most highly influential scientists and scholars worldwide” and “an invaluable tool to identify individuals, departments and laboratories that have made fundamental contributions to the advancement of science and technology in recent decades.”

Therein lies the root of the controversy: do raw counts equate with “influence” and “fundamental contributions”?

It didn’t matter whether an author was the principal investigator or the tenth of 14 listed; he or she was credited with a citation. That suggested a strategy to Michael Miller, University of Minnesota (not on ISI’s list): “If you want to get on the list, form a paper-writing club with some other big-name people. Say there are 10 of you. Each will write 10 papers per year and every paper will have 10 authors (or more). To keep it legal, you will read/edit/comment on the papers written by other members of the club. Suddenly, you’ll publish 100 papers a year and in 19 years, 1,900 papers. Oh, almost forgot: Remember to cite your earlier work!”
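Miller’s numbers are easy to verify; the club’s multiplication works exactly as he says (values taken from his hypothetical, nothing more):

```python
members = 10      # size of the hypothetical paper-writing club
papers_each = 10  # papers each member writes per year
years = 19        # roughly the 1981-1999 window ISI counted

# Every paper lists all ten members, so each member is credited
# with every paper the club produces.
papers_per_member_per_year = members * papers_each  # 100
career_total = papers_per_member_per_year * years   # 1,900
print(papers_per_member_per_year, career_total)     # 100 1900
```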

BUILT-IN BIAS
A cross-section of psychologists was asked to comment on the list, and more than one pointed out that the list has built-in bias. Coyne noted that some researchers “are disadvantaged by their areas of expertise, lacking a pool of like-minded peers to cite them, even if their work has a large impact in their area.” Further, “some journals are excluded because of insufficient demonstration of quality – some new journals must wait for their credibility to be established. Good stuff can be missed.” Coyne knows firsthand: even though he did make the ISI list, his two most frequently cited articles were missed because they appeared in early issues of Health Psychology and the Journal of Behavioral Medicine.

“Systematic error is almost always more worrisome than noise,” said Scott Lilienfeld, Emory University (not on the list). One of the list’s most obvious biases, he said, is that it is “heavily skewed toward psychiatry and neuroscience journals.”

He identified three likely reasons. First, “So many of these journals have appeared in recent years, and the authors keep citing other papers published in the same journals.” Second, based on his own observation, psychiatrists tend to read only psychiatry journals, while psychologists often read, and cite, both psychiatry and psychology journals. And third, it is “far more normative” in psychiatry and medicine than in psychology to credit every lab member as an author on every paper, even those who contributed little.

Social psychologist Richard Petty, Ohio State University, who is on the ISI list, said he has seen little evidence in his field of such “packing” of authors. It could be countered, the APS Fellow and Charter Member suggested, by weighting citation counts according to the researcher’s position on each authorship list, but “in the end, I think it would affect the citation impact of relatively few people.”
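Petty named no particular formula, but one option discussed in the bibliometrics literature is harmonic authorship credit, in which each paper’s single unit of credit is divided among its authors in proportion to 1/position. A minimal sketch, offered only as an illustration of the idea, not as a scheme Petty himself proposed:

```python
def weighted_credit(position: int, n_authors: int) -> float:
    """Harmonic authorship credit: one illustrative weighting
    scheme, not a formula from the article."""
    harmonic_sum = sum(1 / k for k in range(1, n_authors + 1))
    return (1 / position) / harmonic_sum

# The tenth of 14 authors earns a tenth of the first author's share,
# which would blunt the "packing" strategy Miller joked about.
print(f"{weighted_credit(1, 14):.3f}")   # ~0.308
print(f"{weighted_credit(10, 14):.3f}")  # ~0.031
```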

Still another source of bias, Lilienfeld said, is that some research areas are temporarily “hot” and get a lot of research attention, but it’s debatable whether those authors will have long-term impact. “These researchers are often widely cited,” he said, “because there are many of them working in a relatively small number of laboratories who keep citing each other’s papers. But their impact may be considerably less than these citations imply.”

APS Fellow Ed Diener, University of Illinois Urbana-Champaign, agreed that a temporarily “hot” area of research “will certainly influence the list. In this case, we can look at citations after a period of years in order to obtain a list of classics that stand the test of time.”

To Diener, a social psychologist who is on the list, the bias favors clinical psychologists. “An important question about the ISI list is why clinical psychologists are so heavily represented compared to other fields. It might be that fields that are larger, where more researchers are working and publishing, are always represented more heavily because the number of citations is larger. Or it might be the journals that were used.

“My question is whether there are a substantial number of false negatives – highly cited individuals who for some reason are not listed. There are quite a few renowned names who did not make the list, and it is unknown why. I won’t mention names here, but there are some very important and active social psychologists whose names do not appear on the list.”

Marie McVeigh, Thomson ISI’s product development manager, said she and others who designed the Highly Cited program are “acutely aware that publication patterns differ” among scientific fields and that the differences can influence citation analysis.

But McVeigh insisted that the list remains valuable despite its limitations. “It’s one metric of the influence on the literature of a field,” she said. “We don’t equate that uniformly with the scientific importance of a discovery, but a flash-in-the-pan article is going to be washed out by 19 years of other citation data. It may be highly cited for only one or two years.” The implication is that temporarily popular articles will eventually fade from the counts while truly influential work endures.

For his part, cognitive psychologist James McClelland, Carnegie Mellon University, an APS Fellow and Charter Member who also is on the list, said it was “strange that picking up on ‘what is hot’ would be viewed as a bias.”

“I would have thought that finding out what is hot would in fact be one of the very things such indices would be seen as useful for. Whether a researcher’s influence is lasting, at least by citation measures, can of course be assessed by using subtle measures, like the half-life of the citation rate, to index the duration of the ‘hot flash.’ Of course, we know that heat is only partially correlated with light.”
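The half-life McClelland mentions can be operationalized in more than one way. One simple version, sketched here with invented citation histories, is the number of years a paper takes to accumulate half of its citations to date:

```python
def citation_half_life(yearly_citations: list[int]) -> int:
    """Years until half of all citations to date have accrued --
    one crude reading of the half-life idea, not McClelland's."""
    total = sum(yearly_citations)
    running = 0
    for year, count in enumerate(yearly_citations, start=1):
        running += count
        if running >= total / 2:
            return year
    return len(yearly_citations)

# A "hot flash" paper versus a slow-burning classic (made-up data):
print(citation_half_life([80, 40, 10, 5, 5, 5]))    # 1
print(citation_half_life([5, 10, 20, 30, 40, 40]))  # 5
```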

Bias of the negative kind also sprouts from success. “Well-known findings and principles are often rarely cited today simply because they are so well accepted and have become incorporated into the corpus of belief,” said Lilienfeld, citing as Exhibit One “Construct Validity in Psychological Tests,” the classic 1955 paper by Lee J. Cronbach and Paul E. Meehl that “exerted an enormous, almost incalculable influence.” Because its principle is so thoroughly accepted today, most researchers never bother to cite Cronbach and Meehl, neither of whom made the list.

“Ironically,” Lilienfeld said, “one of the best ways to be sure that one is left off of such a list may be to develop a principle that has become so widely established that no one bothers to cite it anymore.”

APS Fellow Richard McNally, who is on ISI’s list, endorsed that notion, recalling a panel discussion at Harvard University, where French sociologist of science Bruno Latour was then on sabbatical. Unrecognized by everyone in the audience except the speaker, Latour observed that “disappearing into the sediment of a progressing scientific discipline is the best that one can ever hope for,” McNally said.

The field needs a “gold standard” for the list to have any meaning, according to “unlisted” APS Charter Member Howard Eisman, Coney Island Hospital. “Since our fields really have no gold standards of social impact, and since we are only beginning a science of clinical psychology, I don’t see where this list can have any value or even meaning.

“Influencing a discipline could ultimately be of no social, intellectual, or scientific value. One psychoanalytic ‘big wig’ we know certainly influences and impacts psychoanalysis, but psychoanalysis is a closed discipline unconnected to any science or general social concerns, and his influence just goes round and round and comes out nowhere,” Eisman said, requesting anonymity for the subject of his observations. “Thus, his life’s work, and his multitudinous citations, will ultimately only exist as words in print in increasingly dusty, unread tomes.”

To determine who has had impact, look beyond that person’s work, Eisman said. “Unless you look outside the venue, you can confuse echo with impact.”

But “it’s just a list,” replied APS Fellow and Charter Member John Kihlstrom, University of California, Berkeley. “I don’t see any major anomalies. I recognize most of the names, and if it ever crossed my mind to think ‘what the hell is he doing on this list,’ the answer would be ‘he’s worked long enough, or done enough work in a hot enough area to garner a lot of citations.’ Citation counts are a blunt instrument, but they’re just citation counts, and as such are raw data subject to interpretation and hypothesis-testing.”

Kihlstrom is not on the list and is unconcerned. “Frankly, I do think this is a tempest in a teapot,” he said. “Sure, you can be cited a lot because you write a lot of papers, or because you wrote one paper that was really good, or because you wrote one paper that was really bad and everybody wrote to say so. But people in this last category won’t appear on other lists, and so the whole thing evens out. I just didn’t get what the fuss was all about, and I still don’t.”

“But it isn’t just a list,” Lilienfeld countered, “as presumably it is intended to provide a rough indicator of overall contribution. Otherwise, why draw it up in the first place?”

Implicit in the list, he said, is a four-fold table: those who did and didn’t make it, and those who should and shouldn’t have made it. “If this weren’t a four-fold table, then there would have been no claim to make in the first place about whether citation counts are meaningful.”

Shortcomings aside, the list does serve as a detector of distinction, McNally said, citing Dean Keith Simonton (Scientific Genius: A Psychology of Science, Cambridge University Press, 1988), who wrote: “[T]he number of citations that a scientist earns is the single most accurate predictor of scientific distinction, as gauged by such rare honors as the Nobel Prize.”

“The citation index is probably as good an indicator of influence as we have,” McNally said, “but influence is not necessarily the same as substantive contribution.” He offered an example of speciously lauded work: “One psychologist claims that moving one’s eyes back and forth while recalling a terrifying memory hastens recovery from the effects of traumatic stress. Although researchers have failed to find any convincing evidence that wiggling one’s eyes helps victims overcome trauma, this psychologist’s work is widely cited. Ergo, a high citation count does not necessarily signify a substantive achievement; controversial, flash-in-the-pan articles may accrue many citations for a year or two.”

APS Fellow and Charter Member Emanuel Donchin, University of South Florida, also finds the list a good indicator of influence. “My influence on psychophysiology, such as it is, existed before ISI chose to include me in the list. There is also no question that those papers of mine that have been cited over and over again have had an influence. Even if the citations are by authors who are critical of my methods, findings and concepts, it is self-evident that the cited papers influenced the actions of my peers.”

Donchin said he also found it useful, as a department chair, “to know when one of my colleagues is widely cited, and to know who and where this colleague is cited, as it is a clear indicator of impact.” Still, “the global count is not as interesting as the distribution of citations over the papers, and their distribution over time and the citing journals. You want to watch for self-citations. It makes a difference if you are cited mostly by the mutual admiration society of your students and immediate colleagues, or you are cited by virtually anyone publishing in the area.”
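Donchin’s distinction between the global count and the structure behind it is easy to make concrete. A minimal sketch, using entirely invented citing records, that separates self-citations from the breadth of citing journals:

```python
# Invented records for one cited author: (citing_author, citing_journal).
citations = [
    ("Donchin", "Psychophysiology"),          # a self-citation
    ("Former Student", "Psychophysiology"),
    ("Unrelated Lab", "NeuroImage"),
    ("Unrelated Lab", "Clinical Neurophysiology"),
]

self_cites = sum(1 for author, _ in citations if author == "Donchin")
distinct_journals = {journal for _, journal in citations}

print(f"self-citation share: {self_cites / len(citations):.0%}")
print(f"breadth: cited in {len(distinct_journals)} distinct journals")
```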

McClelland pointed out that his own department at Carnegie Mellon has four researchers on the Highly Cited list: three of them received distinguished scientific contribution awards and two are members of the National Academy of Sciences. “Since most of these awards were given before these ISI indices came out, it seems likely that the ISI indices at least in part reflect the perceived importance of the contributions of the individuals.”

He said he even monitors his own citation rates “to see where I am having an impact. I think this may be helpful for younger scientists as well. This doesn’t mean I won’t pursue a line of work that experience tells me will not lead to huge numbers of citations, or that I would advise a junior colleague against pursuing such a line of work. But it provides food for thought and reflection about the nature of one’s influence that is useful for self- as well as other-guidance.”

MEDIA STARS AND SCHOLARS
According to Coyne, the list could also be useful in assessing the credibility of so-called “experts” who achieve higher-than-warranted visibility with the public and in the field.

To support his case, Coyne cited some work by US Circuit Court of Appeals Judge Richard A. Posner, University of Chicago Law School, who did “an interesting study of the inverse relationship between citations and willingness to pontificate on topics outside one’s expertise.”

According to Coyne, the thesis of Public Intellectuals: A Study of Decline (R. A. Posner, Cambridge, MA: Harvard University Press, 2001) is that “many accomplished scholars get tempted by the prospect of fame and fortune and write books for lay audiences that almost always mark a hasty decline in scientific impact as measured by citations.”

Coyne belongs to a group that is exploring some of the very questions implicitly raised by the ISI list, investigating “why credible, empirically validated therapies do not become disseminated and popular, while other, less credible therapies capture the public’s attention,” he told the Observer. The group hopes to access a variation of the software used by the US government to track the structure of e-mail contacts among suspected terrorists. “We look to see who reads what books. Behind all of this silliness is the sophisticated notion of linkage or network analysis. It has lots of serious applications.”
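Coyne gave no details of the software, but the core of the linkage analysis he describes is simple to state: treat shared behavior, such as two books read by the same person, as a link, and count the links. A toy sketch with invented reading records:

```python
from collections import Counter
from itertools import combinations

# Invented records of who reads which books.
reads = {
    "reader_a": {"Therapy X", "Therapy Y"},
    "reader_b": {"Therapy X", "Therapy Y"},
    "reader_c": {"Therapy Y", "Therapy Z"},
}

# Link two books whenever one reader read both; heavily linked
# pairs reveal the network structure Coyne alludes to.
links = Counter()
for books in reads.values():
    for pair in combinations(sorted(books), 2):
        links[pair] += 1

print(links.most_common())
```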

OF PAY AND TENURE
A practical, pocketbook use for lists like ISI’s is to help make tenure and promotion decisions. In the mid-1990s, Coyne recounted to the SSCP network, some faculty at one Midwestern university thought they were underpaid relative to their accomplishments. One of them was APS Charter Member Randy Larsen, now at Washington University in St. Louis. When Larsen mapped citations against pay, the exercise revealed “notable departures from the idea that one should get rewarded for accomplishments,” Coyne said.

The department responded by awarding raises where they were due. “Some folks there are obsessed enough with this measure,” Coyne said, that they now track ISI updates the way “some track their TIAA-CREF” retirement accounts.
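Coyne did not describe Larsen’s method, but mapping citations against pay amounts to asking how strongly the two quantities covary. A sketch with hypothetical numbers, not Larsen’s data:

```python
# Hypothetical figures for six unnamed faculty members.
citations = [120, 450, 300, 90, 800, 200]
salaries = [95, 88, 102, 110, 85, 90]  # in thousands of dollars

def pearson_r(xs, ys):
    """Pearson correlation, computed from scratch for transparency."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A weak or negative r is the kind of "notable departure" Coyne
# described: reward failing to track accomplishment.
print(f"r = {pearson_r(citations, salaries):.2f}")
```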

APS Fellow Kenneth A. Dodge, a developmental psychopathology researcher at Duke University who is on the list, said he believes using citation counts to help guide promotion, tenure, and review decisions is “appropriate because it is an objective measure of how many times one’s research is used by other researchers.”

However, Petty said that in areas where citations lag publication by three years or more, they “are not particularly useful for evaluating junior faculty members or for tenure decisions,” although they could be for more senior faculty, “if used in combination with other indices of accomplishment or eminence.”

“I suspect that most tenure and promotion committees recognize that citation lists are useful, but fallible, indicators of influence and impact,” Lilienfeld said. “I doubt that most place undue weight on such lists, although there are almost surely some exceptions.”

“We do need to be careful about how we interpret such listings,” Coyne noted. “I think a number of us have the same reaction: citation analysis is a terrible way of doing things that happens to be better than its alternatives. As someone who is willing to take unpopular positions when I can defend them, I see citation analysis as a healthy alternative to depending on courting favor with those in power in the profession.”

Donchin noted that all measurement processes have biases and possibilities of error. “The question is whether the metric is used in a mechanical, bureaucratic manner or in a sophisticated manner. Unfortunately, the so-called ‘accountability’ movement tends to push for brain-dead use of paperwork metrics, and in this framework, citation counts are no less bad than all other tests and measures favored by legislators and accreditation bodies. In short, you have to use the metric wisely and carefully. But this is true of just about any metric, isn’t it?”

Eisman offered his SSCP colleagues a decisive suggestion, perhaps the calm before another unavoidable storm: ISI should simply crown an annual “Citation Champion.” And, he informed them, “I am changing my last name to ‘et al.’”

