Loved, hated, and a source of widespread controversy, journal impact factors (JIF) have taken on a unique role in scientific publishing. These little numbers are considered a measure of a journal’s importance. However, in an article in Perspectives on Psychological Science, Peter Hegarty and Zoe Walton question whether JIF actually measures the importance of psychological-science articles.
JIFs, which reflect how often the articles published in a given journal are cited, are traditionally calculated using citations from the Web of Science database. But this database is more focused on the natural sciences than on psychological science, so it may underestimate the importance of psychological-science articles. Hegarty and Walton instead analyzed citations from the database PsycINFO to assess how well JIF relates to an article’s importance to psychological science.
The authors scoured PsycINFO for citations of more than 1,000 articles published in 9 leading psychology journals. They discovered that JIF was not the best predictor of the number of citations an article actually received. In fact, an article’s length and number of references predicted its citation count more accurately. They also found that JIF was biased against articles whose first author was a woman.
So what do these findings mean for JIF and psychological science? Overall, JIF may underestimate the impact of both social-science research and research conducted by women. The authors grant that JIF has some validity in measuring the impact of psychological-science articles, but they caution that “predictions and decisions that are made solely on this basis are not ideal.”
Hegarty, P., & Walton, Z. (2012). The consequences of predicting scientific impact in psychology using journal impact factors. Perspectives on Psychological Science, 7(1), 72–78. DOI: 10.1177/1745691611429356