Rhythms of Research

With the December issue of Psychological Science, James Cutting ends his editorial stewardship of APS’s flagship journal. Cutting’s tenure was a time of tremendous expansion of PS; he ushered it from bimonthly to monthly publication in 2004 and handled a dramatic growth in the number of submissions.

Following is an editorial reprinted from the December 2007 issue of Psychological Science, a fitting capstone for Cutting’s final issue. APS is indebted to James Cutting for his leadership of the journal and his contributions to the field.

Over four years, my staff and I amassed a wealth of data related to the psychology of work (e.g., Brett & Drasgow, 2002; Smither, 1988), and more specifically, to the psychology of work performed by psychological scientists doing their professional duty — publishing and reviewing. During this period, we received more than 3,850 submissions to Psychological Science and obtained more than 4,850 reviews. The collective pace with which these authors and reviewers worked provides insights into the rhythms of research.

Monthly Patterns
Submissions to any journal arrive in seemingly aperiodic clumps with considerable variability. The top panel of Figure 1 shows the record across the 48 months during which I received submissions. Sustained growth is clear, but periodic variability is equally undeniable. The middle and bottom panels present two annualized plots. The first shows actual submission receipts for each month as a proportion of the number expected if there were no growth in receipts within a year. The pattern is clear: The first five and last three months of the year had a relatively uniform submission rate. Summer, however, was much more active. In June and July, researchers produced about 1.2 times the mean number of expected manuscripts, and August and September were not far behind. Obviously, the summer provides a sustained period during which research can be done and manuscripts written.

Nonetheless, the assumption of no within-year growth is incorrect: gradual overall increases are offset by local declines. These gradual increases are taken into account in the bottom panel of Figure 1, which plots the residual monthly proportion of submissions after continuous growth has been factored out. Unlike the panel above it, this plot shows a semester effect. That is, in addition to the summer bulge, it shows a peak at the beginning of each semester and a gradual falloff thereafter. January shows a substantial recovery, coming on the heels of the year-end break.
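
To make the two annualized measures concrete, here is a minimal sketch in Python. The monthly counts are simulated, and the choice of a simple linear growth trend is my assumption; the editorial does not specify the form of the fit.

```python
# A sketch of the two annualized measures: monthly receipts as a proportion
# of a no-growth expectation (middle panel), and residual proportions after
# a fitted growth trend is factored out (bottom panel). Counts are simulated;
# the linear trend model is an assumption, not the editorial's method.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(48)
# Simulated receipts: steady growth, a June-September bulge, and noise.
counts = (60 + 0.8 * months
          + 12 * np.isin(months % 12, [5, 6, 7, 8])
          + rng.normal(0, 4, size=48))

# Middle panel: each month's receipts as a proportion of its year's mean,
# i.e., the expectation if there were no growth within the year.
yearly = counts.reshape(4, 12)
prop_no_growth = (yearly / yearly.mean(axis=1, keepdims=True)).mean(axis=0)

# Bottom panel: residual proportions after removing continuous growth.
trend = np.polyval(np.polyfit(months, counts, 1), months)
residual_prop = (counts / trend).reshape(4, 12).mean(axis=0)

for m in range(12):
    print(f"month {m + 1:2d}: "
          f"no-growth prop {prop_no_growth[m]:.2f}, "
          f"residual prop {residual_prop[m]:.2f}")
```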

Of course, this description assumes that most researchers are at academic institutions that follow the U.S. semester pattern. A number of schools are on quarter systems, most European academic years differ slightly from those in the United States (September is a summer month; June is not), the academic calendars of the northern and southern hemispheres are six months out of phase, and a number of researchers work in industries with different cycles. Despite this varied backdrop, the pattern in the bottom panel of Figure 1 seems dominated by a domestic semester rhythm of research. Note also that the decline in submissions is steeper in the fall term than in the spring. This surprised me; I would have thought that the summer break provides a larger well of data to draw upon in subsequent months than does winter break.

Weekly Patterns
Just as submissions can vary by month, they can vary by day of the week as well. Figure 2 shows receipt of submissions and also reviews by day of the week. Consider first the pattern for submissions, shown in the left panel.

Most submission data were recorded by date of e-mail. For nonelectronic submissions, we used the date on the cover letter, the postmark date, or the date recorded on the courier pack. Dates of receipt were then binned into 79 consecutive groups of 50 submissions each, with the groups used as replications. Notice two effects. First, submissions declined dramatically over the weekend. This trend is hardly surprising for nonelectronic submissions, but it was evident for electronic ones as well. Second, submissions tended to increase across weekdays. For us, this effect was most prominent during 2003 and 2004, when we received many manuscripts by regular mail and by courier; the effect was not reliable in 2005 and 2006, when manuscripts were sent almost exclusively by e-mail. Nonetheless, a regression analysis revealed a reliable five-day linear trend across all four years, with a slope of 0.0039, F(1, 78) = 4.8, p_rep = .89, d = 0.47. This trend seems to reflect researchers' increasing anxiety to push their manuscripts off their desks and into submission before the weekend.
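
The binning-and-regression step can be illustrated with a short Python sketch. The weekday probabilities below are simulated to mimic the pattern in Figure 2; the actual dates, and the exact regression specification the journal used, are assumptions on my part.

```python
# Bin consecutive submissions into groups of 50 and fit a linear trend in
# the per-weekday proportion across Monday-Friday, with groups (bins) as
# replications. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulated weekdays (0 = Monday ... 6 = Sunday): a suppressed weekend and
# a rise of about 0.004 per day across the work week, as in the editorial.
weekday_probs = [0.175, 0.179, 0.183, 0.187, 0.191, 0.040, 0.045]
days = rng.choice(7, size=3950, p=weekday_probs)

n_bins = len(days) // 50                     # 79 groups of 50
bins = days[: n_bins * 50].reshape(n_bins, 50)

# Within each bin, the proportion of submissions arriving on each weekday.
props = np.stack([(bins == d).mean(axis=1) for d in range(5)], axis=1)

# Linear trend across the five weekdays, pooling the bins as replications.
x = np.tile(np.arange(5, dtype=float), n_bins)
slope, intercept = np.polyfit(x, props.ravel(), 1)
print(f"weekday slope: {slope:+.4f}")        # expected near +0.004
```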

This trend contrasts markedly with that for reviews, shown in the right panel of Figure 2. The data for reviews were also binned into consecutive groups of 50 (101 groups), according to the order of the manuscripts they were associated with. Reviews, which were always sent to us electronically, were most often completed on Sunday. Monday was also a prominent day, but its reviews may partly reflect reviewers' wishes to look over work done on Sunday before sending it in. The rate of review submissions fell precipitously from Monday through Friday, and the proportion for Friday was hardly greater than that for Saturday. The slope of this negative weekday trend is −0.0344, F(1, 100) = 175.6, p_rep > .999, d = 2.8.

It is clear that researchers allocate writing and reviewing time differentially across the week, working increasingly for themselves and less for other psychologists from Monday through Friday, then catching up on their obligations to others on Sunday — perhaps a religious residue within the laity of scientific research. Let me pursue this analysis, making several assumptions: Reviews and manuscripts are generally written by the same people, likely at the same computer, and during roughly the same slot of daily time. That is, except on weekends, when time is set aside for personal affairs (except perhaps on Sunday evenings), the time available for activities other than teaching, advising, committee work, and the like is generally fixed across days of the week, averaged across all researchers. Thus, time taken to read and write a review of another researcher's manuscript is time that would otherwise be available for reading and writing articles of one's own, and vice versa. Note that I am not including research time for massaging data or time spent in the lab. Note also that in this journal, submitted and reviewed manuscripts are generally of the same length. Thus, comparing the weekday slopes in Figure 2, I conclude that writing a review takes about 11 percent of the effort of writing an article, or that writing an article requires roughly nine times the effort of reviewing an article of the same scope. Assuming researchers write two to three reviews for each article they write, the pattern of reciprocity of peer review is easily understood: The work researchers do on manuscripts written by others will not exceed the work they do on their own manuscripts or the work done on their manuscripts by others.
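
The effort estimate is just the ratio of the two weekday slopes. Here is a worked version of the arithmetic, granting the assumption that reallocated weekday time maps one-to-one between reviewing and writing:

```python
# Effort ratio implied by the two weekday slopes reported above.
submission_slope = 0.0039  # per-weekday rise in submission proportion
review_slope = 0.0344      # per-weekday fall in review proportion (magnitude)

review_per_article = submission_slope / review_slope
print(f"review effort / article effort ~ {review_per_article:.2f}")     # ~0.11
print(f"article effort / review effort ~ {1 / review_per_article:.1f}")  # ~8.8
```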

Holidays and Vacations
How do psychologists apportion their work around holidays and fit their vacations around work? Vacation time varies across hemispheres and countries. Nonetheless, given that over the past four years, 59 percent of all submissions and 73 percent of all reviews have come from researchers within the United States, I analyzed data around U.S. federal holidays, assuming that non-U.S. work effort provided relatively uniform background noise. Except at year’s end, federal holidays fill a 2 × 2 matrix — those always on Mondays versus others, and those that are relatively “minor” versus those that are “major.” There are three minor Monday holidays — Martin Luther King, Jr.’s birthday, Presidents Day, and Columbus Day; one minor calendar-date holiday — Veterans Day; two major Monday holidays — Memorial Day and Labor Day; and two major non-Monday holidays — Independence Day and Thanksgiving. Christmas and New Year’s provide a ninth, much longer, and more universal calendar-date holiday to which I give separate consideration.

Submission and review receipts were analyzed for the eight Monday and calendar-date holidays, the days before and after these holidays, and, as a control, the three corresponding days in the weeks immediately before and after, for which the data were averaged. It became clear that there were no real differences between the Monday and the calendar-date holidays, so these were combined, yielding four minor and four major holidays.

Consider first the minor holidays. One would expect little or no decline in submissions and reviews on these dates. Most universities and colleges do not observe these holidays, and, except for day-care concerns for children in primary and secondary schools, which usually do observe them, there would be little reason to expect any falloff. Indeed, there was none, as shown in the left panels of Figure 3. That is, for both submissions and reviews, the functions for the day before the holiday, the holiday itself, and the day after are essentially the same as the means for the same days of the previous and subsequent weeks, Fs(2, 6) < 1. The dip in submissions on the day after a minor holiday was not a reliable effect, nor would it have any reasonable explanation.

Consider next the patterns for the major holidays, shown in the central panels of Figure 3. In general, one might expect that psychological researchers — like most people in the United States — would pause in their work on Memorial Day, Independence Day, Labor Day, and Thanksgiving. However, this did not exactly happen. With respect to submissions, although there was a slight decline during vacation days and days adjacent to them, there was no reliable change, measured either as a main effect of the vacation week or as an interaction of days and the vacation week, Fs(2, 6) < 1. I find this quite surprising. However, there was a reliable effect concerning reviews, although it manifested itself in a slightly unusual way. That is, reviews were submitted as often on major vacation days as on corresponding days in the weeks before and after, but the day before those vacation days showed a noticeable slump. This interaction was reliable, F(2, 6) = 5.2, p = .048, η_p² = .636. Moreover, the difference in patterns for major and minor holidays was also reliable for reviews, F(2, 6) = 5.86, p = .039, η_p² = .661. The most reasonable explanation for this effect seems to be that researchers did take a vacation, but that on the night of the vacation day (often before going back to work the next day), they wrote reviews.

Finally, consider the patterns for the year-end holiday, shown in the right panels of Figure 3. These data were analyzed differently. The holiday period is long, so submissions and reviews during this period were binned into six consecutive five-day intervals — December 14-18, December 19-23, December 24-28, December 29-January 2, January 3-7, and January 8-12. The control period comprised the corresponding five-day intervals one month before and one month after; for each interval, data were averaged across these two months. Clearly, both submissions and reviews dropped around year’s end, and the interactions of five-day interval and holiday versus control period were both reliable, Fs(5, 15) > 5.17, ps < .005, η_p²s > .63. Interestingly, it appears that researchers accelerated their work a week or so before the holiday in anticipation of a relatively slack period thereafter.
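
For concreteness, the six holiday windows and their month-offset controls can be laid out as below. The exact 30-day offset for the control intervals is my assumption; the editorial says only "one month before and one month after."

```python
# Construct the six five-day year-end intervals and, for each, the two
# control intervals one month (here, 30 days) earlier and later. The 30-day
# offset is an assumption about how "one month" was operationalized.
from datetime import date, timedelta

start = date(2006, 12, 14)                   # December 14, for one season
holiday_windows = [(start + timedelta(days=5 * i),
                    start + timedelta(days=4 + 5 * i)) for i in range(6)]

for a, b in holiday_windows:
    before = (a - timedelta(days=30), b - timedelta(days=30))
    after = (a + timedelta(days=30), b + timedelta(days=30))
    print(f"{a}..{b}  controls: {before[0]}..{before[1]}, "
          f"{after[0]}..{after[1]}")
```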

Do psychological scientists actually take vacations? Yes, at least at year’s end, they do. They also retard some of their professional work a bit just before, but not the day of, a major holiday. But mostly, I assume that psychologists take U.S. federal holidays in stride, working a bit while scheduling their real vacations rather independently of those holidays, scattering their vacations throughout the year without regard to what other psychologists are doing. This leaves manuscript and review flow relatively unperturbed, except for the great summer influx of submissions, the during-semester slowdown of submissions, and the reliable Sunday and Monday bulge in reviews.

Acknowledgments — I thank Caroline Brockner, Erin Hanlon, and Grace Wisser for their support in this project and throughout my tenure as editor of this journal. ♦

References
Brett, J.M., & Drasgow, F. (Eds.). (2002). The psychology of work: Theoretically based empirical research. Mahwah, NJ: Erlbaum.
Smither, R.D. (1988). The psychology of work and human performance. New York: Harper.



