A Call to Change Science’s Culture of Shaming
New forms of media are making it easier and easier for us to react to, and comment on, research within our community. Although free-flowing comments and criticisms can often push an argument or research program forward in a good direction, they can also derail, and perhaps even threaten, the process. I invited guest columnist Susan Fiske, a former APS president, to think about the impact that the new media are having not only on our science, but also on our scientists. Importantly, Fiske’s column is not intended in any way to be an attack on open science, but rather is a timely reminder that psychological scientists are not immune from using social media in destructive ways.
-APS President Susan Goldin-Meadow
The premature release of an earlier draft of this column provoked an online firestorm. In the spirit of colleagues’ feedback improving one’s work, this revision reflects some of the more constructive responses. The less constructive responses merely illustrate my point and are not acknowledged here. One development in parallel with this column is an independent online statement that people can sign to express concern: “Promoting open, critical, civil, and inclusive scientific discourse in Psychology,” which can be found here. Thanks to those who express support of mutually respectful discussions of our science.
Our field has always encouraged — required, really — peer critiques. But the new media (e.g., blogs, Twitter, Facebook) can encourage a certain amount of uncurated, unfiltered denigration. In the most extreme examples, individuals are finding their research programs, their careers, and their personal integrity under attack. In a few rare but chilling cases, self-appointed data police are volunteering critiques of such personal ferocity and relentless frequency that they resemble a denial-of-service attack that crashes a website by sheer volume of traffic.
Only what’s crashing are people. Some immoderate and unmoderated attacks create collateral damage to targets’ careers and well-being, with no accountability for the people engaging in the toxic behavior. The sheer volume of requests and multiple simultaneous critiques can overwhelm any researcher. More than one scientist reports being asked for a different data set every week for months, consuming all their research time for a semester or more. Several others report automated algorithms generating anonymous emails “correcting” p-values rounded to two places without affecting significance. Taking up research time with what often appear to be unnecessary or excessive demands can be one form of harassment.
In other cases, the tone of online critiques sometimes involves inappropriate comments that presumably would not occur face to face. Someone posted that my late father (a methodologist) would be ashamed of me. Others have impugned my motives for writing this plea for civility. Similarly, some targets have reported to me public assertions of their alleged dishonesty, incompetence, or mercenary motives. Personal insults are not scientific discourse. Indeed, speculations about another scientist’s motives would not appear in any respectful form of peer review.
Our colleagues at all career stages have reported leaving the field because of what they see as sheer adversarial viciousness. I have heard from graduate students opting out of academia, assistant professors afraid to come up for tenure, midcareer people wondering how to protect their labs, and senior faculty retiring early, all reportedly because of an atmosphere of methodological intimidation. I am not naming names of alleged victims because, to a person, these dozens of individuals tell me they are afraid to go public for fear of retaliation.
I am also not naming names of alleged bullies because rare but vicious ad hominem smear tactics are already damaging our field, and they do not represent the majority of us. Instead, I am describing a dangerous minority trend that has an outsized impact and a chilling effect on scientific discourse. I am not a primary target, but my goal is to give voice to others too afraid to object publicly.
To be sure, constructive critics have a role, with their rebuttals and letters-to-the-editor subject to editorial oversight and peer review for tone, substance, and legitimacy. Some moderated social media groups monitor individual posts to ensure they are appropriate. Always, of course, if critics choose to write a personal message to the author, that’s their business. If they request the original data, scientific norms demand delivery within reasonable constraints. All these venues respect the target.
What’s more, APS has been a leader in encouraging robust methods: transparency, replication, power analysis, effect-size reporting, and data access. All this strengthens our field, because APS innovates via expert consensus and explicit editorial policies. Individuals’ research is judged through monitored channels, most often in private with a chance to improve (peer review), or at least in moderated exchanges (curated comments and rebuttals). These venues offer continuing education, open discussion, and quality control. These constructive efforts draw on the volunteer talent of many, in the service of the greater good and respecting the individual investigator.
But some critics do engage in public shaming and blaming, often implying dishonesty on the part of the target and other innuendo based on unchecked assumptions. Targets often seem to be chosen for scientifically irrelevant reasons: their contrary opinions, professional prominence, or career-stage vulnerability.
The few but salient destructive critics are ignoring ethical rules of conduct because they circumvent constructive peer review: They attack the person, not just the work; they attack publicly, without quality controls; they have reportedly sent their unsolicited, unvetted attacks to tenure-review committees and public-speaking sponsors; they have implicated targets’ family members and advisors. Most self-appointed critics do not behave unethically, but some do. One hopes that all critics aim to improve the field, not harm people. But the fact is that some inappropriate critiques are harming people. They are a far cry from temperate peer-reviewed critiques, which serve science without destroying lives.
Let me be clear: This column does not aim to criticize standard peer review or, for that matter, the newer open-science initiatives.
Ultimately, science is a community, and we are in it together. We agree to abide by scientific standards, ethical norms, and mutual respect. We trust but verify, and science improves in the process. Psychological science has achieved much through collaboration, but also through responding to constructive adversaries who make their critiques respectfully. The key word here is constructive.
Look for psychological scientists to share their insights, visions, and concerns about the future of scientific discourse in upcoming issues of the Observer.
What a remarkable article! Enlightening and frightening.
A colleague sent me this stunning article … penned by Andrew Gelman (a professor of statistics and political science and director of the Applied Statistics Center at Columbia University) on his website:
“Statistical Modeling, Causal Inference, and Social Science”
Entitled: What has happened down here is the winds have changed
“Someone sent me this article by psychology professor Susan Fiske, scheduled to appear in the APS Observer, a magazine of the Association for Psychological Science. The article made me a little bit sad, and I was inclined to just keep my response short and sweet, but then it seemed worth the trouble to give some context.
I’ll first share the article with you, then give my take on what I see as the larger issues. The title and headings of this post allude to the fact that the replication crisis has redrawn the topography of science, especially in social psychology, and I can see that to people such as Fiske who’d adapted to the earlier lay of the land, these changes can feel catastrophic.
I will not be giving any sort of point-by-point refutation of Fiske’s piece, because it’s pretty much all about internal goings-on within the field of psychology (careers, tenure, smear tactics, people trying to protect their labs, public-speaking sponsors, career-stage vulnerability), and I don’t know anything about this, as I’m an outsider to psychology and I’ve seen very little of this sort of thing in statistics or political science. (Sure, dirty deeds get done in all academic departments but in the fields with which I’m familiar, methods critiques are pretty much out in the open and the leading figures in these fields don’t seem to have much problem with the idea that if you publish something, then others can feel free to criticize it.)
As I don’t know enough about the academic politics of psychology to comment on most of what Fiske writes about, what I’ll mostly be talking about is how her attitudes, distasteful as I find them both in substance and in expression, can be understood in light of the recent history of psychology and its replication crisis.
Here’s Fiske: …”
What follows is a wonderful commentary but also a valuable referenced timeline of articles/issues that changed the entire perception of psychological research and psychologists by others – beginning with Paul Meehl in the 1960s.
It prompts me to get on and write my next Technical whitepaper – “What is it psychologists do not understand about that word ‘accuracy’?” !!
The only reason many psychologists are upset about comments made in social media is that the claims they make in their own work, talks, and public statements are invariably spun to impress. Hence, they are easy targets for anyone with a grudge, or anyone who wishes to make ad hominem attacks.
This entire profession of psychology could do with:
1. Understanding the constituent properties of quantitative measurement and what that means for any claim they make which relies upon an assumption of quantity.
Michell, J. (1997). Quantitative science and the definition of measurement in Psychology. British Journal of Psychology, 88, 3, 355-383.
2. Understanding the assignment of meaning by reading and fully digesting: Maraun, M.D. (1998). Measurement as a Normative Practice: Implications of Wittgenstein’s Philosophy for Measurement in Psychology. Theory & Psychology, 8, 4, 435-461.
3. Behaving like a real scientist instead of an ersatz marketing consultant: refresh your memory on how to write, speak, and convey facts honestly, without spin, by re-reading Dick Feynman’s article:
Feynman, R.P. (1974). Cargo Cult Science: some remarks on science, pseudoscience, and learning how not to fool yourself. Engineering and Science, 37, 7, 10-13.
Then, like me, you will find that no-one takes umbrage, writes nasty comments, or indeed shows much interest in what you publish, say, or convey publicly – because you are brutally honest about reporting the magnitude of error alongside the accuracy, and the key assumptions made (and justified), for every result you present. And no silly hand-waving about trivial effect sizes unless you can show =empirically= why they may be important (usually for epidemiological reasons, not the kind of defensive tripe served up as a justification by so many psychologists). After all, it’s why this was published:
Ferguson, C.J. (2015). “Everybody knows psychology is not a real science”: Public perceptions of psychology and how we can improve our relationship with policymakers, the scientific community, and the general public. American Psychologist, 70, 6, 527-542.
I’m a scientist, I understand measurement, and I quantify accuracy and error honestly. As Feynman put it:
“… Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can – if you know anything at all wrong, or possibly wrong – to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it.” (p. 11, 1974)
Now look at the articles published in APS journals: how many authors are even aware that they have implicitly assumed psychological attributes vary as SI physics base and derived unit-quantities, then go on to make virtually deterministic claims of effect based upon effects that are more inaccurate than accurate?
Until psychologists show the kind of scientific integrity that Dick Feynman spoke of, they will continue to attract ridicule, insult, and the indifference of the public outside of academia.
And yes, it’s tough to show that integrity – as evidenced by the refusal of almost the entire profession to accept the logic and brute facts of Michell and Maraun. Instead, each scholar has been shunned in various nasty ways by colleagues and the profession. Which is why I have little time myself for those who prattle about being “psychological scientists” without any idea of what it actually demands from you as an individual to assume that mantle.
The shrieks and howls of anguish from many in psychology at being publicly criticised are merely the result of their own disingenuity in the reporting of their research.
Thank you for this updated version of the article. I must say it is somewhat ironic that the very social media reaction this column criticizes appears to have been the reason the original version was rewritten. But I guess that’s not really that important here.
I genuinely empathize with some of the sentiment expressed here. Public shaming and hostility aren’t productive for scientific discourse. But I think it is also important not to lose track of a few points here:
1. The problems with unmoderated social media are not specific to (psychological) science and in fact these twitterstorms aren’t particularly bad compared to the general world. I do agree this is one of the dark sides of these new media that they allow this human tendency to get out of hand so easily. But it is also impossible to simply turn back time on that. Social media are here to stay and short of some serious global-scale censorship this means it is impossible to stop people from publicly criticizing scientific studies using a tone you don’t like.
2. The stories of people opting out of science etc. are harrowing. I have never encountered anyone in real life who had this experience, but I am willing to accept that people might have had these experiences. However, what your article neglects to mention is that for every such story there are also countless stories of people failing to have success in academia in spite of doing excellent research, perhaps because they wasted years trying to follow up on non-replicable findings resulting in nothing useful, because of interference from powerful individuals who dogmatically oppose challenges to their theories, or because they became concerned with the questionable research practices they were coerced into using.
3. I don’t think statcheck posting automatic comments on PubPeer really constitutes online harassment. For one thing, it isn’t selective. It posts its results about any of the 50K papers it analyzed. And if it does flag up any errors that are relatively minor, there is hardly any major stress associated with that. As far as I have seen, nobody suffered any damage to their reputation due to one of these automatic comments – but some may very well have damaged their reputation by their hysterical reaction to these comments.
4. I agree that continued criticism can turn into a kind of denial-of-service attack. People in this situation need appropriate support to manage it better. However, there are a few remedies to this problem. If all data were publicly available (as many journals and funders already require), constant data requests would be a thing of the past. Also, if sharing your data set takes up a semester of your time, then you are probably doing something seriously wrong. If it takes that long to curate and organize a data set for sharing, how can you make heads or tails of it yourself? Finally, if people stopped treating every challenge to their theories as a world-shattering ambush and instead viewed science as the collaborative effort it should be, the discourse would become more civil sooner. Yes, there can be nasty people, but you can kill them with kindness.
In light of recent commentary (http://www.nature.com/news/young-talented-and-fed-up-scientists-tell-their-stories-1.20872) regarding the feelings of GenX researchers and the pressures they are under to compete with more established Baby Boomers for funding, in addition to having to measure up to their elder colleagues for promotion and tenure, I wonder whether much of the animosity being generated comes from a generation (myself included) who feel it is impossible to compete under the current circumstances and who see a rise in ethical lapses in the face of publish-or-perish (or fund-or-perish) expectations. Younger researchers are grappling with how to match the production levels our senior colleagues have established when substantially novel, positive findings are still the expectation (I continue to see fellow reviewers argue that quality papers should not be published because they are not earth-shakingly novel). The inference is that incremental science that replicates or fails to replicate prior findings is not acceptable for publication and is definitely not worth funding. Add to that the increasing skepticism of low-powered studies with weak statistical evidence (which were considered standard a decade ago) and more sophisticated statistical models, and valid, publishable findings are even more difficult to produce.
The feeling among younger researchers is that the bar has been raised even higher with respect both to getting published and to the raw numbers of publications and grants required for advancing in our careers. I was an undergrad in Dr. Fiske’s class at the University of Massachusetts and have nothing but affection for her. I don’t find her defense unreasonable. My sense is that there is growing angst among younger scientists that we are forced to play by a new set of rules, rules that were not in place before and that benefited our elder, more established colleagues (without imputing any mal-intent).
Methodological researchers are now in the process of sniffing out those errors and omissions in published papers. We can assume their motives are generally for the advancement of science, but I do agree that the commentariat that follows is tinged with an air of revanchism, and social media has been a strong venue for such airing of grievances. For such a discussion to move forward, researchers on both sides of the (generational) divide need to focus on the science and not impugn one another’s motives. Insulting the integrity of established researchers clearly does not advance science, nor does attacking attempts to replicate findings and establish the rules of evidence or “truth”.
In short, there is a need to understand the generational divide underlying the divisiveness. As a GenX researcher, I cannot deny the dread of living in the shadows of giants particularly under the current constraints. Since methodological constraints that advance science are not deserving of such angst, naturally our attention turns toward those whom we are measuring ourselves against and who, in turn, serve as the judges of our careers.
Dear Susan Goldin-Meadow,
I wonder whether readers are supposed to consider Susan Fiske’s comments in the Presidential Column as a single opinion, as representative of APS leaders’ opinions, or as representative of APS as an organization. As context matters, it would be helpful to know the context of publishing Fiske’s personal opinions in the Observer. There are many other former presidents, and knowing how they feel about these issues would be helpful.
Presidential columns in the Observer reflect the opinions of the president or the guest columnist, and not the organization itself. I can’t speak to the opinions of former APS Presidents, but I can say that I asked Susan Fiske to comment on the perils of new forms of social media because I had become concerned about cases where the media had been used destructively.
How my open discussion of your research affects your career isn’t any of my concern. That’s between you and your seniority, promotion, and tenure committee. If science is a genuine quest for truth about nature, then how discussions affect careers is merely a secondary issue.
Those of us who work at institutions of lesser-renown, or in the non-tenure track, aren’t necessarily going to be sympathetic to your plight. Welcome to the other side of life!
Unmoderated comments have been going on in private since the beginning of science, in seminar rooms, kitchens, back yards, taverns and long strolls across campus. The only difference now is that such private commentary can be more widely shared.
I am concerned that some within the scientific ‘community’ seem more interested in making life pleasant for themselves than uncovering the wrong-doing that harms patients and wider society.
It may be that speculations about another scientist’s motives, honesty or competence would not appear in any respectful form of peer review, but maybe peer review can be too respectful? No-one can seriously believe that scientists are consistently virtuous and capable. Some do not deserve the respect that they are routinely given. We’ve come to realise that other authority figures should have their motives, honesty and competence questioned, so why not scientists too?
A recent example of complaints about ‘tone’ being used to distract attention from seriously flawed research is the UK’s PACE trial. Here it was patients who were forced to take action after the failure of peer review and the UK research community. It was only after patients took the time to identify and explain problems, and then fight a legal battle to gain access to anonymised trial data, that it was revealed that the trial’s pre-specified recovery criteria showed that treatment led to no improvement in recovery rates. Unjustifiable deviations had led to the supposed success of the interventions developed by the trial’s researchers: http://www.thecanary.co/2016/10/02/results-really-didnt-want-see-key-mecfs-trial-data-released/
For years these researchers have argued that patient complaints about their work constituted ‘harassment’. When the minutes of a private meeting on this campaign of harassment were secured through a freedom of information request, it was revealed that the three forms of harassment they were concerned about were 1) FOI requests, 2) complaints, and 3) debates in the House of Lords. They attempted to use this ‘harassment’ to avoid the release of trial data, but a legal tribunal rejected their concerns and concluded that the “assessment of activist behaviour was, in our view, grossly exaggerated”.
Different people communicate in different ways. Some people, especially patients, can see flawed or spun research as an urgent moral concern in a way that may be off-putting to those researchers who would like problems to be identified with a more gentle tone of voice. It is important to recognise that tone policing often serves to favour the interests of those in positions of authority, and exclude those who are most desperate for improvements to be made. In the case of the PACE trial, it has been actively used to try to smear patients who identified serious problems with an influential piece of research, and distract attention from the legitimate concerns that they have raised.
Like Fiske, I know many young scientists leaving research. The overwhelming reason for their departure is the pressure and competition. Another important factor is disillusionment at the disconnect between successful publication and their perception of real science. In contrast, public shaming or harassment is not a factor in any case I know of (unless you count people who were involved in misconduct). It’s simply not an issue on their radar at all. It would therefore be interesting to know more context about the cases Fiske describes.
On the specific issue of StatCheck comments on PubPeer (one form of “denial of service” alluded to by Fiske), the comments are perfectly clear about the limitations of the tool. There are some false positives, on which the authors are working. But I’m happy to bet that StatCheck’s false positive rate is much lower than for conclusions of papers published in this journal. Statcheck false positives are also much less harmful and easier to correct.
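For readers unfamiliar with what statcheck actually does, the underlying check is mechanically simple: it extracts APA-formatted test statistics (t, F, r, χ², z) from a paper’s text, recomputes the p-value from the statistic and degrees of freedom, and flags reported p-values that don’t match. The sketch below illustrates only that core recomputation for a z statistic; the function names and the rounding tolerance are illustrative choices for this example, not statcheck’s actual implementation or thresholds.

```python
import math


def two_tailed_p_from_z(z: float) -> float:
    """Two-tailed p-value for a standard-normal test statistic.

    For a z statistic, p = 2 * (1 - Phi(|z|)), which equals
    erfc(|z| / sqrt(2)) using the complementary error function.
    """
    return math.erfc(abs(z) / math.sqrt(2.0))


def check_reported(z: float, reported_p: float, tol: float = 0.0005) -> bool:
    """Return True if the reported p is consistent with the z statistic.

    `tol` allows for rounding of the reported value; the 0.0005 threshold
    here is an illustrative choice, not statcheck's actual rule.
    """
    return abs(two_tailed_p_from_z(z) - reported_p) <= tol


# "z = 1.96, p = .05" is consistent; "z = 1.96, p = .03" would be flagged.
consistent = check_reported(1.96, 0.05)
flagged = not check_reported(1.96, 0.03)
```

A mismatch flagged this way says nothing by itself about intent or even about the paper’s conclusions, which is precisely why the commenters above distinguish such automated consistency checks from harassment.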
Journals should stop accepting papers with tiny sample sizes, borderline correlation claims, bad statistics & other dubious practices. They will not. Peers should refuse to review such papers because it is not science. Science must put clear blue water between itself and sciencey cargo cult.
If science can’t clean its own dirty linen in private, social media will in public. Paul Barrett hit the nail on the head, but the problem isn’t only in Social Psychology.
As for “peer critiques”: who are the peers in Social Psychology? Other social psychologists who’ve all adopted the same bad statistical practices and mindset. How about farming out peer review to statisticians as well? Pay them if you must.
PS: I had a bad experience recently reading a paper making evidence-based claims from data. The authors’ press release was précised at 12 web news sites. It was a junk paper and the claims were untrue. But the peers and journal editor were happy with it. Even after they had to change 95% of one data set because they copied it wrong, the editor said he and the peers were happy to accept that the paper’s conclusions had not substantially changed!! I’m not a scientist, but when cargo cult sciencey tosh is published I will say so.
“Some immoderate and unmoderated attacks create collateral damage to targets’ careers and well-being, ” <- I would love to see those 3 authors have their careers damaged for publishing that junk article. [ No. It was not Social Psychology. ] Perhaps 3 more competent or honest academics will get work instead?
As a young scientist (in psychology/behavioral neuroscience) nearing completion of my PhD, I know of exactly 0 young scientists who have left research because of public criticism of established research/ers. However, I know of at least a handful of people in my cohort or in adjacent cohorts who have left research because:
1. They were disillusioned by the amount of “bad science” that exists in the published record and because they were tired of the pressure to publish what they considered to be “bad science” in order to help their PIs win grant funding or to improve their own chances at getting funded. Try criticizing an established researcher as a student and see how it feels to get legitimate concerns dismissed on the basis that you are “inexperienced” and do not have “sufficient expertise to articulate legitimate criticisms”.
2. Because they felt like the cards were stacked against them and that staying in science was against their own best interest — the low-hanging fruit have been picked (but perhaps via flawed studies that no one cares about validating), the contemporary research questions are extremely complicated and difficult to approach, projects that would have been published in Nature or Neuron in 2007 are difficult to get published in a low-tier journal in 2016, the standards of rigor have been raised, funding has been reduced, and jobs in science have little security.
I love science, but it is hard not to feel that to stay a scientist is to handicap my ability to have a stable future. My fiancée (whom I met in graduate school) left science for industry before completing her PhD, and now she makes 4 times as much as I do, has a predictable work schedule, and good job security. She gets raises when she learns new skills or surpasses expectations! I get a line on my CV that may not mean anything for me in the future. Not to mention, I have to pretend not to notice (or worse, participate in and indulge) all the noise-mining that goes on around me and that gets published in high-tier journals, while trying to carefully comb through the literature to find things that are probably worth studying.
To be clear, no one in my cohort cares about the careers of established older PIs. Many of us have little faith in the quality of their work, in the strength of their commitment to science when there is potential for damaging their careers and/or “professional reputation” or challenging their “expertise”, and would be happy to see them and the system that supports them collapse so that we could do science we have faith in. In summary, there is little sympathy for those above from those below.