Scientific Integrity: We Have Met the Enemy and It Is Us

A sad chapter in scientific history closed recently when, after a 10-year investigative saga, the National Institutes of Health (NIH) exonerated a Tufts University biologist of scientific misconduct. The high-profile investigation, involving work done in the MIT laboratory of Nobel laureate David Baltimore, included congressional hearings and U.S. Secret Service analyses of laboratory notebooks. But despite the “no fraud” conclusion, all of us in the scientific community were tainted by the charges, and we now work in an environment where protecting against scientific misconduct is a top priority. How can we, as psychologists, understand scientific dishonesty and use our knowledge to enhance integrity?

Research suggests that deception is both ubiquitous and difficult to detect. From white lies, to perjured testimony, to deceptive politics, there is substantial evidence that deceit is commonplace. Unfortunately, we are relatively poor “lie catchers,” and research does not offer much help in identifying prevaricators. In the case of research integrity, unless there is physical evidence of data fraud or plagiarism, it is very difficult to draw unequivocal conclusions about guilt. In psychology’s most discussed case, the well-documented charges that Cyril Burt manufactured both data and collaborators, the debate over the allegations continues. Detection methods alone seem unlikely to ensure integrity and may serve only to catch the unclever.

It is difficult to discern dishonest scientific practices, in part, because we have extraordinarily complex norms. Inherent to the research process is complex decision-making about how to collect, analyze, and report data. The data forgery alleged in the MIT case was subtle, yet, had the allegations been sustained, it would have been a clear violation of scientific norms. Interestingly, we do not have absolute prohibitions against “creating” data. In psychology, as in other fields, there are a host of acceptable methods for imputing missing data. We are also adept at using analytic procedures to manipulate data in order to find expected results. Such procedures are usually permissible, particularly if we report accurately what we have done.

Complex norms also govern how we cite the work of colleagues, and even clear-cut rules are sometimes misinterpreted. Several years ago, a plagiarist published an American Psychologist article, two-thirds of which was drawn verbatim from a monograph I had co-authored. I was told that the plagiarist thought she was merely reporting what was in the literature. Curiously, what she did would have been acceptable had she made minor changes in language and cited my work properly. Although the putative author bore the brunt of responsibility, the problem was the review process that allowed the article to be published.

The complexity of these norms makes allegations of scientific misconduct difficult to assess. With respect to the MIT case, if we accept NIH’s final conclusion, then it is tempting to view the problem as having arisen from the ambition or jealousy of the accuser, who was a junior colleague of the researcher. But the record suggests that the young scientist genuinely believed that the data were problematic and that it was her responsibility to raise questions. It is not difficult to understand how an investigator convinced of a theory sees positive results even in their absence; so, too, is it plausible that frustration at not finding particular results leads a researcher to see mendacity in another’s successful efforts. We assume that absolute truth exists, but it, and our cognitive abilities, may be too complex for such a truth to be found.

The MIT case, as well as my own experience with plagiarism, illustrates the fundamental attribution error, the tendency to explain outcomes as having been caused by people rather than situations. This tendency is particularly pronounced when we view others’ behavior rather than our own. In an ambiguous situation, such as the report of a complex experiment, it is not surprising that anomalies would be attributed to individual malfeasance; it is even less surprising in the case of one who plagiarizes. But we cannot ignore the situational pressures created in laboratories and academic departments where “publish or perish” is often taken literally.

Such situational pressures create a system that rewards those who get significant results and encourages researchers to use every means to find significance. For students, as well as established scientists, the stakes are very large, from the right to hold a job to the ability to conduct research. With journals proud to reject 90 percent or more of submissions, and some granting agencies eager for similar odds (on the theory that it will, eventually, get them more funds), it is not surprising that standards have risen to what may seem impossible levels. Unfortunately, we have no way to reward researchers for simply “playing the game” with integrity.

What makes reinforcing integrity particularly difficult is that individuals may genuinely hold different truths, and our perceptual worlds are filled with ambiguity. We do not believe we are being dishonest when we pay another an inaccurate compliment, partly because we know the praise makes the person feel good. And we often temper our critiques of students’ work, not because deception is a natural response for a teacher, but because good pedagogy requires that we support our mentees. Often, we disagree with one another on theoretical or empirical grounds. Although we may be tempted to see such disputes as arising from others’ dishonesty, our opponents may be as genuine in their beliefs as we are.

Dishonesty can, of course, reflect a deliberate effort to deceive. Even so, the deceiver needs to justify what he or she does. During wartime, a political or military leader who deceives the enemy may feel fully justified. How different is that situation from that of a scientist who feels his or her deeds justified because he or she is on the brink of a discovery of substantial benefit to society? There may be base motives as well, as when a scholar engages in dishonesty simply to save or further a career. But some research suggests that even these individuals feel fully justified in their deception and develop elaborate explanatory frameworks.

Although surveillance and punishment are necessary tools for maintaining scientific integrity, we need to restrain our desire to make them our principal weapons. Such techniques may have only limited utility and may, inadvertently, increase the level of dishonesty by making individuals less likely to admit errors. We need an environment in which scientists have internalized honesty and in which it is acceptable to reveal mistakes. We also need to reduce the stakes that, for scientists, ride solely on the outcomes of their research.

Indeed, some psychological research suggests that, with enough pressure, almost anyone will engage in deception. Some have argued that science is inherently self-correcting and that we should not worry about integrity because false findings will be corrected by failed replications. But the cost of following a line of research based on faulty data is very high.

Given the slow pace at which scientific discoveries accumulate, self-correction does not seem a very workable solution; relying on it might further encourage dishonesty and reckless publication. Rather, we need to shift the current emphasis on individuals and work to change the environment in which science and scholarship are conducted. Our efforts should emphasize supporting honesty as a value rather than detecting and punishing dishonesty.

To avoid future debacles over integrity and their devastating effects on individuals and the scientific community, two tasks seem critical. First, we need to lessen the pressure to obtain significant findings (in both the statistical and the ordinary sense). Those who review tenure cases, journal submissions, and grant applications need to moderate their emphasis on outcomes. Second, we need to educate ourselves and our students to recognize the pressures that lead to dishonesty and to respond to those pressures in ways that maintain integrity. The alternative to these changes is a draconian system of surveillance that would make most of us cringe. None of us wants to work in a research environment where lawyers are integral members of the research team.

If psychological science were irrelevant to modern life, few would care whether we were truthful. But society cares deeply about what we are learning, and it makes our work possible. We have an obligation to ourselves, as well as to those who support our research, to ensure integrity. We must learn from recent scientific history and apply our knowledge of behavior to prevent incidents of scientific dishonesty, proven or unproven.

