Between Truth and Advocacy

Phoebe C. Ellsworth has spent four decades applying her expertise as an empirical researcher to hot-button policy debates about decision-making in juries, attitudes toward the death penalty, and eyewitness identification. Like many other researchers bridging the gap between basic science and applied research, the 2016 James McKeen Cattell Fellow has faced the dilemma of balancing her professional values as a researcher with her personal values as an advocate.

“Applied researchers may be more biased because we believe our findings will do good for the world, not just good for science,” Ellsworth said during her award address at the 2016 APS Annual Convention in Chicago. “On the other hand, I think being a scientist is a fundamental part of our identity, and it would be really devastating to most of us to be accused of cheating or even unconsciously biasing science. So that’s what the dilemma is.”

Ellsworth focused her address on the challenges of conducting unbiased research while also working as an advocate on social policy issues.

“This is an award for applied research, so what I want to do today is open a discussion about a dilemma that’s faced by all of us who do applied research, and that’s how to do unbiased research on controversial social issues,” Ellsworth said. “Our field discovered these biases. We ought to be aware of them.”

Are You a Plato or a Pericles?

The debate about whether an individual can successfully balance being an advocate for social change while retaining a fair, unbiased view dates back to the contrasting career choices of Pericles and Plato in ancient Greece.

“According to Plato, you can either be an advocate for policy, like Pericles, or you can be a seeker of truth, like Plato,” Ellsworth explained. “If you’ve ever read Plato, there’s no question which is the deeply inferior choice to make and which is the superior choice to make. And he says you simply can’t do both.”

This point of view is still commonly expressed today: It is impossible, the argument goes, to do scientifically acceptable research on any topic that involves your personal values. That leaves applied researchers with a simple choice: Give up the values or give up the research.

However, scientists often are drawn into policy-focused research specifically because they see an opportunity to correct the biases propagated by earlier so-called “basic” research.

Ellsworth experienced such bias firsthand when she was rejected from graduate school at Harvard University for being a woman. When she asked why her outstanding academic record wasn’t enough to gain entrance to Harvard’s psychology program, she was told that a study had concluded that “women lack the stamina to make it to the PhD.”

“The attitude was that competence in all its variety — scientific competence and leadership competence — is somehow crammed onto this tiny shriveled-up little Y chromosome,” Ellsworth said. “And although I’m the first to admit that some feminist research has been biased, it’s not more biased than the basic research it was responding to.”

Applying Science to the Law

Huge policy decisions are made every day not on the basis of current research but on outdated intuitions about human behavior. When the Supreme Court decided that group decision-making was the same in a six-person jury as in a 12-person jury, it relied on its own common sense, not on any research evidence, Ellsworth said. Two years later, the Court again used common sense to conclude that giving up the rule of unanimity would not change the way juries deliberate, she added.

“It’s not that we had tons of evidence one way or the other, but that we had no evidence one way or the other,” Ellsworth said. “That was offensive to many of us. We shouldn’t be making decisions about human behavior without any information on human behavior.”

The reality, she said, is that without the research, it’s impossible to know whether a jury of six people makes decisions as carefully and accurately as a jury of nine or 12 or 50 people, or whether a jury that doesn’t have to reach unanimity will reason as thoroughly, and consider all points of view as carefully, as a jury that does.

Even the selection of research questions is fraught with accusations of bias, Ellsworth said.

“I once did a study showing that giving any individuating personal information about somebody who has committed a murder — like, ‘This murderer eats Cheerios for breakfast’ — [causes] people suddenly [to] get a lot more lenient” when deciding whether to mete out a death sentence, Ellsworth said. “This is not diagnostic, I think, of murderous tendencies, although I haven’t done the research to find out.”

Somebody could easily point out that this study humanizes the murderer and is therefore biased, Ellsworth noted. If the study had instead focused on humanizing the victim, people might favor the death penalty more strongly, she said.

“In choosing the research I do, I admit that the choice is likely to be biased, but I don’t think it’s realistic to ask people not to study the questions that interest them,” she said.

Ellsworth doesn’t believe that the pursuit of one’s social values has to bias one’s methodological practices. For those pursuing both scientific rigor and social justice, she has identified three areas where researchers should be especially vigilant about the influence of their values and biases: the evaluation of research, the conduct of research, and the communication of research.

Methods Section Madness

Ellsworth offered several practical steps that all researchers — not just those working on policy-related issues — can take to reduce the role of bias in evaluating the research they read. One idea is simply to read the methods section of an article first.

“Before you know the findings, read the methods and then read the results — forget about the introduction and discussion,” Ellsworth advised. When it comes to evaluating research during the peer review process, she suggested that journal editors consider sending out the methods and results sections for a first pass at review, then sending the whole article for a later final review.

Another simple step to reduce bias, Ellsworth suggested, is actually talking to the people who disagree with you: “When I’ve said this to colleagues, they say, ‘I don’t know anybody who disagrees with me.’ Well, that’s actually part of the problem, sweetie. I’m not entirely persuaded by the argument that it’s impossible to find somebody who disagrees with your brilliant ideas, because they are out there.”

Checklists, a practice that has helped make surgeries safer, also could help weed bias out of research, she said. Famously championed by the surgeon Atul Gawande, checklists in the operating room help ensure that no important safety steps are left out during surgery. Similarly, a simple checklist covering basic methodological factors, such as experimenter bias or control groups, could be applied when conducting or evaluating a study.

“You actually become a lot more trustworthy person if you’re willing to admit that even people who agree with you can do quite inferior research,” Ellsworth noted.

When Reporters Come Calling

When it comes to communicating applied science, research that directly addresses social issues often reaches a far wider audience among the general public and the media. There is already tremendous pressure in science to overclaim, and that added attention can push researchers even harder toward overblown claims.

“Reporters want your research to be earth-shaking. They want to hook their readers by saying that ‘Science will never be the same again after this astonishing scientific breakthrough,’ so you want to dampen that down,” Ellsworth suggested. “Reporters kind of like unanswered questions, [and] raising new questions is sometimes a good way to distract them from the fact that you haven’t actually changed American civilization as we know it.”

Ellsworth also pointed out that whose voice ends up being heard in the media is itself a haphazard process. If you miss a reporter’s call, they’re likely to move on to the next expert on their list.

“We can assume that our colleagues will be skeptical about our claims, but people who are not scientists — legislators, judges, reporters, and the public — are more likely to read only the introduction and discussion,” Ellsworth said.

To help battle misinformation about the death penalty, Ellsworth serves on the Board of the Death Penalty Information Center, a reliable database of factual information for both the media and the public. She supports creating similar repositories of validated, expert information on other topics within psychological science.

“What I’m really hoping,” Ellsworth finished wryly, “is to raise ideas among you that you will send to me, and I’ll put them into a paper and take credit for all of them.”

