Does Psychology Need More Effective Suspicion Probes?

Suspicion probes are meant to inform researchers about how participants’ beliefs may have influenced the outcome of a study, but it remains unclear what these unverified probes are really measuring or how they are currently being used.


Quick Take

A suspicious gap in reporting • Flying with unvalidated measures • Recommendations for reining in unreliable suspicion probes

Psychologists often temporarily deceive participants about the true purpose of a study to encourage them to act naturally. Sometimes they simply withhold a study’s real aim; in other cases, they use confederates posing as fellow research participants.

Deception can help minimize demand effects, whereby participants alter their behavior in an unconscious attempt to confirm what they believe to be the researchers’ hypothesis. But participants’ suspicions don’t have to be accurate to influence the outcomes of a study. Whether a participant is more suspicious than average by nature, is generally distrusting of psychologists, or suspects that they are being intentionally misled, they may spend cognitive resources trying to determine the true purpose of a study rather than focusing on the experimental task at hand, Daniel W. Barrett (Western Connecticut State University), APS Charter Member and Fellow Steven L. Neuberg (Arizona State University), and Carol Luce (American Express) wrote in a 2023 article for Perspectives on Psychological Science.

Ideally, researchers would be able to assess the prevalence of suspicion within a study using suspicion probes, which, theoretically, allow them to determine how more suspicious participants’ responses may differ from those of their more naive peers, Barrett, Neuberg, and Luce wrote.  

Suspicion probes generally take the form of a set of verbal or written questions posed to participants after the conclusion of an experiment. These might include asking what they thought the experiment was about, whether anything seemed strange or confusing, or how they felt the experiment may have influenced their behavior. Researchers could then decide whether to 

  • exclude highly suspicious participants’ results from the study, 
  • analyze the results of suspicious and naive participants separately, or  
  • analyze suspicious and naive participants’ results together when there do not appear to be significant differences in their performance (each option is sketched below).
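Neither the article nor Barrett and colleagues’ paper prescribes an implementation, but a minimal sketch of these three options, assuming a hypothetical dataset with a post-experiment suspicion score and an outcome measure, might look like this in Python (the column names, cutoff, and data are purely illustrative):

```python
# Minimal sketch of the three analysis options; the dataset, column
# names, and suspicion cutoff are hypothetical illustrations.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "suspicion": [1, 2, 5, 1, 4, 2, 5, 3],           # e.g., 1 = naive, 5 = guessed the hypothesis
    "dv": [3.2, 3.5, 4.8, 3.1, 4.4, 3.6, 4.9, 3.8],  # dependent variable
})
is_suspicious = df["suspicion"] >= 4  # illustrative cutoff

# Option 1: exclude highly suspicious participants.
naive_only = df[~is_suspicious]

# Option 2: analyze suspicious and naive participants separately.
group_summaries = df.groupby(is_suspicious)["dv"].describe()

# Option 3: pool everyone only if the groups do not differ significantly.
_, p = stats.ttest_ind(df.loc[is_suspicious, "dv"], df.loc[~is_suspicious, "dv"])
analysis_sample = df if p > .05 else naive_only
```

The cutoff in the sketch is arbitrary; as the researchers note, the field has no validated standard for deciding when a participant counts as suspicious.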

But a number of methodological issues obscure both the prevalence of participant suspicion and its influence on experimental outcomes, Barrett and colleagues explained. Most significantly, the suspicion probes themselves have never been validated.

“The field has more or less forgotten about this and pretends that suspicion isn’t a problem or assesses it in a superficial manner, assuming that this tool measures it without any evidence that it does so,” Barrett said in an interview with the Observer. “We encourage researchers to rethink assumptions about this critical but underanalyzed component of research methods.” 

Although researchers currently lack the tools necessary to measure the full impact of participant suspicion, this knowledge gap likely has implications not only for studies that employ deception but for the integrity of psychological science as a whole, Barrett, Neuberg, and Luce continued. 

“Participant suspicion may be an influential factor in the ongoing replication crisis as it is an undetected variable that has been shown to affect the outcomes of psychology research,” Barrett said. Understanding the true influence of participant suspicion on experimental findings will require researchers to develop new tools for measuring suspicion, reexamine existing ones, and consider the social context in which experiments occur. 

A suspicious gap in reporting 

To gauge how researchers use suspicion probes, Barrett, Neuberg, and Luce analyzed 1,130 studies published between 2000 and 2022 in the Journal of Personality and Social Psychology (JPSP). In their review, 75% of studies reported employing some form of deception, but only 12% of all studies reported using probes to assess how participant suspicion may have influenced the study’s findings. Of those, only 77% reported the results of their suspicion probes, and just 30% discussed scoring those results, though papers often stated only that the results were scored, not how this was done.

These findings conflict with the results of a survey Barrett and colleagues conducted in 2000 with 147 Society for Personality and Social Psychology members. In that survey, 52% of respondents who employed deception in their research reported also using suspicion probes. This suggests either that respondents exaggerated how often they used suspicion probes or that researchers may be omitting information about suspicion probes from their publications, Barrett and colleagues wrote.

A 1985 analysis of the 1979 volume of JPSP led by John G. Adair (University of Manitoba) found that 11% of all studies, and 31% of studies that employed deception, reported using suspicion probes, the researchers noted. By contrast, although 12% of all studies in Barrett and colleagues’ analysis used suspicion probes, roughly matching the earlier overall rate, just 13% of the studies that employed deception did so. This suggests that it may be less common than in the past to use, or at least to report using, suspicion probes to evaluate participants in deceptive studies.

“Regardless, what is clear—even from the most optimistic estimates—is that accurately ascertaining the prevalence of suspicion is difficult given that suspicion does not seem to be assessed and reported with great frequency,” Barrett, Neuberg, and Luce wrote.  

Flying with unvalidated measures 

In a related study of how experimental manipulations were reported in the 2017 issues of JPSP, research psychologists David S. Chester and Emily N. Lasko (Virginia Commonwealth University) found that just 31 of 348 manipulations were followed by a suspicion probe. Even the studies that did use probes did not describe the procedure in detail. And although five of these probes resulted in participants being excluded from analyses, the researchers did not explain why those participants were considered suspicious enough to exclude or how excluding them may have influenced the findings of the study.

The analysis also showed that researchers repeatedly used unvalidated, “on the fly” measures in their studies, Chester said in an interview with the Observer. In fact, just 58% of the manipulations used in these studies were accompanied by any evidence of their validity, such as a pilot study, a manipulation check, or a citation to a previous study. And suspicion probes in particular have never been systematically validated.

“So if it’s true that the majority of social psychologists are using suspicion probes, we’re all using completely unvalidated measures and potentially making decisions about our studies based on them,” Chester said. 

The absence of information about how researchers are using suspicion probes reflects a “view of research participants as passive receptacles of stimuli,” wrote Olivier Klein (Université Libre de Bruxelles) and colleagues in a 2012 Perspectives on Psychological Science article. This approach to research may stem from earlier psychologists’ attempts to raise psychology’s status to that of a “genuine scientific discipline” like physics, in which social context is irrelevant, Klein and colleagues continued. This perspective has been reinforced by a model of cognitive neuroscience that emphasizes how cognitive processes arise from neural processes in the brain while glossing over the social context in which these processes occur, the researchers argued. 

“This focus on neural, internally driven accounts of behavior relegates social influence to the status of a mere source of noise that needs to be controlled or eliminated,” Klein and colleagues wrote, but understanding the role of this “noise” is essential to understanding psychological science. “All psychology experiments are fundamentally social endeavors that necessarily involve cooperation between researchers and participants.” 

For this reason, the researchers noted, experimenters’ expectations need not be framed as polluting the “real” results of a study; rather, understanding how they may influence participants’ behavior could help illuminate the role of social context in the processes being studied.

“We have to remain sensitive to the fact that it will never be possible to control all possible influences of these factors, for psychological experiments never take place in the ideal form that is achievable in, say, physics,” Klein and colleagues wrote. “Participants are not zombies who will respond in a predictable manner to stimulation. Instead, they continuously exert their intelligence.” 

In their analysis of articles published in JPSP between 1965 and 2010, Klein and colleagues found that use of the terms “experimenter bias” and “demand effects” peaked before the 1980s, suggesting that interest in the influence of social context on experiments has since declined. Additionally, in an analysis of 170 studies published in a 2005 issue of Psychological Science and 176 studies published in a 2011 issue of JPSP, the researchers found that the majority of studies in both journals failed to mention whether the experimenter was present during the study, how the study was presented to participants, and whether deception was used. Furthermore, even when studies noted the use of deception, most did not mention employing a suspicion probe to determine how this may have influenced participants’ behavior.

“Taken together, these findings indicate that, at least in the sample of articles we surveyed, relatively little attention is paid to informing the reader about the social context of the experiment,” Klein and colleagues wrote. 

In response to this dearth of information, Klein echoed Barrett’s concerns about replication.

“This may, in part, explain why it has proven so difficult to replicate psychology experiments,” said Klein in an interview with the Observer. “Addressing this problem demands [researchers] provide detailed information about the presentation of the study to research participants.” 

Recommendations for reining in unreliable suspicion probes 

Though suspicion probes can help address demand effects, they are not a perfect solution, as participants may not always report having guessed a study’s hypothesis even when they have done so, Klein said. A more reliable, but more costly, alternative would be to run a variation of each study in which participants are explicitly told to act in line with the researchers’ perceived expectations, he suggested. If the expected effect emerged in both versions of the study, demand effects could account for the outcome.
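Klein does not specify how such a comparison would be analyzed, but a minimal sketch in Python, with simulated data and hypothetical parameters, can illustrate the logic:

```python
# Minimal sketch of comparing a standard study against a "demand" variant
# in which participants are told how the researchers expect them to act.
# The data are simulated and the effect sizes are arbitrary illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def run_version(effect_size, n=50):
    """Simulate one version of the study and return the p value for the predicted effect."""
    treatment = rng.normal(loc=effect_size, scale=1.0, size=n)
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    return stats.ttest_ind(treatment, control).pvalue

p_standard = run_version(effect_size=0.6)  # ordinary, uninstructed participants
p_demand = run_version(effect_size=0.6)    # participants told the expected behavior

# If the predicted effect appears in the demand version as well, demand
# characteristics alone could produce it, weakening the standard result.
if p_standard < .05 and p_demand < .05:
    print("Effect in both versions: demand effects could account for it.")
elif p_standard < .05:
    print("Effect only in the standard version: harder to attribute to demand.")
```

The comparison gives researchers a baseline: the demand variant shows what participants produce when they know exactly what is expected, against which the standard result can then be judged.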

Short of this, providing more detailed information about experimental conditions in journal articles could at least help clarify the social context in which a study occurred, Klein and colleagues wrote. 

As suggested by their review of articles in JPSP, the lack of information about when suspicion probes are used and what researchers do with participants’ responses makes it difficult both to identify common practices and to develop evidence-based solutions, Barrett and colleagues wrote. As a first step to address this problem, psychological science journals could require researchers to report on the use of suspicion probes, including descriptions of the probe, how it was scored, and the researchers’ justification for including or excluding certain participant responses from the data. 

“Such information would allow for a more fine-grained peer review of the study, contribute to our knowledge about suspicion in experiments, provide more transparency about how suspicion may have affected the research, and facilitate replication,” the researchers wrote. This could help pave the way for further research focused on how suspicion influences participants’ behavior, which would also require experimentally validating suspicion probes and developing trait-suspiciousness scales to accurately measure and determine the source of participants’ suspicion, they continued. 

“Suspicion is a complex, multidimensional construct,” they wrote. “Investigating and better understanding this complexity would be fertile ground for new research on the research process itself.” 

Chester, however, went a step further, arguing that researchers should abandon the use of suspicion probes until they can be validated. 

“There is a place for [suspicion probes] in pilot studies to get a sense of [participants’] experiences in a study, but we should not be using them in a quantitative sense to make decisions about including or excluding participants in our studies because we can’t use invalid measures to make those decisions,” Chester said. 

Excluding suspicious participants could also disproportionately remove people from marginalized groups from research, he added.

“Some people are more suspicious than others for reasons that may have nothing to do with your study,” Chester said. “Marginalized populations that have been historically exploited by scientists have every right and reason to be suspicious of research and researchers. And by excluding suspicious people we are probably excluding marginalized groups from our studies. So it’s really potentially very problematic.” 

Designing a valid suspicion probe poses a significant challenge because asking about suspicion could itself alter participants’ responses, causing them to reinterpret the study through a suspicious lens, Chester noted. Until this issue is resolved, suspicion, and perhaps suspicion probes themselves, will continue to influence psychological science in ways that have yet to be fully illuminated.


References 

Barrett, D. W., Neuberg, S. L., & Luce, C. (2023). Suspicion about suspicion probes: Ways forward. Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916231195855  

Chester, D. S., & Lasko, E. N. (2021). Construct validation of experimental manipulations in social psychology: Current practices and recommendations for the future. Perspectives on Psychological Science, 16(2), 377–395. https://doi.org/10.1177/1745691620950684  

Klein, O., Doyen, S., Leys, C., Magalhães de Saldanha da Gama, P. A., Miller, S., Questienne, L., & Cleeremans, A. (2012). Low hopes, high expectations: Expectancy effects and the replicability of behavioral experiments. Perspectives on Psychological Science, 7(6), 572–584. https://doi.org/10.1177/1745691612463704  

