Psychological Assessment in Legal Contexts: Are Courts Keeping “Junk Science” Out of the Courtroom?

Psychological Science in the Public Interest (Volume 20, Number 3)

Psychological tests, tools, and instruments are widely used in legal contexts to help determine the outcomes of legal cases. These tools can aid in assessing parental fitness in child custody disputes, can affect the outcomes of disability proceedings, and can even help judges determine whether an offender should go to prison, remain incarcerated, or be exempt from the death penalty.

In this issue of Psychological Science in the Public Interest (Volume 20, Issue 3), Tess M. S. Neal, Christopher Slobogin, Michael J. Saks, David Faigman, and Kurt F. Geisinger present a systematic review of 364 psychological assessment tools reported to have been used in legal cases across 22 surveys of experienced forensic mental health practitioners. In addition to evaluating the characteristics and validity of these tools against legal and scientific standards, Neal and colleagues analyze legal challenges to the admission of evidence derived from them, asking whether, when, and how often the results of these assessments are questioned in court. The report thus provides an evaluation of both the scientific basis of psychological assessment tools and the courts' scrutiny of that basis.

Expert Evidence: The (Unfulfilled) Promise of Daubert

By D. DeMatteo, S. Fishel, and A. Tansey, Drexel University


Analyzing the psychological assessment tools used in court

The psychological tools Neal and colleagues assessed included aptitude tests (e.g., general cognitive and ability tests), achievement tests (e.g., tests of knowledge or skills), and personality tests. They analyzed measures designed to assess adults and youth that could be used to address questions such as competence to stand trial, violence risk, sexual-offender risk, mental state at the time of the offense, sentencing, disability, child custody, civil commitment, child protection, civil tort, guardianship, competency to consent to treatment, juvenile transfer to adult court, fitness for duty, and capacity to waive Miranda rights (the right to remain silent). A team of coders classified each tool according to its general acceptance in the field (i.e., whether, on the basis of published surveys, experienced forensic mental health practitioners frequently use and endorse it), whether it had been subjected to empirical testing, whether that testing had been peer reviewed, and its overall technical and psychometric quality. The evaluation of each tool's technical and psychometric quality relied on information about the tool's performance in forensic contexts and its psychometric properties (e.g., validity), as reported in the Mental Measurements Yearbook (MMY), Strauss and colleagues' (2006) compendium of neuropsychological tests, and Grisso's (2003) compendium of forensic competency measures.

Most of the tools used in courts (90%) have been subjected to testing, but information about general acceptance was available for only about half of them. Of the tools for which general-acceptance data were available, only about two thirds could be considered generally accepted by the psychological community at large, and a third were clearly not accepted. Moreover, only 40% had favorable reviews of their psychometric and technical properties in authorities such as the MMY. These findings indicate that although many psychometrically strong tests are used in forensic practice, not all of the tests in use are generally accepted or have been evaluated as having strong technical and psychometric properties.

Courts’ scrutiny of psychological assessment evidence

Judges are expected to apply admissibility criteria to the psychological assessment tools used in court, but they appear to struggle to do so, which may explain why assessments are rarely challenged or scrutinized in court, even when they should be. Neal and colleagues focused this analysis on 30 of the 364 tools studied earlier, examining whether and how courts discussed and challenged them. They screened a database of federal court cases from 2016 to 2018 and identified 372 cases that involved the use of at least one of the 30 tools of interest. For each case, they determined whether the tool's admissibility had been challenged and, if so, on what grounds and with what result. Of the 372 cases, only 19 involved a challenge to a tool's admissibility or to the admissibility of testimony relying on the tool, and in only 6 cases was the psychological-assessment evidence ruled inadmissible. Most challenges focused on fit (i.e., whether the tool informs the specific question at issue) or validity (i.e., whether the tool measures what it purports to measure), and the former resulted in more exclusions of testimony than the latter. There was also little relation between a tool's quality and the likelihood of its being challenged: The three tools with the most unfavorable reviews, none of which were generally accepted, were not challenged at all.

Suggestions for psychologists, law practitioners, and members of the public

Given the mixed quality of the assessment tools used in court and how rarely they are challenged, the authors suggest that psychological scientists should develop stronger measures and that experts should be encouraged to use tools that are valid and suitable for the task at hand. In particular, Neal and colleagues point out that practitioners should be aware that a tool might be valid only for specific purposes (i.e., context-relevant validity). The authors also note that attorneys and judges have access to low-cost or free online resources that can provide basic information about different tools; for example, the MMY describes the purpose, appropriate population, score ranges, and quality of more than 3,500 tests. Law practitioners would thus be better placed to evaluate the foundations of an expert's testimony and whether the information a tool provides is relevant to the case. Similarly, members of the public who interact with psychologists in the legal system (e.g., litigants) can seek out information about psychological tools so that they can discuss them with their attorneys during the legal process. Overall, Neal and colleagues hope that their findings will encourage psychological scientists, psychologists serving as experts in legal contexts, attorneys and judges, and members of the public to improve their own and others' knowledge of psychological assessment and to question these tools more often. In this way, the authors suggest, psychological experts involved in legal cases might be held to the highest standards of practice.

Criteria for admissibility of scientific evidence in court—Daubert

In an accompanying commentary, David DeMatteo, Sarah Fishel, and Aislinn Tansey examine in detail the criteria for admissibility of expert testimony and, in light of Neal et al.'s article, how well these criteria are keeping "junk science" out of the courts. In 1993, in Daubert v. Merrell Dow Pharmaceuticals, Inc., the Supreme Court of the United States articulated four criteria for the admissibility of scientific evidence: (a) whether the underlying methodology has been or can be tested empirically, (b) whether it has been subjected to peer review and publication, (c) whether it has a known or potential error rate, and (d) whether it has achieved general acceptance in the relevant scientific community. Daubert has since been extended to all forms of expert evidence and is the admissibility standard in all federal courts. However, as Neal et al. showed, judges and attorneys seem to struggle to apply Daubert because they lack the knowledge and training to fully understand the tools used by forensic scientists. As a result, inaccurate evidence may be regularly admitted into court proceedings, with dire consequences. In line with Neal et al.'s suggestions, DeMatteo and colleagues propose that judges and attorneys become more informed about scientific matters (e.g., law schools could offer basic science courses) and that psychologists receive better forensic training and select appropriate assessment tools.



Comments

So, what tests passed muster (60% agreed on) and which ones didn't? The article was just cited on the CBS Morning Show as casting doubt on the utility of psychological testing, which does a great disservice to everyone in the field.

I am 85 years old, have been practicing for 56 years, and am a very frequent "testifier." Your article exactly mirrors my concerns, particularly about the Rorschach. I would love to read the most recent article on these issues in PSPI but am having difficulty getting it. Can you help? I'd appreciate it.

