The law deals fundamentally with human behavior. It deals with choices people make — to do right or wrong, to lie or tell the truth. It deals with perceptions and memories — what color shirt was the mugger wearing? It deals with decision making — should I vote guilty or not guilty? So it is only natural that psychological scientists should play a role by bringing science to bear on these questions.
Legal psychology is often thought of as a relatively recent area — it was only in 2001, for example, that the APA recognized forensic psychology as a subfield — but psychology has been actively applied to legal contexts nearly as long as there have been psychologists. Psychologists have been testifying as expert witnesses on topics like memory distortion and false confessions, and writing books about psychology and the law, for well over a century. But it is only in recent decades that psychologists have begun to have a real influence on how criminals are caught and on how to avoid problems like mistaken identification that put the wrong people behind bars.
We All Love Serial Killers
In its connection to the law, psychology has perhaps garnered the most public interest for its role in helping the good guys outwit the really, really bad ones — the cold-blooded killers, bombers, and rapists whose heinous crimes dramatically challenge the deductive acumen of the people hunting them. Note: psychology, but not psychologists.
The methods go back to the time of the fictional Sherlock Holmes: The solitary genius profiler uses a mix of personal investigative experience and intuition to build a picture of the offender’s lifestyle and personality from clues left at crime scenes. The first “modern” case was the profile of Jack the Ripper created in 1888 by police surgeon Thomas Bond. Based on his autopsy of one of the killer’s victims, Mary Kelly, as well as details of the investigation, Bond surmised that the killer was a quiet, middle-aged, well-dressed, cape-wearing, hypersexual loner who didn’t know anatomy and thus wasn’t a doctor or a butcher. It sounds specific on the surface, but in the context of 1880s London, did it really narrow things down? It certainly didn’t help police catch the culprit, who remains unknown to this day.
This is more or less how offender profiling has been done ever since, without the track record crime-fiction audiences are led to believe and without much grounding in empirical research.
The man who turned offender profiling from a deductive art into a true psychological science is David V. Canter, now at the University of Huddersfield in England. Canter singlehandedly founded the field known as Investigative Psychology, and in the late 1980s he demonstrated the effectiveness of applying science to catch criminals by helping police in the south of England catch the “Railway Rapist,” a man who had been raping and then strangling young women who were waiting on railway platforms at night.
Most psychological scientists interested in personality and individual differences look at people’s characteristics to predict how they will behave in various life situations. The criminal investigator has the opposite, and considerably tougher, task, according to Canter: to use evidence of a person’s behavior to construct a picture of his or her individual characteristics.
To try to catch the Railway Rapist, police had interviewed and taken blood samples from myriad suspects in a database of people with violent criminal records. But when Canter applied his methods to the problem, he told the police to narrow their search considerably — to seek a skilled or semi-skilled laborer in his mid to late 20s who worked weekends and had few friends; who was interested in martial arts, swords, and knives; who was physically small and had feelings of unattractiveness; who lived within the vicinity of his first crime; and who had other highly specific characteristics. His description precisely fit one man in the database, John Duffy, who was later given seven life sentences based on the forensic evidence found at his home.
The method of “psychological offender profiling” Canter has developed uses psychological statistical techniques like factor analysis to classify types of offender behaviors and determine which behaviors truly help distinguish different categories of criminals. Canter and his colleagues found, for example, that the classification method used for years by FBI profilers to distinguish serial killers based on how organized (i.e., methodical and premeditated) versus disorganized their crimes appeared didn’t hold up empirically. All killers display a mix of organized and disorganized behavior. Instead, Canter and colleagues discovered that it is more telling to look at the specific, distinctive ways killers interact with their victims and to pay close attention to the physical locations of the crimes. (For more on offender profiling, see Canter, 2011; also Winerman, 2004; for more on the Railway Rapist case, see Canter, 1994.)
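The statistical logic here can be sketched in miniature. The following toy example is invented for illustration (the behavior labels, the simulated data, and the two-style structure are assumptions, not drawn from Canter’s actual research); it shows how factor analysis can recover a latent dimension that distinguishes offender styles from patterns of co-occurring crime-scene behaviors:

```python
# Toy illustration of classifying offender behaviors with factor analysis.
# All behaviors and data are invented for demonstration purposes.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Columns = presence (1.0) or absence (0.0) of a hypothetical crime-scene behavior.
behaviors = ["weapon_brought", "victim_bound", "body_moved",
             "property_taken", "overkill", "scene_staged"]

# Simulate 200 solved cases drawn from two latent offender styles, each with
# a different probability of showing each behavior.
n_cases = 200
style = rng.integers(0, 2, size=n_cases)  # latent style per case
p = np.where(style[:, None] == 1,
             [0.8, 0.7, 0.6, 0.2, 0.1, 0.7],   # "planned" style
             [0.2, 0.1, 0.2, 0.7, 0.8, 0.1])   # "impulsive" style
X = (rng.random((n_cases, len(behaviors))) < p).astype(float)

# Factor analysis looks for a small number of latent dimensions that explain
# which behaviors tend to co-occur across cases.
fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(X)  # one latent score per case

# Behaviors with same-signed loadings co-occur; opposite signs mark behaviors
# that distinguish the two styles.
for name, loading in zip(behaviors, fa.components_[0]):
    print(f"{name:15s} {loading:+.2f}")
```

In this simulated data, the single recovered factor separates the “planned” behaviors from the “impulsive” ones by the sign of their loadings, which is the same kind of structure-finding that lets researchers test whether proposed offender typologies hold up empirically.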
Going Beyond Fuzzy Memories
It is frustrating when criminals remain at large. But it is truly saddening when the wrong person gets accused and put away. Since 1989, over 270 people imprisoned for serious crimes such as murder or rape have later been exonerated because of DNA evidence (see www.innocenceproject.org). This number represents just the tip of the iceberg: Only a few types of crimes leave DNA evidence that can ever be used to exonerate innocent convicts (see Kassin & Gudjonsson, 2004). The number one reason innocent people are wrongfully convicted is faulty eyewitness evidence — roughly 75 percent of DNA-exonerated individuals had been convicted because eyewitnesses made a wrong identification (Wells, Memon, & Penrod, 2006).
Eyewitnessing is all about perception and memory, and abundant research has shown just how limited perception can be in the stressful context of a crime, and how downright wrong memory can be in whatever context you care to name. University of California-Irvine psychological scientist Elizabeth Loftus is a frequent expert witness in criminal trials on the subject of how memory can be retroactively distorted by leading questions and misleading information presented after an event. In one of countless experiments on this subject, participants saw a series of photos of a man stealing a woman’s wallet and putting it in his jacket, and they later heard verbal descriptions of the event mentioning that he put the stolen wallet in his pants pocket. A significant number of participants then misremembered having actually seen the man put the wallet in his pants pocket. Numerous studies using the same basic paradigm have revealed similar effects for verbal memory and memories of faces. (For more on misinformation and its effects on memory, see Frenda, Nichols, & Loftus, 2011.)
Misinformation doesn’t have to be deliberately seeded, as it is in the laboratory. Seeing or hearing different accounts of a crime can pollute an eyewitness’s own memory, as can the questions of an interviewer. Standard police interviews consist mainly of perfunctory questions, focused on specific details, that demand only short answers and that often lead the witness (“Was the man’s shirt blue?”). Such questioning can introduce misinformation effects, which ultimately taint the most powerful evidence in the case.
The cognitive interview developed by Ronald P. Fisher (Florida International University) and R. Edward Geiselman (UCLA) incorporates more of what we know about how people really remember things and is designed to avoid misinformation effects. Witnesses are asked to reconstruct the crime from different perspectives and discouraged from guessing where their memory is uncertain. And questions are designed to be neutral (“What color was the man’s shirt?”). Extensive research has demonstrated this method’s substantial benefits over previous methods, both for obtaining useful information and for eliminating errors. (For more on interviewing techniques, see Fisher, Milne, & Bull, 2011; Wells et al., 2006.)
The Usual Suspects
So far, the cognitive interview has not been widely adopted by police departments, but a related area in which psychological research is having a positive effect is the police lineup. For decades, most police departments have conducted lineups the way we’re used to seeing them on TV: Accompanied by the detective on the case, a witness is shown a group of people — all at once, and either in the flesh or in photographs — and asked if he or she can pick out the suspect from the group. Psychological research shows that this standard lineup procedure is highly error-prone and can lead to innocent people winding up in jail.
First of all, witnesses can be biased to select someone from the group, because they assume that the likely perpetrator is among them, which is not necessarily the case — the suspect in the lineup may actually be innocent. And when the suspect and fillers are shown simultaneously, witnesses who aren’t immediately sure will compare the various individuals and may simply select the one whose characteristics best match their memory of the perpetrator (e.g., a tall, thin man). Again, the “best match” could be innocent. Plus the detective, who is intimately familiar with the case, can knowingly or unknowingly influence the witness.
Research by psychological scientist Gary L. Wells (Iowa State University) has led to the creation of alternative procedures that help minimize the danger of mistaken identification. Presenting an unspecified number of individuals (or their pictures) one by one prevents witnesses from making selections based on comparisons or making selections they are unsure of. Double-blind administration, in which the person conducting the lineup does not know who the suspect is, protects against the detective’s influence the same way it protects against the researcher’s influence in a psychology experiment.
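The relative-judgment problem can be made concrete with a toy Monte Carlo sketch. The similarity model and all numbers below are invented for illustration (this is not a model from Wells’s research): in a target-absent lineup, where everyone shown is innocent, a witness who picks the “best match” under a laxer criterion produces far more false identifications than one who judges each face absolutely, one at a time:

```python
# Toy simulation of false identifications in target-absent lineups.
# The similarity scores and thresholds are arbitrary illustrative assumptions.
import random

random.seed(42)

def run_lineups(n_trials=10_000, lineup_size=6, threshold=0.9):
    simultaneous_ids = 0
    sequential_ids = 0
    for _ in range(n_trials):
        # Everyone in the lineup is innocent; each face's resemblance to the
        # witness's memory of the perpetrator is a random score in [0, 1).
        matches = [random.random() for _ in range(lineup_size)]

        # Simultaneous: the witness compares faces to one another and picks
        # the best relative match, applying a laxer effective criterion.
        if max(matches) > threshold - 0.2:
            simultaneous_ids += 1

        # Sequential: each face is judged absolutely against memory; the
        # witness identifies someone only if a face clears the full threshold.
        if any(m > threshold for m in matches):
            sequential_ids += 1

    return simultaneous_ids / n_trials, sequential_ids / n_trials

sim_rate, seq_rate = run_lineups()
print(f"False ID rate, simultaneous (relative judgment): {sim_rate:.2f}")
print(f"False ID rate, sequential (absolute judgment):   {seq_rate:.2f}")
```

The only difference between the two branches is whether the decision is relative or absolute, yet the simultaneous procedure falsely identifies an innocent person in a substantially larger share of simulated lineups.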
Research on lineups has already led to reform. In August 2011, New Jersey’s Supreme Court changed the state’s rules for the conduct of lineups and also made it easier for defendants to challenge witness identification evidence (Goode & Schwartz, 2011). This month, the United States Supreme Court is also revisiting lineup procedures, and if the ruling is similar to the New Jersey one, it will hopefully have much broader effects on police departments across the country. (For more on lineups, see Brewer & Wells, 2011; Wells et al., 2006.)
‘I Did It’
Mistaken eyewitness evidence is number one, but the number two reason innocent people go to jail is, of all things, false confession. According to psychological scientists Saul Kassin (Williams College) and Gisli Gudjonsson (King’s College, London), 15 to 20 percent of DNA-exonerated convicts had falsely confessed. People may confess falsely out of a desire for notoriety or a wish to protect the real perpetrator, and mental disorders can also be a cause. But a big factor is the coercive way suspects are interrogated.
Coercive interrogations are the norm in the United States. The so-called Reid technique widely used by American police interrogators is an accusatory method that essentially presumes the person being interrogated is guilty. First, subjects are detained and isolated to heighten their anxiety. Then they are confronted with purported evidence of their guilt (which may be fabricated), and their denials are simply disbelieved; they will also be warned of the consequences of continuing to deny their guilt. Then the interrogator seeks to gain the subject’s trust and “minimizes” the crime, suggesting that the victim deserved it or suggesting other excuses that would let the suspect admit guilt but still save face.
Innocent suspects may end up confessing to extricate themselves from the stress of the situation or even, in some cases, because the “proof” presented to them makes them begin to doubt their own innocence (“maybe I was involved in the crime…”). Some suspects, such as people with mental disabilities, are particularly vulnerable.
Psychological scientists have made inroads toward improving things. A model called PEACE (for Preparation and Planning, Engage and Explain, Account, Closure, and Evaluation) has been adopted in the United Kingdom and other countries, though not yet in the United States. It avoids manipulation, pressure, and leading questions; police are prohibited from lying and presenting false evidence; and interrogations are videotaped to enhance police accountability. (For more on false confessions, see Gudjonsson & Pearse, 2011; Kassin & Gudjonsson, 2004.)
In the history of forensic psychological science, a reliable technique for detecting lies has long been the Holy Grail. Since antiquity, various physiological responses have been correlated with deception — changes in heart rate, for example, were linked to lying as early as the 4th century BC, and since the 19th century other responses such as changes in respiration, blood pressure, and galvanic skin response have been connected to it. But none of these physiological signs has proven to be truly telltale, nor have supposed behavioral indicators like fidgeting or a shifty gaze. Even truth-tellers can be stressed or agitated, and liars can be cool and confident.
University of Portsmouth psychological scientist Aldert Vrij and his colleagues have developed effective methods for detecting deception that involve looking at cognitive load. The principle is that liars have to keep more things straight in their heads when they are being untruthful: Besides knowing what really happened or what they really did, they have to tell a false story that masks their guilt, monitor that story for inconsistencies, and observe whether their story is being believed. Making it even more difficult to tell the story, says Vrij, can be the key to getting liars to give themselves away.
One way to add difficulty is to ask interviewees to tell their story in reverse order. Someone telling the truth should have no trouble with this, but liars must either reconstruct the events from a memorized narrative or invent them on the fly, and they’ll find the task more difficult. Another approach is to ask the interviewee to maintain eye contact with the interviewer while telling the story. Asking questions that liars won’t anticipate, such as asking for specific details about the setting, or asking the person to draw the setting, can also be highly diagnostic, as liars’ versions will generally be less detailed. (For more on verbal and nonverbal lie detection, see Vrij, Granhag, Mann, & Leal, 2011; Vrij, Granhag, & Porter, 2010.)
The forensic application of behavioral science in the future could be shaped by neuroscience methods. For example, in 1999, University of Pennsylvania psychiatrist Daniel Langleben hypothesized that the added mental effort of lying could be detected by looking at people’s brains directly. He put college students in a scanner and challenged them to lie convincingly about a playing card they had been shown, for a reward of a hefty twenty bucks. But they couldn’t do it: Lying consistently produced prefrontal brain activation that truth-telling didn’t require. The specific areas engaged by deception included the anterior cingulate cortex, which is known to activate during other mentally challenging activities.
Harvard psychological scientist Stephen Kosslyn subsequently hypothesized that different kinds of lies would look different in the brain. Indeed, spontaneous lies activated the anterior cingulate cortex, as expected, but lies that had been extensively rehearsed involved different brain areas, such as those areas involved in memory.
fMRI isn’t the only technology being tested. Jennifer Vendemia, at the University of South Carolina, uses EEG recordings of event-related potentials, and has found that lying about a visual stimulus consistently takes about 200 milliseconds longer than telling the truth about it.
So far, neuro-evidence has not held up in court — the judge in a 2010 fraud case in Tennessee ruled fMRI evidence pointing to the defendant’s honesty to be insufficiently reliable. But whether we like it or not, it seems like only a matter of time. (See Frank, 2011.)
Neuroimaging and other brain-based sources of evidence are poised to play an important role in legal issues beyond lie detection. Theoretically, neuroimaging could also be used to distinguish true memories from false ones, and it is already being submitted in criminal trials as evidence of whether individuals could have been in control of their actions when committing a crime. Evidence of brain pathology will have a direct bearing on whether defendants’ insanity pleas are accepted.
More philosophically, neuroscience may even come to touch on our very definitions of guilt and innocence. UC-Santa Barbara psychological scientist Michael Gazzaniga, a pioneer in split-brain research, notes that the brain-basis of decision making bears directly on questions of free will and responsibility (see “Neuroscience and the Law: An Unlikely Pair?” in this issue of the Observer). It is inevitable that neuroscience and psychological science more generally will play an ever greater role throughout our legal system in years to come.