Posts Tagged ‘polygraph’

First Attempt to Admit MRI Lie Detector Evidence in Court

Wednesday, March 18th, 2009

brain scan

In October, we reported that functional magnetic resonance imaging (better known as fMRI) is being touted as an honest-to-goodness lie detector. Unlike a polygraph, which requires interpretation of physical bodily reactions, an fMRI looks at real-time brain activity to see whether brain areas associated with lying are activated during any given answer.

The issue, of course, was whether such evidence would be admissible in court. Polygraphs aren’t admissible (except in New Mexico) because they’re more art than science. But fMRI is all science, and brain scans are already widely admissible at sentencing. They are now de rigueur in capital cases, and the Supreme Court based its ruling precluding execution of adolescents in part on brain scan evidence.

When we wrote about it, the issue was purely hypothetical. Nobody had yet tried to introduce such evidence in court. But now, a court in San Diego is going to have to decide that very issue.

The case is a child protection hearing. The defendant is a parent accused of committing sexual abuse. Defense counsel is seeking to introduce fMRI evidence for the purpose of proving that the defendant’s claims of innocence were not lies.

If admitted, this would be the first time fMRI evidence has been used in an American court.

The fMRI in this case was performed by a San Diego company with the somewhat uninspiring name “No Lie MRI.” The company’s name isn’t so much an issue, however, as the actual reliability of these tests on an individual basis.

Although general brain regions are known to be associated with lying, logic, decision making, and so on, their specific locations vary from person to person. Some baseline analysis would therefore be required for any individual, so that his brain activity during questioning can be compared to a valid exemplar of his own actual brain.

fMRI basically measures oxygen levels in the brain’s blood vessels. When a part of the brain is being used, that part of the brain gets more blood. Studies have indicated that, when someone lies, more blood is sent to the ventrolateral area of the prefrontal cortex.

Only a few studies have been done on how accurate fMRI is at identifying specific lies, though their figures range from 76% to 90% accuracy. (For more info, see Daniel Langleben’s paper Detection of Deception with fMRI: Are We There Yet? Mr. Langleben owns the technology licensed by No Lie MRI.) Ed Vul of MIT’s Kanwisher Lab has cautioned that fMRI data are too easy to render inaccurate: a defendant who knows what he’s doing can game the procedure.

Of course, the big challenge to the defense in this case will be establishing that fMRI lie detection is generally accepted within the relevant scientific community. As with any other novel scientific evidence, if the relevant community is defined narrowly enough, it can come in. The trick will be in determining how narrow the relevant scientific community is in this case. If it includes researchers like Mr. Vul, for example, the defense is going to have a hard time. Even Mr. Langleben, who owns the technology used here, is on record saying that not enough clinical testing has been done to establish how reliable it really is.

We predict that the evidence will not be admitted. Down the road, sure, this stuff will come in on both sides. But right now it’s too new. Courts just don’t go out on a limb for truly novel evidence like this.

And besides, they’re trying to admit it to prove the truth of the defendant’s own statement. The issue is not whether he was lying when he declared that he believed himself to be innocent, however. The issue is whether he committed the acts of which he is accused. Whether he thinks he did or not isn’t really the point. It might be relevant at the sentencing phase of a criminal trial, but not at the fact-finding phase here.

Thought Police?

Monday, October 20th, 2008

brain scan

Guilt or innocence, one might say, is all in the mind. After all, there are very few crimes that can be committed without the requisite mens rea, or mental state. If we’re going to punish someone, their acts cannot have been a mere accident. We want to know that they had some knowledge that their actions could cause harm, and we want that awareness to have been sufficiently high to warrant punishment.

The standard criminal levels of mens rea are “negligence” (you ought to have known bad things could happen), “recklessness” (you knew there was a good chance bad things would happen, and acted anyway), “knowledge” (you knew bad things were practically certain to happen), and “intent” or “purpose” (you wanted bad things to happen). If your foot kicks someone in the ribs while you’re falling downstairs, you’re not a criminal. But if you kick someone in the ribs because you don’t like them, then society probably wants to punish you.

We cannot know what anyone was thinking when they did something, however. So we rely on jurors to use their common sense to figure out what an accused must have been thinking at the time.

In recent years, however, there have been enormous advances in neuroscience. Brain scans, the software that processes the data, and the underlying science have reached levels that would have been considered science fiction as recently as the Clinton years. Experts in the field can see not only how the brain is put together, but also what an individual brain is doing in real time. Experimental data show, in good detail, which parts of the brain are active when people are thinking certain things.

Functional magnetic resonance imaging (fMRI), in particular, can act as a super lie-detector. Instead of measuring someone’s perspiration and heart rate while they answer questions during a polygraph exam, fMRI looks at actual real-time brain activity in areas having to do with logic, making decisions, perhaps even lying. Experimental data from large groups are pretty good at identifying which parts of the brain are associated with different kinds of thinking.

Every brain is slightly different, of course. Brain surgeons have to learn the individual brain they’re operating on before they start cutting. General group data therefore don’t translate to an individual person 100%, so any lie-detector use of fMRI would require some baseline analysis before proceeding to the important questions.

The issue is whether it will be admissible in court. Polygraph tests generally aren’t admissible, because they’re more an art than a science. But fMRI is all science. In addition, brain scans are already widely admissible for the purpose of reducing a sentence because the defendant had damage to his brain. As forensic neuroscience expert Daniel Martell told the New York Times in 2007, brain scans are now de rigueur in capital cases. In Roper v. Simmons, the Supreme Court, ruling that adolescents cannot be executed, allowed brain scan evidence for the purpose of showing that adolescent brains really are different.

Outside the United States, brain scans have already begun to be used by the prosecution to show guilt. In India, a woman was recently convicted of murdering her ex-boyfriend in a trial that admitted brainwave-based lie-detection evidence. There was other evidence of guilt as well, including the fact that she admitted buying the poison that killed him. But the brainwave analysis was admitted all the same.

There are deeper policy issues here. Is reading someone’s brain activity more like taking a blood sample, or more like taking a statement? The Miranda rule exists, at heart, because we do not want the government to override people’s free will and force them to incriminate themselves out of their own mouths. That’s why the fruits of a custodial interrogation are presumed inadmissible unless the defendant first knowingly waives his rights against self-incrimination. And because the DNA in your blood isn’t something you produce of your own free will, taking a blood sample over your objection does not force you to incriminate yourself.

So is a brain scan more like a blood sample? Is it simply taking evidence of what is there, without you consciously providing testimony against yourself? Or will it require the knowing waiver of your Fifth and Sixth Amendment rights before it can be applied?

We’re interested in your thoughts. Feel free to comment.