Posts Tagged ‘fmri’

More on Brain Scans – Can They Tell Whether You’ll Get Off Lightly?

Tuesday, April 3rd, 2012

With a hat tip to our Uncle Ralph, here’s a link to yet another fMRI study bearing on criminal law. Makiko Yamada and colleagues have published in Nature Communications their study “Neural Circuits in the Brain that are Activated when Mitigating Criminal Sentences.”

The researchers asked people to review the facts underlying 32 hypothetical murder convictions. Half of the scenarios were designed to elicit sympathy for the convicted murderer, and the other half to elicit no sympathy. The test subjects were told that each murderer had been given a 20-year sentence, and they were asked to modify the sentences. Unlike in previous studies, there was no question of guilt or innocence; the only issue was whether the sentence should be more or less than 20 years under the circumstances. A functional MRI scanned the subjects’ brains to see which regions were active as they made their decisions.

The question intrigued the researchers not only because such decisions are high-stakes, but also because one must first have an emotional reaction, and then convert it into a cold quantification: the number of years of the sentence.

When the researchers crunched all the numbers, there appeared to be a strong correlation between activity in particular brain regions and reduced sentences.
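
For the statistically inclined, the basic move is ordinary correlation: for each scenario, see whether measured activity in a region of interest tracks how far below 20 years the subjects set the sentence. Here’s a minimal sketch of that idea, with made-up numbers of our own; it is not the study’s actual data or analysis pipeline.

    # Toy sketch of the correlation idea (made-up numbers, not the
    # study's data): does region-of-interest activity track how much
    # subjects reduced the 20-year baseline sentence?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_scenarios = 32

    # Hypothetical per-scenario activation in a region of interest.
    activity = rng.normal(size=n_scenarios)
    # Sentence reduction in years: loosely tracks activity, plus noise.
    reduction_years = 2.0 * activity + rng.normal(scale=1.5, size=n_scenarios)

    r, p = stats.pearsonr(activity, reduction_years)
    print(f"r = {r:.2f}, p = {p:.4f}")  # a strong positive correlation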

To their credit, the researchers really don’t conclude any more than that: certain brain areas seem to be involved in decision-making influenced by sympathy, and someone who’s more likely to be sympathetic is also more likely to show more activity in those regions.

But they do note that this raises other questions — such as to what extent (more…)

Not Ready for Prime Time: Brain-Scan Reliability in Question

Tuesday, March 13th, 2012

Almost from our first post, we’ve written here about developments in brain-scan technology and its applicability to criminal law (see here, here, here and here, for example). So needless to say, the past nine days have been of great interest, as the research behind neuroimaging’s claims has come into hot dispute.

Now, just because our motto is “truth, justice and the scientific method,” that doesn’t make us qualified to assess the merits of the underlying science. Our observations on the actual science wouldn’t be worth the pixels. But fortunately, as with most such disputes, the issue isn’t so much the data as the math — the statistical analysis being used to make sense of the data. And we’re somewhat confident that we can at least report on such issues without getting them too wrong.

So, briefly, what’s going on is this:

First, lots of neuroimaging papers out there, some very influential, report apparent connections between brain activity at point X and mental state A. But what are the odds that your reading at X was just a fluke, and the real spot is somewhere else, over at Z? An fMRI analysis runs thousands of statistical tests, one for each little chunk of the brain, and if you run enough tests you’re going to get an apparent hit every now and then just by chance. So you have to figure out the chances that X was a random result instead of the real thing, and apply that correction to your statistical analysis. As it happens, however, for a long time the neuroimaging folks weren’t using an accurate correction. Instead, they were applying a lax rule of thumb that didn’t really fit. It’s since been shown that the lax math can produce apparent connections even in pure noise, where there is nothing real to find.
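
To see why the correction matters, here’s a toy simulation of our own (not anything from the disputed papers): test tens of thousands of “voxels” of pure noise, and count how many cross the usual 5% significance threshold with and without a correction for multiple comparisons.

    # Toy simulation of the multiple-comparisons problem. Every "voxel"
    # here is pure noise, so any voxel that looks "active" is a fluke.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_voxels, n_subjects = 50_000, 20

    data = rng.normal(size=(n_voxels, n_subjects))       # no real signal anywhere
    t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)  # one test per voxel

    alpha = 0.05
    print("Uncorrected hits:", int((p < alpha).sum()))                      # ~2,500 flukes
    print("Bonferroni-corrected hits:", int((p < alpha / n_voxels).sum()))  # ~0

Bonferroni is the bluntest of several corrections, but it makes the point: skip the correction, and thousands of voxels “light up” by chance alone.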

On top of all that, as neuroscientist Daniel Bor mentions in his excellent (and much more detailed) discussion here, there’s reason to suspect that (more…)

A Neat Primer on Neuroscience and Criminal Law

Monday, November 7th, 2011

 

One of our favorite topics here at the Criminal Lawyer has been the interaction of brain science and criminal law. So it’s with a pleased tip of the hat to Mark Bennett that we have the video linked above, an excellent summary of modern neuroscience as it applies to deep policies of our jurisprudence: culpability, free will, the purposes of punishment, and how (or whether) to punish. The lecture was given about a year and a half ago by David Eagleman, a neuroscientist with a gift for explaining the stuff to non-scientists like us.

Most popularized science is weighed down with histories of how we got here, rather than discussions of where “here” is and where we might be going next. Covering that history is a necessity, but unlike most popularizers, Eagleman manages to get through it in the first half of the lecture, rather than the more usual first 80%. So if you want to cut to the chase, you can skip to around the 15-minute mark. We enjoyed watching it all the way through, however. Once he gets going, he neatly and clearly presents ideas that many should find challenging, not because they undermine criminal jurisprudence, but because they challenge much that it merely presumes.

One particularly challenging idea of his is that the more we understand about how the brain works, and especially about how small a role free will may play in our actions, the less the law should focus on culpability. Rather than asking whether an individual was responsible for a criminal act, the law should instead care about his future risk to society. If he’s going to be dangerous, then put him in jail to protect us from him, rather than as retroactive punishment for a crime that may never happen again. The actuarial data are getting strong enough to identify reasonably accurate predictors of recidivism, so why not focus on removing the likely recidivists and rehabilitating the rest?

Of course, as we mentioned the other day, there’s an inherent injustice when you punish someone for acts they have not yet committed, just because there’s a statistical chance that they might do so at some point in the future. That kind of penalty must be reserved for those who have actually demonstrated themselves to be incorrigible, those who reoffend as soon as they get the chance. Punishment must always be backwards-looking, based on what really happened, and not on what may come to pass.

We have quibbles with some other points he makes, as we always do when people from other disciplines discuss the policy underpinnings of criminal jurisprudence. But on the whole, this is a worthwhile watch, and we’d like to hear what you think of it.

Lie-Detecting MRI to be Used at Trial?

Thursday, May 6th, 2010

[Image: brain scan]

We’ve written about the lie-detector uses of fMRI exams before (see here and here).

Now it looks like Brooklyn attorney David Zevin is trying to get it introduced for the first time in a real-life court case. (The previous attempt, in the San Diego child-protection case we wrote about at the time, was later withdrawn.) It’s an employer-retaliation case that has devolved into a “he-said/she-said stalemate.” Zevin’s client says that, after she complained of sexual harassment, her supervisor ordered that she stop getting good assignments. A co-worker says he heard the supervisor give that order; the supervisor says he never did. So at Zevin’s request, the co-worker underwent an fMRI to see if he’s telling the truth when he says he heard that order.

Needless to say, there is opposition to letting this kind of evidence come in. There’s a pretty good discussion of the whole thing, believe it or not, over at Wired.

(H/T Neatorama)

[P.S. – We were almost about to type something like “We find ourselves strangely attracted to these kinds of stories. But we understand if you may be repulsed.” Fortunately, we have refrained from doing anything like that. You’re welcome.]

First Attempt to Admit MRI Lie Detector Evidence in Court

Wednesday, March 18th, 2009

[Image: brain scan]

In October, we reported that functional magnetic resonance imaging (better known as fMRI) is being touted as an honest-to-goodness lie detector. Unlike a polygraph, which requires interpretation of physical bodily reactions, an fMRI looks at real-time brain activity to see whether brain areas associated with lying are activated during any given answer.

The issue, of course, was whether such evidence would be admissible in court. Polygraphs aren’t admissible (except in New Mexico) because they’re more art than science. But fMRI is all science, and brain scans are already widely admissible at sentencing. They are now de rigueur in capital cases, and in ruling that adolescents cannot be executed, the Supreme Court considered brain-scan evidence showing that adolescent brains really are different.

When we wrote about it, the issue was purely hypothetical. Nobody had yet tried to introduce such evidence in court. But now, a court in San Diego is going to have to decide that very issue.

The case is a child protection hearing. The defendant is a parent accused of committing sexual abuse. Defense counsel is seeking to introduce fMRI evidence for the purpose of proving that the defendant’s claims of innocence were not lies.

If admitted, this would be the first time fMRI evidence has been used in an American court.

The fMRI in this case was performed by a San Diego company with the somewhat uninspiring name “No Lie MRI.” The company’s name isn’t so much an issue, however, as the actual reliability of these tests on an individual basis.

Although general brain regions are known to be associated with lying, logic, decision-making and the like, their specific locations vary from individual to individual. So some baseline analysis would be required for each person, so that his brain activity during questioning can be compared to a valid exemplar of his own brain.

fMRI basically measures oxygen levels in the brain’s blood vessels (the so-called BOLD signal). When a part of the brain is being used, that part gets more oxygenated blood. Studies have indicated that, when someone lies, more blood is sent to the ventrolateral area of the prefrontal cortex.
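
As a rough illustration of what that baseline comparison might look like, here’s a toy sketch of our own devising; it is hypothetical, and not No Lie MRI’s actual method. The idea is to ask how far a region’s signal during an answer sits above that same subject’s own baseline scans.

    # Hypothetical per-subject baseline comparison (our illustration,
    # not No Lie MRI's actual method): z-score a region's signal during
    # an answer against the same subject's own baseline distribution.
    import numpy as np

    def baseline_zscore(baseline, answer):
        """Standard deviations by which the answer-period mean exceeds
        the subject's own baseline mean."""
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        return (answer.mean() - mu) / sigma

    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=100.0, scale=2.0, size=200)  # calibration scans
    answer = rng.normal(loc=103.0, scale=2.0, size=20)     # scans during one answer

    print(f"z = {baseline_zscore(baseline, answer):.1f}")  # well above baseline here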

Only a few studies have been done on how accurate fMRI is at identifying specific lies, and their figures range from 76% to 90% accuracy. (For more info, see Daniel Langleben’s paper “Detection of Deception with fMRI: Are we there yet?” Mr. Langleben owns the technology licensed by No Lie MRI.) And Ed Vul of MIT’s Kanwisher Lab told Wired.com that fMRI lie detection is too easy to defeat: a defendant who knows what he’s doing can game the procedure.

Of course, the big challenge for the defense in this case will be establishing that fMRI lie detection is generally accepted within the relevant scientific community, as California’s Kelly-Frye standard requires. As with any other novel scientific evidence, if the relevant community is defined narrowly enough, it can come in. The trick lies in determining how narrow the relevant scientific community is in this case. If it includes researchers like Mr. Vul, for example, the defense is going to have a hard time. Even Mr. Langleben, who owns the technology used here, is on record saying that not enough clinical testing has been done to establish how reliable it really is.

We predict that the evidence will not be admitted. Down the road, sure, this stuff will come in on both sides. But right now it’s too new. Courts just don’t go out on a limb for truly novel evidence like this.

And besides, they’re trying to admit it to prove the truth of the defendant’s own statement. But the issue is not whether he was lying when he declared that he believed himself to be innocent; the issue is whether he committed the acts of which he is accused. Whether he thinks he did or not isn’t really the point. It might be relevant at the sentencing phase of a criminal trial, but not at the fact-finding phase here.

Thought Police?

Monday, October 20th, 2008

[Image: brain scan]

Guilt or innocence, one might say, is all in the mind. After all, there are very few crimes that can be committed without the requisite mens rea, or mental state. If we’re going to punish someone, their acts cannot have been mere accident. We want to know that they had some awareness that their actions could cause harm, and we want that awareness to be high enough to warrant punishment.

The standard criminal levels of mens rea are “negligence” (you should have known bad things could happen), “recklessness” (you knew there was a substantial risk of bad things happening, and disregarded it), “knowledge” (you knew bad things were practically certain to happen), and “intent” or “purpose” (you wanted bad things to happen). If your foot kicks someone in the ribs while you’re falling downstairs, you’re not a criminal. But if you kick someone in the ribs because you don’t like them, then society probably wants to punish you.

We cannot know what anyone was thinking when they did something, however. So we rely on jurors to use their common sense to figure out what an accused must have been thinking at the time.

In recent years, however, there have been enormous advances in neuroscience. Brain scans, the software that processes the data, and good science have reached levels that would have been considered science fiction as recently as the Clinton years. Experts in the field can see not only how the brain is put together, but also what an individual brain is doing in real time. Experimental data show, in good detail, which parts of the brain are active when people are thinking certain things.

Functional magnetic resonance imaging (fMRI), in particular, can act as a super lie-detector. Instead of measuring someone’s perspiration and heart rate while they answer questions, as a polygraph exam does, fMRI looks at actual real-time brain activity in areas having to do with logic, decision-making, perhaps even lying. Experimental data from large groups are pretty good at identifying which parts of the brain are associated with different kinds of thinking.

Every brain is slightly different, of course; brain surgeons have to learn the individual brain they’re operating on before they start cutting. General group data therefore don’t translate to an individual person 100%, so any lie-detector use of fMRI would require some baseline analysis before proceeding to the important questions.

The issue is whether it will be admissible in court. Polygraph tests generally aren’t admissible, because they’re more art than science. But fMRI is all science. In addition, brain scans are already widely admissible for the purpose of reducing a sentence where the defendant had damage to his brain. As forensic neuroscience expert Daniel Martell told the New York Times in 2007, brain scans are now de rigueur in capital cases. And in Roper v. Simmons, where it ruled that adolescents cannot be executed, the Supreme Court considered brain-scan evidence for the purpose of showing that adolescent brains really are different.

Outside the United States, brain scans have already begun to be used by the prosecution to show guilt. In India, a woman was recently convicted of murdering her ex-boyfriend after brainwave-based lie-detection evidence was admitted against her. There was other evidence of guilt as well, including the fact that she admitted buying the poison that killed him. But the brainwave analysis came in all the same.

There are deeper policy issues here. Is reading someone’s brain activity more like taking a blood sample, or more like taking a statement? The Miranda rule exists, at heart, because we do not want the government to override people’s free will and force them to incriminate themselves out of their own mouths. That’s why the fruits of a custodial interrogation are presumed inadmissible unless the defendant first knowingly waives his rights against self-incrimination. And because the DNA in your blood isn’t something you create by an act of will, the government does not force you to incriminate yourself when it takes a blood sample, even over your objection.

So is a brain scan more like a blood sample? Is it simply taking evidence of what is there, without you consciously providing testimony against yourself? Or will it require the knowing waiver of your Fifth and Sixth Amendment rights before it can be applied?

We’re interested in your thoughts. Feel free to comment.