Posts Tagged ‘scientific evidence’

Not Ready for Prime Time: Brain-Scan Reliability in Question

Tuesday, March 13th, 2012

Almost from our first post, we’ve written here about developments in brain-scan technology and its applicability to criminal law (see here, here, here and here, for example). So needless to say, the past nine days have been of great interest, as the research behind neuroimaging’s claims has come into hot dispute.

Now, just because our motto is “truth, justice and the scientific method,” that doesn’t make us qualified to assess the merits of the underlying science. Our observations on the actual science wouldn’t be worth the pixels. But fortunately, as with most such disputes, the issue isn’t so much the data as the math — the statistical analysis being used to make sense of the data. And we’re somewhat confident that we can at least report on such issues without getting them too wrong.

So briefly what’s going on is this:

First, many neuroimaging papers out there, some very influential, report apparent connections between brain activity at point X and mental state A. But what are the odds your reading of X was just a fluke, and the real spot is somewhere else, over at Z? If you do enough tests, you’re going to see X every now and then just by chance. So you have to figure out the chances that X is a random result rather than the real thing, and apply that correction to your statistical analysis. As it happens, however, for a long time the neuroimaging folks weren’t using an accurate correction. Instead, they were applying a lax rule of thumb that didn’t really apply. It’s since been shown that the lax math can produce apparent connections to variables that didn’t even exist at the time.
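To make the problem concrete, here is a toy simulation of our own (the numbers are made up; this is not the analysis from any of the papers in dispute). It runs tens of thousands of “voxel” tests on pure noise and counts how many clear the conventional significance threshold, with and without a simple Bonferroni-style correction:

```python
# Toy illustration (not the actual studies at issue) of why uncorrected
# voxel-by-voxel testing produces spurious "activations" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_voxels = 50_000   # hypothetical number of independent tests (real scans have more, and they're correlated)
n_subjects = 20     # hypothetical sample size
alpha = 0.05        # conventional per-test threshold

# Pure noise: no voxel is genuinely related to the mental state being studied.
data = rng.normal(size=(n_voxels, n_subjects))

# Test each voxel's mean activation against zero.
t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

uncorrected_hits = np.sum(p < alpha)
bonferroni_hits = np.sum(p < alpha / n_voxels)   # a simple (conservative) correction

print(f"Spurious 'findings' with no correction: {uncorrected_hits} "
      f"(about {alpha * n_voxels:.0f} expected by chance)")
print(f"Spurious 'findings' after Bonferroni correction: {bonferroni_hits}")
```

Run it and roughly five percent of the pure-noise tests, a couple of thousand of them, come back looking “significant” without the correction; with it, essentially none survive. That is the fluke-at-X problem in miniature.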

On top of all that, as neuroscientist Daniel Bor mentions in his excellent (and much more detailed) discussion here, there’s reason to suspect that (more…)

A Neat Primer on Neuroscience and Criminal Law

Monday, November 7th, 2011

 

One of our favorite topics here at the Criminal Lawyer has been the interaction of brain science and criminal law. So it’s with a pleased tip of the hat to Mark Bennett that we have the video linked above, an excellent summary of modern neuroscience as it applies to deep policies of our jurisprudence: culpability, free will, the purposes of punishment, and how (or whether) to punish. The lecture was given about a year and a half ago by David Eagleman, a neuroscientist with a gift for explaining the stuff to non-scientists like us.

Most popularized science is weighed down with histories of how we got here, rather than discussions of where “here” is and where we might be going next. That background is necessary, but unlike most popularizers, Eagleman covers it in just the first half of the lecture rather than the usual first 80%. So if you want to cut to the chase, you can skip to around the 15-minute mark. We enjoyed watching it all the way through, however. Once he gets going, he neatly and clearly presents ideas that many should find challenging, not because they undermine criminal jurisprudence, but because they challenge much that it merely presumes.

One particularly challenging idea of his is that the more we understand about how the brain works, and especially about how small a role free will plays in our actions, the less focused on culpability the law should be. Rather than asking whether an individual was responsible for a criminal act, the law should instead care about his future risk to society. If he’s going to be dangerous, confine him to protect us from him, rather than as retroactive punishment for a crime that may never happen again. The actuarial data are getting strong enough to identify reasonably accurate predictors of recidivism, so why not focus on removing the likely recidivists and rehabilitating the rest?
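For a sense of what an actuarial instrument of that sort looks like, here is a purely hypothetical sketch (the factors and point values are invented for illustration and are not drawn from any validated tool): such instruments typically assign points to a few static factors and map the total to a risk band.

```python
# Purely hypothetical sketch of an actuarial-style risk score. The factors
# and point values are invented for illustration and are NOT a validated
# instrument; real tools are built and calibrated from recidivism data.
from dataclasses import dataclass

@dataclass
class OffenderRecord:
    prior_convictions: int
    age_at_release: int
    prior_violent_offense: bool

def risk_points(r: OffenderRecord) -> int:
    """Add up points for each (hypothetical) risk factor."""
    points = 0
    points += min(r.prior_convictions, 5)        # more priors, more points (capped)
    points += 2 if r.age_at_release < 25 else 0  # youth at release
    points += 2 if r.prior_violent_offense else 0
    return points

def risk_band(points: int) -> str:
    """Map a point total to a coarse risk category."""
    if points <= 2:
        return "lower risk"
    if points <= 5:
        return "moderate risk"
    return "higher risk"

record = OffenderRecord(prior_convictions=3, age_at_release=22, prior_violent_offense=False)
print(risk_band(risk_points(record)))   # prints "moderate risk" for this made-up record
```

How well scores like these actually predict reoffense is an empirical question; whether they should drive confinement at all is a separate question of justice.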

Of course, as we mentioned the other day, there’s an inherent injustice when you punish someone for acts they have not yet committed, just because there’s a statistical chance that they might do so at some point in the future. That kind of penalty must be reserved for those who have actually demonstrated themselves to be incorrigible, those who reoffend as soon as they get the chance. Punishment must always be backwards-looking, based on what really happened, and not on what may come to pass.

We have quibbles with some other points he makes, as we always do when people from other disciplines discuss the policy underpinnings of criminal jurisprudence. But on the whole, this is a worthwhile watch, and we’d like to hear what you think of it.

Using Neuroscience to Gauge Mens Rea?

Monday, October 31st, 2011

Over at Edge, in a short video, we get an intriguing look at criminal justice from the perspective of neurological science.

Put all this together, as you can see here, and we discover little areas that are brighter than others. And this is all now easily done, as everyone knows, in brain imaging labs. The specificity of actually combining the centers (where information gets processed) with the actual wiring to those centers has been a very recent development, such that it can be done in humans in vivo, which is to say, in your normal college sophomore. We can actually locate their brain networks, their paths: whether they have a certain kind of connectivity, whether they don’t, and whether there may be an abnormality in them, which leads to some kind of behavioral clinical syndrome.

In terms of the Neuroscience and Justice Program, all this leads to the fact that that’s the defendant. And how is neuroscience supposed to pull this stuff together and speak to whether someone is less culpable because of a brain state?

Then you say, well, okay, fine. But then you go a little deeper and you realize, well, this brain is a very complicated thing. It works on many layers from molecules up to the cerebral cortex; it works on different time scales; it’s processing with high frequency information, low frequency information. All of this is, in fact, then changing on a background of aging and development: The brain is constantly changing.

How do you tie this together to capture what someone’s brain state might be at a particular time when a criminal act was performed? And I should have said it more clearly — most of this project was carried out asking, “Is there going to be neuroscience evidence that’s going to make various criminal defendants less culpable for their crime?”

Well, probably not. Even if this were to become reality — which it isn’t, yet — the whole focus of mens rea culpability is what the defendant’s mental state was at the time he committed the act. Even if police officers were equipped with infallible handheld brain scanners, so they could get a mental reading at the moment of arrest (and oh, the fascinating Fourth Amendment issues there!), the moment of the crime is past. The reading is not evidence of what the brain was doing five days ago, or even five minutes ago.

And at any rate, it’s not usable science yet. So why bother thinking about it now?

To his credit, the speaker, neuroscientist Michael Gazzaniga, admits as much.

Now, the practicing lawyer asks “is this thing useful, can we use it tomorrow? Can we use it the next day? Can’t? Out. Next problem.” So, after four years of this I realize, look, the fact of the matter is that from a scientific point of view, the use of sophisticated neuroscientific information in the courtroom is problematic at the present.

But then he says “it will be used in powerful ways in our lifetime.” What powerful ways? Mainly the ability to show that someone simply couldn’t have thought a certain way, because his brain doesn’t work that way. This defendant shouldn’t be punished like a normal adult, because his brain isn’t wired like a normal adult, and he could not have had the same mens rea as one would otherwise expect under the circumstances. Research is showing that children and teenagers are wired differently, as well, which could affect juvenile justice.

That’s useful for the defense. It could be a valuable tool in raising defenses showing that mens rea was lacking, because it couldn’t have existed. It’s not much use to prosecutors, beyond showing that the requisite mental state was just as theoretically possible for this defendant as for any normal human, which is sort of presumed for everyone anyway. So yay for science.

Another way it’s expected to be useful, however, is in preventing future crimes. Stopping the next mass murderer before he actually starts shooting kids on campus and whatnot. Of course, we immediately get creeped out the second anyone (more…)

Lie-Detecting MRI to be Used at Trial?

Thursday, May 6th, 2010

[Image: brain scan]

We’ve written about the lie-detector uses of fMRI exams before (see here and here).

Now it looks like Brooklyn attorney David Zevin is trying to get it introduced for the first time in a real-life court case. (The previous attempt, aimed at using it during sentencing in a San Diego case, was later withdrawn.) It’s an employer-retaliation case, which has devolved into a “he-said/she-said stalemate.” Zevin’s client says she stopped getting good assignments after she complained about sexual harassment. A co-worker says he heard the supervisor give the order to pull her assignments, and the supervisor says he never did. So at Zevin’s request, the co-worker underwent an fMRI to see if he’s telling the truth when he says he heard that order.

Needless to say, there is opposition to letting this kind of evidence come in. There’s a pretty good discussion of the whole thing, believe it or not, over at Wired.

(H/T Neatorama)

[P.S. - We were almost about to type something like "We find ourselves strangely attracted to these kinds of stories. But we understand if you may be repulsed." Fortunately, we have refrained from doing anything like that. You're welcome.]

First Attempt to Admit MRI Lie Detector Evidence in Court

Wednesday, March 18th, 2009

[Image: brain scan]

In October, we reported that functional magnetic resonance imaging (better known as fMRI) is being touted as an honest-to-goodness lie detector. Unlike a polygraph, which requires interpretation of physical bodily reactions, an fMRI looks at real-time brain activity to see whether brain areas associated with lying are activated during any given answer.

The issue, of course, was whether such evidence would be admissible in court. Polygraphs aren’t admissible (except in New Mexico) because they’re more art than science. But fMRI is all science, and brain scans are already widely admissible at sentencing. They are now de rigueur in capital cases, and the Supreme Court’s ruling precluding the execution of adolescents rested in part on evidence of adolescent brain development.

When we wrote about it, the issue was purely hypothetical. Nobody had yet tried to introduce such evidence in court. But now, a court in San Diego is going to have to decide that very issue.

The case is a child protection hearing. The defendant is a parent accused of committing sexual abuse. Defense counsel is seeking to introduce fMRI evidence for the purpose of proving that the defendant’s claims of innocence were not lies.

If admitted, this will be the first time fMRI evidence will be used in an American court.

The fMRI in this case was performed by a San Diego company with the somewhat uninspiring name “No Lie MRI.” The company’s name isn’t so much an issue, however, as the actual reliability of these tests on an individual basis.

Although general regions are known to be associated with lying, logic, decision making, etc., their specific location in each individual varies. So some baseline analysis would be required for any person, so that his brain activity during questioning can be compared to a valid exemplar of his own actual brain.

fMRI basically measures oxygen levels in the brain’s blood vessels. When a part of the brain is being used, that part of the brain gets more blood. Studies have indicated that, when someone lies, more blood is sent to the ventrolateral area of the prefrontal cortex.
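At bottom, then, the test is a comparison between conditions. Here is a minimal, purely illustrative sketch of that logic (the signal values are invented, and this is not No Lie MRI’s actual method): average region-of-interest signal during “lie” answers is compared against “truth” answers with a simple t-test.

```python
# Toy illustration only: compares made-up region-of-interest signal levels
# recorded during "lie" answers versus "truth" answers. This is not No Lie
# MRI's actual procedure, just the basic logic of a condition comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical BOLD signal averages for one subject's ventrolateral
# prefrontal region, one value per answer (the numbers are invented).
truth_trials = rng.normal(loc=100.0, scale=2.0, size=30)
lie_trials = rng.normal(loc=101.5, scale=2.0, size=30)   # assume slightly elevated signal when lying

t, p = stats.ttest_ind(lie_trials, truth_trials)
print(f"t = {t:.2f}, p = {p:.4f}")
print("Region more active during 'lie' answers" if p < 0.05 and t > 0
      else "No reliable difference detected")
```

Even in this toy version, the need for the individual baseline mentioned above is plain: without the subject’s own “truth” answers to compare against, the raw numbers mean nothing.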

Only a few studies have been done on how accurate fMRI is at identifying specific lies, and their accuracy figures range from 76% to 90%. (For more info, see Daniel Langleben’s paper Detection of Deception with fMRI: Are we there yet? Mr. Langleben owns the technology licensed by No Lie MRI.) Ed Vul of MIT’s Kanwisher Lab told Wired.com that fMRI data are too easy to render inaccurate, because a defendant who knows what he’s doing can game the procedure.

Of course, the big challenge to the defense in this case will be establishing that fMRI lie detection is generally accepted within the relevant scientific community. As with any other novel scientific evidence, if the relevant community is defined narrowly enough, it can come in. The trick lies in determining how narrow the relevant scientific community is in this case. If it includes researchers like Mr. Vul, for example, the defense is going to have a hard time. Even Mr. Langleben, who owns the technology used here, is on record saying that not enough clinical testing has been done to establish how reliable it really is.

We predict that the evidence will not be admitted. Down the road, sure, this stuff will come in on both sides. But right now it’s too new. Courts just don’t go out on a limb for truly novel evidence like this.

And besides, they’re trying to admit it to prove the truth of the defendant’s own statement. The issue is not whether he was lying when he declared that he believed himself to be innocent, however. The issue is whether he committed the acts of which he is accused. Whether he thinks he did or not isn’t really the point. It might be relevant at the sentencing phase of a criminal trial, but not at the fact-finding phase here.