Posts Tagged ‘brain science’

More on Brain Scans – Can They Tell Whether You’ll Get Off Lightly?

Tuesday, April 3rd, 2012

With a hat tip to our Uncle Ralph, here’s a link to yet another fMRI study bearing on criminal law. Makiko Yamada and colleagues have published in Nature Communications their study “Neural Circuits in the Brain that are Activated when Mitigating Criminal Sentences.”

The researchers asked people to review the facts underlying 32 hypothetical murder convictions. Half were designed to elicit sympathy for the convicted murderer, and the other half to elicit none. The test subjects were told that each murderer had been given a 20-year sentence, and they were asked to modify the sentences. Unlike in previous studies, guilt or innocence was not in question; the only issue was whether the sentence should be more or less than 20 years under the circumstances. A functional MRI scanner recorded their brain activity as they made their decisions.

The question intrigued the researchers not only because such decisions are high-stakes, but also because one must first have an emotional reaction and then convert it into a cold quantification: the number of years of the sentence.

After crunching all the numbers, there appeared to be a strong correlation between activity in certain portions of the brain and reduced sentences.
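For the curious, here is roughly what that kind of claim cashes out to in practice: one activity number and one sentence adjustment per scenario, and a correlation between the two. The values below are invented purely for illustration; they are not the study’s data.

```python
# Minimal sketch: correlating (hypothetical) brain-region activity with
# sentence reductions across scenarios. All numbers are invented for
# illustration; they are not data from the Yamada study.
import numpy as np
from scipy import stats

# One value per sympathy-eliciting scenario: mean fMRI signal change (%)
# in a region of interest, and how many years the subject shaved off
# the 20-year baseline sentence.
activation    = np.array([0.8, 1.2, 0.5, 1.6, 0.9, 1.4, 0.7, 1.1])
years_reduced = np.array([2,   5,   1,   7,   3,   6,   2,   4  ])

r, p = stats.pearsonr(activation, years_reduced)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A large positive r is the kind of result being reported: more activity
# in the region goes along with a larger reduction in the sentence.
```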

To their credit, the researchers really don’t conclude much more than that: certain brain areas seem to be involved in decision-making influenced by sympathy, and someone who’s more inclined to sympathy is also more likely to show more activity in those areas.

But they do note that this raises other questions — such as to what extent (more…)

Not Ready for Prime Time: Brain-Scan Reliability in Question

Tuesday, March 13th, 2012

Almost from our first post, we’ve written here about developments in brain-scan technology and its applicability to criminal law (see here, here, here and here, for example). So needless to say, the past nine days have been of great interest, as the research behind neuroimaging’s claims has come into hot dispute.

Now, just because our motto is “truth, justice and the scientific method,” that doesn’t make us qualified to assess the merits of the underlying science. Our observations on the actual science wouldn’t be worth the pixels. But fortunately, as with most such disputes, the issue isn’t so much the data as the math — the statistical analysis being used to make sense of the data. And we’re somewhat confident that we can at least report on such issues without getting them too wrong.

So briefly what’s going on is this:

First, lots of neuroimaging papers out there, some very influential, report apparent connections between brain activity at point X and mental state A. But what are the odds your reading of X was just a fluke, and the real spot is somewhere else, over at Z? If you run enough statistical tests, some spot is going to light up every now and then just by chance. So you have to figure out the chances that X is a random result rather than the real thing, and correct your analysis for the sheer number of comparisons you made. As it happens, however, for a long time the neuroimaging folks weren’t using an accurate correction. Instead, they were applying a lax rule of thumb that didn’t really apply. It’s since been shown that using the lax math can produce apparent connections to variables that didn’t even exist at the time.
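If you want to see the problem for yourself, a few lines of simulation make the point. Here we test thousands of “voxels” that contain nothing but random noise; at an uncorrected p < 0.05 threshold, hundreds of them look “active” anyway, while correcting for the number of comparisons (a simple Bonferroni correction in this sketch, just to illustrate the idea) makes nearly all of them go away.

```python
# Sketch of the multiple-comparisons problem: pure-noise "voxels" look
# active at an uncorrected p < 0.05 threshold just by chance. This is an
# illustration of the statistical point, not a re-analysis of any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_subjects = 10_000, 20

# Pure noise: no voxel carries any real signal at all.
data = rng.normal(size=(n_voxels, n_subjects))

# One-sample t-test per voxel against zero.
t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

uncorrected = np.sum(p < 0.05)
corrected = np.sum(p < 0.05 / n_voxels)  # Bonferroni: divide by number of tests

print(f"'Active' voxels, uncorrected:          {uncorrected}")  # roughly 500
print(f"'Active' voxels, Bonferroni-corrected: {corrected}")    # roughly 0
```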

On top of all that, as neuroscientist Daniel Bor mentions in his excellent (and much more detailed) discussion here, there’s reason to suspect that (more…)

The Science of Ethical Relativism?

Wednesday, November 9th, 2011

If you’re looking to start an argument with a loved one, or perhaps a fight, moral relativism is an excellent opener. Specifically, the position that, because different people do have different ideas of what is and is not ethical, your loved one’s morals are not objectively true. Nor are they any more valid than the morals of someone who thinks very differently. And in fact, all ethical positions are equally valid and deserving of respect. This position strikes many as not only absurd but insulting, which may lead them to strike you.

After all, just because someone thinks that they’re doing the right thing, that doesn’t make it so, right? A fanatic who kills random bystanders in order to make a point may think it’s the height of propriety, and others may agree with him, but they’re wrong. Right? There are some things that are just wrong, and a relativist position that such attitudes are as valid as any other is equally wrong. Right?

Well, we’re not going to get into all that here. What got us onto this was a report in Scientific American that you can scientifically determine whether or not someone is a relativist. That could be useful, if for no other reason than to avoid situations where someone gets punched in the nose.

The magazine reports that “a simple mental puzzle” can determine whether someone is a relativist, or “an absolutist who embraces only one ‘true’ answer to these weighty conundrums.” This is the result of a study by Geoffrey Goodwin and John Darley. You can take a sample question at the link, if you like, but it really just boils down to whether or not, in a situation with multiple possible outcomes, you think through those outcomes when arriving at your answer.

That’s really not terribly useful, unfortunately. It’s only a test of clear, orderly thinking: the kind used by chess players, mechanics troubleshooting a system, computer programmers, and even the occasional lawyer. It’s sadly true that too few people do think so clearly, but plenty do. And plenty of them are moral absolutists. The ability to consider different possibilities and perspectives certainly goes well with relativism, but it is neither a prerequisite nor a cause nor a strong correlate.

Criminal law, of course, is the embodiment of absolutism. What is crime, but the codification of a society’s morality, a list of those acts so ethically wrong that they must be punished? It doesn’t matter if an individual or ethnic group within the society doesn’t share the codified morals; they may think it’s perfectly fine, if not laudable, to do a certain thing, but the law says otherwise and they’re going to be subject to it. And appearances to the contrary, many very smart people with clear, logical minds are behind our criminal jurisprudence. They may have the ability to be relativists, but it’d be difficult to call them by that name.

-=-=-=-=-

Still, it’s fascinating that someone actually bothered to take a philosophical idea, and do science to it.

Originally, of course, philosophy was how we learned about the world — we thought about it and compared notes to see which ideas held up the best and made the most sense. But the scientific method has largely replaced philosophy for that job. You have an idea about how the world works? Test it and see if you’re right. In recent decades, the science of the mind has gotten astonishingly good. Brain science is rapidly answering a heck of a lot of questions that used to be the sole province of philosophers. So philosophers have retreated to areas not easily testable, stuff that isn’t exactly science, but where ideas can still be floated and debated and explored. Stuff like morality, free will, and the like. One thing that has been fairly constant, however, is that philosophers do not go into the lab. They do not construct double-blind experiments, perform regression analyses, or do anything of the sort. The only experiments they perform are thought experiments.

Until lately, however. Goodwin and Darley may not have constructed the best (more…)

A Neat Primer on Neuroscience and Criminal Law

Monday, November 7th, 2011

 

One of our favorite topics here at the Criminal Lawyer has been the interaction of brain science and criminal law. So it’s with a pleased tip of the hat to Mark Bennett that we have the video linked above, an excellent summary of modern neuroscience as it applies to deep policies of our jurisprudence: culpability, free will, the purposes of punishment, and how (or whether) to punish. The lecture was given about a year and a half ago by David Eagleman, a neuroscientist with a gift for explaining the stuff to non-scientists like us.

Most popularized science is weighed down with histories of how we got here, rather than discussions of where “here” is and where we might be going next. Some of that background is necessary, but unlike most popularizers, Eagleman covers it in just the first half of the lecture rather than the more usual first 80%. So if you want to cut to the chase, you can skip to around the 15-minute mark. We enjoyed watching it all the way through, however. Once he gets going, he neatly and clearly presents ideas that many should find challenging — not because they undermine criminal jurisprudence, but because they challenge much that it merely presumes.

One particularly challenging idea of his is that the more we understand about how the brain works, and especially the smaller the role free will turns out to play in our actions, the less focused on culpability we should be. Rather than focusing on whether or not an individual was responsible for a criminal act, the law should instead care about his future risk to society. If he’s going to be dangerous, then put him in jail to protect us from him, rather than as retrospective punishment for an act that may never be repeated. The actuarial data are getting strong enough to identify reasonably accurate predictors of recidivism, so why not focus on removing the likely recidivists and rehabilitating the rest?
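For flavor, the actuarial instruments he has in mind reduce, at bottom, to something like the toy score below. The predictors and weights here are made up for illustration only; they are not taken from any real risk instrument.

```python
# Toy actuarial risk score: a logistic model over a few predictors.
# Predictors and weights are invented for illustration; real instruments
# are fit and validated on large offender datasets.
import math

def recidivism_risk(prior_convictions: int, age_at_release: int,
                    substance_abuse: bool) -> float:
    """Return a rough probability-of-reoffense estimate between 0 and 1."""
    score = (-1.0
             + 0.35 * prior_convictions          # more priors, more risk
             - 0.04 * (age_at_release - 18)      # risk declines with age
             + 0.60 * substance_abuse)           # history of substance abuse
    return 1.0 / (1.0 + math.exp(-score))        # logistic link -> probability

print(f"{recidivism_risk(4, 22, True):.0%}")     # higher-risk profile
print(f"{recidivism_risk(0, 45, False):.0%}")    # lower-risk profile
```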

Of course, as we mentioned the other day, there’s an inherent injustice when you punish someone for acts they have not yet committed, just because there’s a statistical chance that they might do so at some point in the future. That kind of penalty must be reserved for those who have actually demonstrated themselves to be incorrigible, those who reoffend as soon as they get the chance. Punishment must always be backwards-looking, based on what really happened, and not on what may come to pass.

We have quibbles with some other points he makes, as we always do when people from other disciplines discuss the policy underpinnings of criminal jurisprudence. But on the whole, this is a worthwhile watch, and we’d like to hear what you think of it.

Using Neuroscience to Gauge Mens Rea?

Monday, October 31st, 2011

Over at Edge, in a short video, we get an intriguing look at criminal justice from the perspective of neurological science.

Put all this together, as you can see here, and we discover little areas that are brighter than others. And this is all now easily done, as everyone knows, in brain imaging labs. The specificity of actually combining the centers (where information gets processed) with the actual wiring to those centers has been a very recent development, such that it can be done in humans in vivo, which is to say, in your normal college sophomore. We can actually locate their brain networks, their paths: whether they have a certain kind of connectivity, whether they don’t, and whether there may be an abnormality in them, which leads to some kind of behavioral clinical syndrome.

In terms of the Neuroscience and Justice Program, all this leads to the fact that that’s the defendant. And how is neuroscience supposed to pull this stuff together and speak to whether someone is less culpable because of a brain state?

Then you say, well, okay, fine. But then you go a little deeper and you realize, well, this brain is a very complicated thing. It works on many layers from molecules up to the cerebral cortex; it works on different time scales; it’s processing with high frequency information, low frequency information. All of this is, in fact, then changing on a background of aging and development: The brain is constantly changing.

How do you tie this together to capture what someone’s brain state might be at a particular time when a criminal act was performed? And I should have said it more clearly — most of this project was carried out asking, “Is there going to be neuroscience evidence that’s going to make various criminal defendants less culpable for their crime?”

Well, probably not. Even if this were to become reality — which it isn’t, yet — the whole focus of mens rea culpability is what the defendant’s mental state was at the time he committed the act. Even if police officers were equipped with infallible handheld brain scanners, so they could get a mental reading at the moment of arrest (and oh, the fascinating Fourth Amendment issues there!), the moment of the crime is past. The reading is not evidence of what the brain was doing five days ago, or even five minutes ago.

And at any rate, it’s not usable science yet. So why bother thinking about it now?

To his credit, the speaker, neuroscientist Michael Gazzaniga, admits as much.

Now, the practicing lawyer asks “is this thing useful, can we use it tomorrow? Can we use it the next day? Can’t? Out. Next problem.” So, after four years of this I realize, look, the fact of the matter is that from a scientific point of view, the use of sophisticated neuroscientific information in the courtroom is problematic at the present.

But then he says “it will be used in powerful ways in our lifetime.” What powerful ways? Mainly the ability to show that someone simply couldn’t have thought a certain way, because his brain doesn’t work that way. This defendant shouldn’t be punished like a normal adult, because his brain isn’t wired like a normal adult, and he could not have had the same mens rea as one would otherwise expect under the circumstances. Research is showing that children and teenagers are wired differently, as well, which could affect juvenile justice.

That’s useful for the defense. It could be a valuable tool for showing that the required mens rea was lacking, because it couldn’t have existed. It’s not much use to prosecutors, beyond showing that the mental state was just as theoretically possible for this defendant as for any normal human, which is sort of presumed for everyone anyway. So yay for science.

Another way it’s expected to be useful, however, is preventing future crimes. Stopping the next mass-murderer before he actually starts shooting kids on campus and whatnot. Of course, we immediately get creeped out the second anyone (more…)

Thought Police?

Monday, October 20th, 2008

[Image: brain scan]

Guilt or innocence, one might say, is all in the mind. After all, there are very few crimes that can be committed without the requisite mens rea, or mental state. If we’re going to punish someone, their acts cannot have been mere accident. We want to know that they had some awareness that their actions could cause harm, and we want that awareness to have been culpable enough to warrant punishment.

The standard criminal levels of mens rea are “negligence” (you ought to have known bad things could happen), “recklessness” (you knew there was a substantial risk that bad things would happen, and disregarded it), “knowledge” (you were practically certain that bad things would happen), and “intent” or “purpose” (you wanted bad things to happen). If your foot kicks someone in the ribs while you’re falling downstairs, you’re not a criminal. But if you kick someone in the ribs because you don’t like them, then society probably wants to punish you.

We cannot know what anyone was thinking when they did something, however. So we rely on jurors to use their common sense to figure out what an accused must have been thinking at the time.

In recent years, however, there have been enormous advances in neuroscience. Brain scans, the software that processes the data, and good science have approached levels that would have been considered science fiction as recently as the Clinton years. Experts in the field can see not only how the brain is put together, but also what an individual brain is doing in real time. Experimental data show which parts of the brain are active when people are thinking certain things, with good detail.

Functional magnetic resonance imaging (fMRI), in particular, can act as a super lie-detector. Instead of measuring someone’s perspiration and heart rate while they answer questions during a polygraph exam, fMRI looks at actual real-time brain activity in areas having to do with logic, decision-making, and perhaps even lying. Experimental data from large groups are pretty good at identifying which parts of the brain are associated with different kinds of thinking.

Every brain is slightly different, of course. Brain surgeons have to learn the individual brain they’re operating on before they start cutting. Group-level data don’t translate perfectly to any one person, so any lie-detector use of fMRI would have to begin with some baseline analysis of that individual before proceeding to the important questions.
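Concretely, that baseline step would look something like the sketch below: record the individual’s own signal on control questions known to be truthful, then judge later responses against that personal distribution rather than against a group average. The numbers are hypothetical.

```python
# Sketch of per-person baseline calibration for an fMRI "lie detector."
# All signal values are hypothetical; the point is only that responses get
# judged against the subject's own baseline, not against group averages.
import numpy as np

# Region-of-interest signal while the subject answers control questions
# known to be truthful -- this is the individual's baseline.
baseline = np.array([1.02, 0.98, 1.05, 0.97, 1.01, 1.03, 0.99, 1.00])
mu, sigma = baseline.mean(), baseline.std(ddof=1)

def z_score(signal: float) -> float:
    """How unusual is this response relative to the subject's own baseline?"""
    return (signal - mu) / sigma

for signal in (1.01, 1.04, 1.22):
    verdict = "flag for review" if abs(z_score(signal)) > 3 else "within baseline"
    print(f"signal={signal:.2f}  z={z_score(signal):+.1f}  ->  {verdict}")
```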

The issue is whether it will be admissible in court. Polygraph tests generally aren’t admissible, because they’re more an art than a science. But fMRI is all science. In addition, brain scans are already widely admissible for the purpose of arguing for a reduced sentence where the defendant has damage to his brain. As forensic neuroscience expert Daniel Martell told the New York Times in 2007, brain scans are now de rigueur in capital cases. And in Roper v. Simmons, where the Supreme Court ruled that juvenile offenders cannot be executed, the Court was presented with brain-scan research offered to show that adolescent brains really are different.

Outside the United States, brain scans have already begun to be used by the prosecution to show guilt. In India, a woman was recently convicted of murdering her ex-boyfriend in a trial where a brainwave-based lie-detection test was admitted into evidence. There was other evidence of guilt as well, including the fact that she admitted buying the poison that killed him. But the brainwave analysis came in all the same.

There are deeper policy issues here. Is reading someone’s brain activity more like taking a blood sample, or more like taking a statement? The Miranda rule is there, at heart, because we do not want the government to override people’s free will and force them to incriminate themselves out of their own mouths. That’s why the fruits of a custodial interrogation are presumed inadmissible unless the defendant first knowingly waives his rights against self-incrimination. But because the DNA in your blood isn’t something you produce by an act of will, taking a blood sample over your objection doesn’t force you to incriminate yourself; the government is merely collecting physical evidence that already exists.

So is a brain scan more like a blood sample? Is it simply taking evidence of what is there, without you consciously providing testimony against yourself? Or will it require the knowing waiver of your Fifth and Sixth Amendment rights before it can be applied?

We’re interested in your thoughts. Feel free to comment.