Not Ready for Prime Time: Brain-Scan Reliability in Question

Almost from our first post, we’ve written here about developments in brain-scan technology and its applicability to criminal law (see here, here, here and here, for example). So needless to say, the past nine days have been of great interest, as the research behind neuroimaging’s claims has come into hot dispute.

Now, just because our motto is “truth, justice and the scientific method,” that doesn’t make us qualified to assess the merits of the underlying science. Our observations on the actual science wouldn’t be worth the pixels. But fortunately, as with most such disputes, the issue isn’t so much the data as the math — the statistical analysis being used to make sense of the data. And we’re somewhat confident that we can at least report on such issues without getting them too wrong.

So briefly what’s going on is this:

First, lots of neuroimaging papers out there, some very influential, see apparent connections between brain activity at point X and mental state A. But what are the odds your reading of X was just a fluke, and the real spot is somewhere else, over at Z? If you run enough tests, you’re going to see X every now and then just by chance. So you have to figure out the chances that X is a random result instead of the real thing, and build that correction for multiple comparisons into your statistical analysis. As it happens, however, for a long time the neuroimaging folks weren’t using an accurate correction. Instead, they were applying a lax rule of thumb that didn’t really apply. It’s since been shown that using the lax math can produce apparent connections to variables that didn’t even exist at the time.
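The multiple-comparisons problem described above can be simulated in a few lines. The sketch below uses purely hypothetical numbers (it is not drawn from any of the disputed papers): it tests thousands of pure-noise “voxels” for activity, and counts how many look “active” at an uncorrected threshold versus a Bonferroni-style corrected one.

```python
# Illustrative sketch: why uncorrected voxel-wise tests flag false
# positives on pure noise, and how a Bonferroni-style correction
# (dividing alpha by the number of tests) suppresses them.
import math
import random

random.seed(1)

N_VOXELS = 5000    # hypothetical number of voxels tested
N_SUBJECTS = 50    # hypothetical sample size per voxel
ALPHA = 0.05

def p_value(sample):
    """Two-sided p-value for 'mean differs from 0' (normal approximation)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

# Pure noise: by construction, no voxel is genuinely active.
pvals = [p_value([random.gauss(0, 1) for _ in range(N_SUBJECTS)])
         for _ in range(N_VOXELS)]

uncorrected = sum(p < ALPHA for p in pvals)
bonferroni = sum(p < ALPHA / N_VOXELS for p in pvals)

print(f"'Active' voxels, uncorrected: {uncorrected} of {N_VOXELS}")
print(f"'Active' voxels, Bonferroni:  {bonferroni} of {N_VOXELS}")
```

Roughly five percent of the noise voxels clear the uncorrected threshold, while the corrected threshold flags essentially none. Real fMRI corrections are more sophisticated than plain Bonferroni (they account for spatial correlation between voxels), but the principle is the same.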

On top of all that, as neuroscientist Daniel Bor mentions in his excellent (and much more detailed) discussion here, there’s reason to suspect that a number of prominent papers may have had their numbers fudged. With the subjectivity of judgment involved, and the vastness of the data being analyzed, it’s easy for even an honest researcher’s bias to affect the results. With advancement hinging on publication, there is also a motive for active massaging of the numbers to get a publishable result — a temptation to which Bor says some do succumb.
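Bor’s point about honest bias can also be made numerically. In the hedged sketch below, each simulated “study” analyzes pure noise, but the analyst tries several defensible analysis variants and reports only the best-looking p-value. For simplicity the variants are modeled as independent draws, which overstates the inflation (real analysis variants are correlated), but the direction is right.

```python
# Illustrative sketch: trying multiple analysis variants and keeping the
# best result inflates the false-positive rate, even on pure-noise data
# with no real effect anywhere.
import math
import random

random.seed(2)

def p_value(sample):
    """Two-sided p-value for 'mean differs from 0' (normal approximation)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    return math.erfc(abs(z) / math.sqrt(2))

N_STUDIES = 2000
VARIANTS = 10   # e.g. choices of smoothing kernel, ROI, covariates
ALPHA = 0.05

false_pos = 0
for _ in range(N_STUDIES):
    # Simplification: each variant is a fresh noise sample, i.e. fully
    # independent analyses. Real variants reanalyze the same data, so
    # the true inflation is smaller, but still well above ALPHA.
    best_p = min(p_value([random.gauss(0, 1) for _ in range(50)])
                 for _ in range(VARIANTS))
    if best_p < ALPHA:
        false_pos += 1

rate = false_pos / N_STUDIES
print(f"Nominal alpha {ALPHA}, realized false-positive rate: {rate:.2f}")
```

With ten independent tries per study, the realized rate is near 1 - 0.95^10, about 40 percent, rather than the nominal 5 percent. No dishonesty is required: simply picking the analysis that “worked” is enough.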

The upshot is that a heck of a lot of the papers on which this growing field is based… are not as reliable as was once thought.

That’s not good for scientists, who waste a lot of time and money trying to replicate unrepeatable results. It’s not good for patients who might be mis-diagnosed because of a connection that wasn’t really there. And for the purposes of this blog’s subject matter, it’s not good for defendants or law enforcers who may be undermined by bad science. It’s bad enough that certain pseudosciences are still used to put people in jail, despite their unreliability, simply because judges have been calling them reliable since Victorian times. It’s worse still if new sciences, badly understood and wrongly applied, help convict the wrong people.

Some courts have begun showing a willingness to use neuroimaging as evidence, if only in civil cases. Until the science shakes out, however, that trend needs to be put on hold. The fMRI lie detectors and recidivism predictors still belong to the world of science fiction, for the moment. Let’s not start using them (especially not to deprive people of their lives or liberty!) until we’re certain that they’re based on science fact.




2 Comments

  1. Christopher Arena, March 17, 2012:

A good book that explores the legal implications of this, fiction though it is, but which I nonetheless thought was very well written, is “The Fourth K” by Mario Puzo. I highly suggest it.

  2. Anna, April 21, 2012:

    Although brain scans as part of a clinical psychiatric workup might be playing in prime time on some TV infomercials, brain imaging experts say we’re not quite there yet.

As editor Dr. Robert Freedman notes in last month’s American Journal of Psychiatry, after a string of letters on the subject: “Commercialization of a diagnostic test, even if the underlying procedure such as brain imaging or DNA analysis is approved for human use, strongly indicates to physicians and families that the test adds significant new information to guide clinical judgment. We have published this exchange of letters as part of our responsibility to readers to point out when a procedure may lack sufficient evidence to justify its widespread clinical use.”
