Posts Tagged ‘statistics’

Statistics and the Serial Killer

Monday, January 16th, 2012

Andrei Chikatilo was a serial killer who murdered at least 56 young women and children from 1978 until his capture in 1990. The details are as bad as one might expect, and apparently the murders and mutilations were how he achieved sexual release. His killings seemed unpredictable to investigators at the time, and even in retrospect there appears to be no clear pattern.

Now, however, UCLA mathematicians Mikhail Simkin and Vwani Roychowdhury have published a paper in which they see not only a pattern, but one that is meaningful to those who might want to stop other serial killers. In their paper, “Stochastic Modeling of a Serial Killer,” published a couple of days ago, Simkin and Roychowdhury discovered that the killings fit a pattern known as a “power law distribution.” One of many kinds of statistical distribution (the bell curve being another), the power law often shows up for out-of-the-ordinary events like earthquakes, great wealth, website popularity and the like.

First, they looked at a timeline of his killings. They saw apparently random periods of inactivity. Each time Chikatilo started killing again, however, the next murder would come soon after. And the one after that even sooner. And so on and so on until the next period of no killing.

The study doesn’t take into account the reasons for two of the longer pauses — Chikatilo’s first arrest and detention on suspicion of being the killer, and the period when the media started reporting on the investigation — but the reasons aren’t important. What’s important is being able to make some kind of sense out of the seemingly random events.

What they noticed was that, when the time intervals between the murders were plotted on a logarithmic scale, they fell along almost a straight line — indicating that a power law might be at work here. What’s more, they noticed that the curve’s exponent of 1.4 was pretty darn close to the 1.5 found for the power curve of epileptic seizures. What if (they wondered) the killings fit a neurological pattern? What if, like epileptic seizures, psychotic events like these killings came about when an unusually large number of neurons in the brain started firing together?
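For the statistically inclined, here is a rough sketch of the kind of fit involved. It is not the authors’ code, and the interval numbers below are made up purely for illustration: take the gaps between killings, plot the fraction of gaps at least that long on log-log axes, and read the exponent off the slope of the resulting line.

```python
# Rough sketch only: estimating a power-law exponent from inter-murder intervals.
# The intervals here are invented for illustration; the real data come from the
# Chikatilo case record analyzed in the paper.
import numpy as np

intervals = np.array([3, 5, 9, 14, 30, 60, 2, 4, 7, 12, 25, 90, 180, 1, 6, 11])

# Empirical complementary CDF: the fraction of gaps that are at least t days long.
t = np.sort(intervals)
ccdf = 1.0 - np.arange(len(t)) / len(t)

# If the gap distribution follows a power law with density ~ t^(-k), the CCDF
# goes as t^(1 - k), so a straight-line fit on log-log axes gives the exponent.
slope, intercept = np.polyfit(np.log(t), np.log(ccdf), 1)
print(f"estimated power-law exponent: {1 - slope:.2f}")  # the paper reports about 1.4
```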

So they plugged in some givens about how neurons work, modeling the process on epilepsy. They made the model a little more realistic — seizures come unbidden when the conditions are met, but killers probably need some time to plan once their brain is ready for the next attack. Then they ran a simulation.
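Here is a toy version of that kind of simulation (emphatically not the authors’ actual model or parameters, just the mechanism they describe): the number of “firing” neurons wanders up and down at random, and when it crosses a threshold the killer waits out a short planning period, acts, and the built-up excitation resets.

```python
# Toy simulation of the mechanism described above.  The threshold, planning
# delay, and random-walk step are invented; only the general idea comes from
# the paper.
import random

THRESHOLD = 10     # firing neurons needed before an attack (made-up, tiny for speed)
PLANNING = 2       # days of planning once the threshold is crossed (made up)
SIM_DAYS = 5000    # length of the simulated "career"

def simulate():
    firing = 0
    countdown = None          # days left in the planning phase, or None
    murder_days = []
    for day in range(SIM_DAYS):
        # Neural excitation performs a simple random walk, reflected at zero.
        firing = max(0, firing + random.choice((-1, 1)))
        if countdown is None and firing >= THRESHOLD:
            countdown = PLANNING
        elif countdown is not None:
            countdown -= 1
            if countdown == 0:
                murder_days.append(day)
                firing = 0    # the act discharges the built-up excitation
                countdown = None
    return murder_days

days = simulate()
gaps = [b - a for a, b in zip(days, days[1:])]
print(len(days), "simulated killings; gaps range from", min(gaps), "to", max(gaps), "days")
```

The point is not the specific numbers but the shape: a threshold crossed by a random walk naturally produces gaps ranging from a couple of weeks to many months, with no outside cause needed for the long silences.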

The simulated probabilities for the length of time between murders tracked the real-life data almost perfectly.

In other words, if you know when the last murder took place, you can calculate the probability that another killing will happen today. And the more time has passed since the last one, the less likely another will happen.
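To put a rough number on that last point, here is a back-of-the-envelope calculation. It assumes the gaps between killings follow a power law with the paper’s exponent of about 1.4; everything else, including treating the shortest possible gap as one day, is our simplification. For such a distribution, the chance that the next killing comes today, given that t days have already passed, falls off like (k - 1)/t.

```python
# Back-of-the-envelope only.  Assumes gaps between killings follow a power law
# with density proportional to t^(-k) for t >= 1 day, in which case the hazard
# rate (the chance of a killing "today" after t quiet days) is (k - 1) / t.
# The exponent k comes from the paper; the rest is our simplification.
k = 1.4

def chance_of_killing_today(days_since_last):
    return (k - 1) / days_since_last

for t in (1, 7, 30, 365):
    print(f"{t:>4} days since the last killing -> roughly {chance_of_killing_today(t):.4f}")
```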

-=-=-=-=-

Fascinating stuff, but so what? The so what is that (more…)

Falling Economy, Falling Crime

Tuesday, October 4th, 2011

Endless Origami: Crime Rates

Or maybe not…

For some reason, common wisdom would have it that crime should go up when the economy is going down. Violent crime in particular. Apparently, the thinking is that less prosperity leads to increased frustration and desperation, leading to more beatings, killings, muggings and rapes. As if the people who otherwise would commit such crimes are less likely to do so when banks are lending and people are investing in new and bigger business ventures.

Of course, common wisdom is frequently wrong. Which is good, because as we’ve pointed out before, the economy is going to continue to suck. Europe is facing massive uncertainty in the face of its Mediterranean peoples voting themselves the treasury. Here in the U.S., the Obama administration, elected on a platform of “hope,” is doing everything in its power to kill off any hope that investment in growth would be worth the risk. Instead of ensuring the stability and predictability necessary for economic growth, the governments of Europe and the U.S. are only spreading uncertainty and worry. It is now pretty much a certainty that a double-dip recession is upon us.

But the economy just isn’t that strong an influence on crime. During the prosperous 1950s and 1980s, violent crime went through the roof. During the Great Depression and the recent Crappy Recession, violent crime plummeted. Economic hardship is certainly not a cause-and-effect driver; at most, it exacerbates the things that actually do drive up crime. And right now, those things aren’t driving crime up.

So what are those things? What factors do drive violent crime? And are they going to come back any (more…)

Ninth Circuit Bungles Math, Can the Supremes Fix It?

Tuesday, September 1st, 2009

Prosecutor's Fallacy

The “Prosecutor’s Fallacy” is one example of why we think Statistics should be a required course in college. Let’s say the police have the DNA of a rapist. Only 1 in 3,000,000 people chosen at random will match that DNA sample. Your DNA matches. At your trial, the DNA expert testifies that you have only a 1 in 3,000,000 chance of being innocent. That is not correct, however. That’s an example of the Prosecutor’s Fallacy.

Yes, there is a very small chance that someone’s DNA would match if they were innocent. But that is not the same as saying there’s a very small chance that someone is innocent if their DNA matches.

This is basic conditional probability. And if you think about it, it’s just common sense. The fallacy swaps the condition and the outcome: the chance of a match given innocence gets treated as the chance of innocence given a match. You can’t swap the two and expect the probability to stay the same.

To illustrate with an extreme example, we drew the picture you see above. A black circle indicates a DNA match. All guilty people are going to have a DNA match, obviously. And a tiny fraction of innocent people are going to have a DNA match, as well. But if the number of innocent people is large enough, then the number of innocent people whose DNA matches could actually be larger than the number of guilty people. Someone whose DNA matches is actually more likely to be innocent in that scenario.
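To put made-up numbers on that picture: suppose the pool of people who could possibly have committed the crime is ten million, exactly one of whom is guilty, and the random-match probability is the 1 in 3,000,000 from the example above. (The pool size and the one-guilty-person assumption are ours, purely for illustration.) A few lines of arithmetic show what a match actually tells you:

```python
# Made-up numbers, purely to illustrate the picture above.
population = 10_000_000               # hypothetical pool of possible perpetrators
p_match_if_innocent = 1 / 3_000_000   # the random-match probability the expert measured
p_match_if_guilty = 1.0               # the one guilty person always matches

expected_innocent_matches = (population - 1) * p_match_if_innocent   # about 3.3 people
guilty_matches = 1 * p_match_if_guilty

# Bayes' theorem: of everyone who matches, what fraction is actually guilty?
p_guilty_given_match = guilty_matches / (guilty_matches + expected_innocent_matches)

print(f"expected innocent people who match: {expected_innocent_matches:.1f}")
print(f"P(guilty | match) = {p_guilty_given_match:.2f}")   # about 0.23, nowhere near 0.9999997
```

With those numbers, a defendant whose DNA matches is still roughly three times more likely to be innocent than guilty, which is the exact opposite of a “1 in 3,000,000 chance of innocence.”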

-=-=-=-=-

Prosecutors and DNA experts aren’t the only ones who get this wrong. Courts do, too. The Ninth Circuit recently made a hash of it in their decision in McDaniel v. Brown, which will now be one of the first cases to be heard by the Supreme Court at the start of this year’s October term.

In McDaniel v. Brown, Troy Brown was prosecuted for the alleged rape of a little girl. The facts are pretty gruesome, but irrelevant here. What’s relevant is that, at his trial, the DNA expert testified that Brown’s DNA matched the DNA in the semen found on the girl, that there was a 1 in 3,000,000 chance that someone’s DNA would match, and that therefore there was a 1 in 3,000,000 chance that Brown was innocent.

Brown got convicted. He later brought a habeas petition to the District Court. He introduced a professor’s explanation of how the prosecution had screwed up. The District Court expanded the record to include the professor’s explanation, and found that the DNA expert had engaged in the Prosecutor’s Fallacy. In part because of that (there was also a chance it could have been his brother’s DNA), the District Court found there wasn’t sufficient evidence to convict.

The government appealed to the Ninth Circuit.

Now, the Ninth is known for being touchy-feely. It’s not known for its analytical prowess. Posner, they ain’t. But they bravely tackled this statistical conundrum. And they screwed up.

In trying to deal with the prosecution’s error, the Ninth swung too far in the other direction, finding that the DNA evidence at Brown’s trial couldn’t establish guilt, period. No jury could have found Brown guilty.

So the government took it to the Supreme Court, making two arguments. One is procedural — that the habeas court shouldn’t have been able to consider the professor’s explanations, but only the trial record, in determining the sufficiency of the evidence before the jury. The other argument is that even though the chances of Brown being innocent weren’t 1 in 3,000,000, they were still pretty damn low, and the DNA evidence is still plenty sufficient.

Brown’s lawyers, to their credit, don’t seem to be arguing that the Ninth Circuit did it right. Instead of characterizing the decision below as ruling on the sufficiency of the evidence, Brown’s attorneys argue that it was really a Due Process ruling. The testimony wasn’t so much insufficient as it was incorrect. It was unreliable. This is bolstered by the fact that the Ninth Circuit ordered a new trial (which Double Jeopardy would preclude after a finding of insufficient evidence, but which is standard after a Due Process finding of unreliable evidence).

That’s not the way the Ninth characterized its ruling, however, so Brown wisely suggested that the Supreme Court might simply kick the case back for the Circuit to explain its ruling better.

-=-=-=-=-

Oral arguments are scheduled for October 13. We haven’t made any predictions yet about the upcoming term, so we’ll start here.

We think the state will convince Chief Justice Roberts and Justices Scalia, Kennedy, Thomas, Ginsburg and Alito of the following:

(1) The Ninth Circuit improperly remanded for a new trial, which is improper after a finding of insufficiency; and

(2) At any rate, the Circuit improperly found the evidence to be insufficient, when there was plenty of evidence of guilt.

We think that Justices Stevens and Breyer (we have no clue about Sotomayor) will dissent, arguing that the jury was totally thrown by the DNA expert’s mischaracterization, that this was a Due Process violation at the very least, and that the DNA evidence probably should have been thrown out entirely, so the Ninth Circuit should be reversed and the District Court’s original ruling should be reinstated.

What are the odds that we’re really right? Who wants to do the math?