
Automatic Justice

When it comes to letting defendants out on bail, computers make far better decisions than judges, a new study shows.

One important decision a judge must make is whether to let a defendant out on bail until the trial. Flight risk is one major concern; most states also instruct judges to consider the threat to public safety. But what if we used computers to make that decision instead of judges? We’d see dramatically better results, at least according to a new working paper published by the National Bureau of Economic Research. In fact, we could reduce the crimes committed by released defendants by a quarter while jailing the same number of people. Alternatively, according to the study, we could reduce the number of people jailed by more than 40 percent while keeping crime rates the same.

This formulaic approach to justice, known as “risk assessment,” is becoming increasingly common. Criminal-justice officials from New Jersey to California are turning to algorithms for estimates of how likely offenders are to get into more trouble. In addition to bail, such tools can be used to make sentencing, probation, and parole decisions. (Robot juries remain far in the future.)

This development has prompted squeamish responses for a number of reasons, many of them valid. But if the new study is correct, the gains may prove too big to pass up. These tools may be the key to dramatically reducing our high incarceration rate without putting the public at risk.

Rather than relying on preexisting risk-assessment tools, the new study creates its own, using computerized “machine learning” on massive data sets compiled from real-life cases. The resulting formulas take account of various factors, including the current charge against the defendant, his priors, and whether he has failed to appear for a court date in the past. They do not consider any demographic traits except age.
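
For readers curious about the mechanics, here is a minimal sketch in Python of what training such a risk model looks like, assuming a gradient-boosted classifier, a common choice for this kind of tabular prediction. The feature names and synthetic data below are hypothetical stand-ins for illustration, not the study’s actual inputs.

```python
# Illustrative sketch only: feature names and data are made up,
# not taken from the NBER paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features mirroring those the article describes:
# current charge, prior offenses, past failures to appear, and age.
X = np.column_stack([
    rng.integers(1, 5, n),     # charge_severity (1 = minor ... 4 = serious)
    rng.poisson(2.0, n),       # prior_offenses
    rng.poisson(0.5, n),       # past_failures_to_appear
    rng.integers(18, 70, n),   # age
])

# Synthetic outcome: 1 if the defendant skipped court or reoffended.
logits = 0.4 * X[:, 1] + 0.8 * X[:, 2] - 0.03 * X[:, 3] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

# Fit the model; it then maps each new defendant to a risk score.
model = GradientBoostingClassifier().fit(X, y)
risk = model.predict_proba(X)[:, 1]
print(f"mean predicted risk: {risk.mean():.3f}")
```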

The real-life judges made decisions quite different from what the algorithm would have recommended. In theory, this could happen because judges have important information that the algorithm doesn’t—but in reality the algorithm just works better, especially when it comes to identifying the riskiest cases. Incredibly, judges released about half of the defendants the algorithm considered to be in the riskiest 1 percent, and those defendants in fact committed new crimes more than half of the time. There are obvious gains in keeping these folks locked up while letting lower-risk offenders go, but for whatever reason human judges don’t seem to realize which defendants are truly the most likely to get into trouble if released.

A major concern about these algorithms is that they could be racially biased, even if they don’t directly consider race as a factor. Last year, though, I showed that one specific accusation along these lines was overblown, and there is little sign of bias in the new study, at least relative to the status quo. The algorithm would have released slightly more Hispanics and slightly fewer blacks than the real-life judges did.

The authors also show that you can force the algorithm not to change the racial makeup of the jail population—or even to ensure that the same percentage of each racial group is released—without dramatically reducing its effectiveness. However, that approach is probably a lawsuit waiting to happen. In one example, the algorithm simply “stop[s] detaining black and Hispanic defendants once we have hit the exact number of each group detained by the judges, to ensure that the minority composition of the jail population is no higher under the algorithm’s release rule compared to the judges.” It’s a quota system, in other words, that would in effect set a separate risk threshold for each racial group to make sure the final numbers are politically correct.
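
To see how such a quota-constrained release rule could work, consider the rough sketch below: defendants are detained in descending order of predicted risk, but detention stops for each group once that group’s count reaches the number the judges detained. The function name and data are hypothetical, for illustration only, not the paper’s code.

```python
# Hypothetical sketch of a quota-style detention rule.
def detain_with_group_caps(risk_scores, groups, caps):
    """Return the indices of defendants to detain.

    risk_scores: predicted risk per defendant
    groups:      group label per defendant
    caps:        {group label: max detained}, taken from the
                 judges' observed detention counts
    """
    detained = set()
    counts = {g: 0 for g in caps}
    # Consider defendants from riskiest to least risky.
    order = sorted(range(len(risk_scores)),
                   key=lambda i: risk_scores[i], reverse=True)
    for i in order:
        g = groups[i]
        if counts[g] < caps[g]:   # group quota not yet exhausted
            detained.add(i)
            counts[g] += 1
    return detained

# Toy usage: two groups, caps matching a hypothetical judge baseline.
scores = [0.9, 0.8, 0.7, 0.6, 0.2]
labels = ["A", "A", "B", "A", "B"]
print(detain_with_group_caps(scores, labels, {"A": 2, "B": 1}))
# -> {0, 1, 2}: the third "A" defendant goes free, despite being
#    riskier than some detainees, because group A's quota is filled.
```

In effect, each group ends up with its own risk cutoff: the last defendant detained from one group may be far riskier than the last detained from another.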

None of this, of course, negates the legitimate criticisms of risk-assessment tools that I mentioned in last year’s post. Some of these tools are proprietary and thus can’t be contested effectively in court, raising due-process concerns. Wisconsin’s high court, for example, has upheld the use of risk assessment in sentencing, but only as one factor among many, barring the wholesale substitution of an algorithm’s judgment for a judge’s discretion, which is the scenario the study explores.

And, while the study focused on simple predictors like prior offenses, other risk-assessment tools involve extensive questionnaires that raise issues of their own. One popular tool, for instance, involves asking inmates whether there are gangs and drugs in their neighborhood, criteria on which it’s arguably unfair for the state to base consequential decisions.

These questions could also allow offenders to game the system. Most criminals aren’t geniuses, but it doesn’t take a genius to figure out which answers to such questions are more likely to get you released. (The questionnaires do contain some tricks, however, to catch offenders who aren’t telling the truth.)

But it is worth emphasizing that, in the context of bail, this new study suggests that we could lock up 40 percent fewer people without changing the crime rate. That is an enormous amount of practical good—fewer lives disrupted and less public money spent. And there is little reason to think humans are significantly more reliable when it comes to sentencing, parole, or probation.

People of good faith can disagree as to what, exactly, a risk-assessment tool should take into account. But it’s becoming harder to deny the immense potential such tools hold.

Robert VerBruggen is managing editor of The American Conservative.
