How does a judge decide whether a criminal defendant should be required to await trial in jail rather than at home? In a recent study, computer scientists and economists from Cornell, Harvard, Stanford, and the University of Chicago collaborated to develop an algorithm that could significantly improve judges’ ability to make that call – either reducing crime or reducing the number of people stuck waiting in jail.
The researchers estimate that their algorithm’s predictions could cut crime committed by defendants by as much as 25 percent without changing the number of people waiting in jail. Alternatively, it could shrink the population awaiting trial in jail by nearly 42 percent while leaving the crime rate among defendants unchanged. Importantly, either gain can be achieved while also considerably reducing the number of African-Americans and Hispanics in jail.
Every year in the United States, police arrest over 10 million people. Shortly after arrest, defendants appear at a pre-trial bail hearing. Judges can release the defendant, set a bail amount that the defendant must post in order to go free, or detain the defendant without bail. These hearings are not about determining whether the person is guilty or what punishment the alleged crime deserves. Instead, judges are asked to carry out a narrowly defined task: decide where the defendant will spend the pre-trial period, based on a prediction of whether the defendant, if released, would fail to appear in court or be arrested for a new crime.
These cases typically take several months to resolve. The judge’s decision affects both the defendant and society as a whole.
The researchers who developed the algorithm used a dataset of 758,027 defendants arrested in New York City between 2008 and 2013. Because the judge’s task is so narrowly defined, the researchers were able to build an algorithm that predicts the risk a defendant poses if released back into society. The algorithm weighs many factors, such as the defendant’s prior record and the outcomes of releasing other defendants charged with similar offenses, and returns a prediction of the defendant’s crime risk to society.
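To make the idea of a risk-prediction algorithm concrete, here is a minimal sketch. The study itself used gradient-boosted decision trees on real case records; the version below substitutes a simple logistic-regression model trained on synthetic data, and every feature, coefficient, and number is an invented assumption for illustration, not the researchers’ actual pipeline.

```python
import numpy as np

# Illustrative sketch only: a simple risk model fit to synthetic data.
# (All features and numbers below are invented for demonstration.)
rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-ins for observable case features.
X = np.column_stack([
    rng.integers(0, 10, n),  # prior arrests
    rng.integers(0, 5, n),   # prior failures to appear
    rng.integers(0, 3, n),   # severity of current charge
]).astype(float)

# Synthetic outcome: 1 = failure to appear or re-arrest if released.
p_true = 1.0 / (1.0 + np.exp(-(X @ np.array([0.3, 0.5, 0.4]) - 3.0)))
y = rng.binomial(1, p_true)

# Fit a logistic-regression risk model by plain gradient descent.
Xb = np.column_stack([X, np.ones(n)])  # add an intercept column
w = np.zeros(Xb.shape[1])
for _ in range(2_000):
    pred = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    w -= 0.01 * Xb.T @ (pred - y) / n

# Predicted crime risk per defendant, between 0 and 1.
risk = 1.0 / (1.0 + np.exp(-(Xb @ w)))
```

The output is exactly what the bail setting requires: a single risk score per defendant that can be ranked and thresholded.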
Empirical analysis revealed that many of the defendants flagged by the algorithm as high risk were treated by judges as if they were low risk. Comparing the algorithm’s rankings against judges’ actual decisions suggests that judges are not simply setting a high threshold for detention but are mis-ranking defendants. These mis-rankings imply potentially large welfare gains from using algorithmic predictions in the ranking and release process.
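One way to see what “mis-ranking” means in practice is to bucket defendants by predicted risk and look at how often judges released each bucket. The sketch below does this on synthetic data (all numbers are invented assumptions): if judges were merely applying a high detention threshold, the riskiest bucket would almost never be released; a high release rate in that bucket is the signature of mis-ranking.

```python
import numpy as np

# Synthetic illustration of the mis-ranking diagnostic; the data are
# assumptions, not the study's records.
rng = np.random.default_rng(1)
n = 10_000
risk = rng.uniform(0.0, 1.0, n)  # algorithm's predicted risk per defendant

# Simulated judges respond to risk only noisily, so some high-risk
# defendants still go free.
released = rng.uniform(0.0, 1.0, n) > 0.4 * risk

# Release rate within each predicted-risk quintile.
quintile = np.minimum((risk * 5).astype(int), 4)
rates = [released[quintile == q].mean() for q in range(5)]
for q, rate in enumerate(rates, start=1):
    print(f"risk quintile {q}: release rate {rate:.2f}")
```

In this toy run the release rate falls with predicted risk, but the riskiest quintile is still released well over half the time, which is the pattern the researchers interpreted as mis-ranking rather than a strict threshold.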
The algorithm does not account for unobservable details seen by judges, which could bias the comparison. For instance, a judge may care about racial equity, or may have a more complex set of preferences over different kinds of crimes than the researchers assumed.
Using a second dataset of 151,461 felony defendants arrested across the United States between 1990 and 2009, the researchers found that the details observed only by the judge, and unavailable to the program, do not explain away the mis-rankings; if anything, judges’ reliance on them worsens both the crime rate and the local jail population.
A well-designed re-ranking policy can reduce crime and jail populations while simultaneously reducing racial disparities. Furthermore, the findings show reductions in every crime category, including the most serious violent crimes.
Why are judges mis-predicting the crime risk of a defendant? A judge’s internal state at the time of the hearing (e.g., mood) could cause a deviation from an accurate assessment. Additionally, a judge may place too much emphasis on salient features of the case, such as the defendant’s appearance.
After running many policy simulations, the researchers found large potential gains if the algorithm’s predictions are used to inform release decisions. The algorithm can substantially lower the crime rate, shrink jail populations, and reduce racial inequities. If machine learning is adopted in our courts, society could benefit both socially and economically by curbing judges’ biases and making the process more efficient.
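A minimal version of such a policy simulation can be sketched as follows. On synthetic data (every number below is an invented assumption), we detain exactly as many defendants as the simulated judges did, but choose the highest-risk ones, then compare the expected crime among those released:

```python
import numpy as np

# Synthetic policy simulation: hold the jail population fixed, re-rank
# detention by predicted risk, and compare expected crime.
rng = np.random.default_rng(2)
n = 10_000
risk = rng.beta(2, 5, n)  # predicted crime probability per defendant

# Simulated judges: detention only loosely (noisily) tied to risk.
judge_detained = rng.uniform(0.0, 1.0, n) < 0.25 + 0.3 * risk

# Re-ranking policy: detain the same number of people, but the riskiest.
k = int(judge_detained.sum())
algo_detained = np.zeros(n, dtype=bool)
algo_detained[np.argsort(risk)[-k:]] = True

crime_judge = risk[~judge_detained].sum()  # expected crimes, judges' releases
crime_algo = risk[~algo_detained].sum()    # expected crimes, re-ranked releases
print(f"expected crime reduction at equal jail population: "
      f"{1 - crime_algo / crime_judge:.1%}")
```

Because detaining the k riskiest defendants minimizes the total risk among those released for any fixed jail population, the re-ranked policy can only do as well as or better than the noisy judges in this toy model; the study’s simulations make the analogous comparison on real case data.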
Adopting such a tool will likely require determined advocacy for enabling legislation, as many judges may feel threatened by the algorithm. Of course, the algorithm occasionally produces faulty predictions, so a judge is still needed to make the final release decision. The algorithm should not displace judges from their jobs; rather, it should help them reach fair, unbiased decisions.
Implementing a policy that integrates machine learning into pre-trial bail decisions would increase the welfare of all citizens. We would see reductions in every category of crime while also significantly reducing the percentage of African-Americans and Hispanics in jail.