Automating Justice

Figure: Diagram showing causes of measured frequency of driving at night

AI tools that help police and courts make decisions about detention, probation, and sentencing have “serious shortcomings,” an AI industry consortium warns.

What’s new: The Partnership on AI examined the use of risk-assessment software in criminal justice throughout the U.S. In a new report, it outlines 10 requirements of such systems that are “largely unfulfilled” in the current crop. The organization's requirements include:

  • Statistical bias must be measured and mitigated (one way to measure it is sketched after this list).
  • Predictions must be easy to interpret.
  • Tools must include confidence estimates with their predictions.
  • Designs, architectures, and training data must be open to review.
  • Output must be reproducible to enable meaningful challenges.
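To make the first requirement concrete, here is a minimal sketch of one common way statistical bias is measured: comparing false positive rates across demographic groups. The metric choice, the false_positive_rate_by_group helper, and the toy data are illustrative assumptions, not details taken from the Partnership on AI report.

```python
# Illustrative sketch (assumptions, not from the report): measure one form of
# statistical bias by comparing false positive rates across groups.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_high_risk, reoffended) tuples.
    Returns {group: rate of 'high risk' predictions among people who did not reoffend}."""
    fp = defaultdict(int)   # flagged high risk but did not reoffend
    neg = defaultdict(int)  # did not reoffend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if predicted_high_risk:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Toy data: (group, tool flagged as high risk, person actually reoffended)
toy_records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

rates = false_positive_rate_by_group(toy_records)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # e.g. {'A': 0.33, 'B': 0.67}
print(disparity)  # the kind of gap a reviewer would flag for mitigation
```

A gap like the one printed above is the sort of disparity the report asks builders to measure, and then mitigate, before a tool informs decisions about real defendants.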

Behind the news: U.S. authorities increasingly use, and in some cases mandate the use of, automated systems. The aim is to reduce costs and increase rigor in decision-making. Yet such tools have been shown to produce invalid or biased results, and they’re often operated without oversight by people who have little technical training.

We’re thinking: The government’s reliance on predictive software doesn’t stop with criminal justice. It’s used in child welfare, education, public health, and elsewhere, and it has come up short time and time again. Such tools need to be evaluated with far more technical rigor. The new guidelines are a good place to start.
