Unfair Outcomes Destroy Trust

What could cause widespread backlash against AI?

Doctor holding candy and kid dressed as a ghost on a weighing scale

Will AI that discriminates based on race, gender, or economic status undermine the public’s confidence in the technology?

The fear: Seduced by the promise of cost savings and data-driven decision making, organizations will deploy biased systems that end up doing real-world damage. Systems incorporating biased algorithms or trained on biased data will misdiagnose medical patients, bar consumers from loans or insurance, deny parole to reformed convicts, or grant it to unrepentant ones.

Behind the worries: Biased implementations have sparked public backlash as organizations, both private and public, figure out what AI can and can’t do, and how to use it properly.

  • The UK recently abandoned an algorithm designed to streamline visa applications after human rights activists sued. The plaintiffs charged that the model discriminated against people from countries with large non-white populations.
  • Financial regulators in New York last year launched an investigation into the algorithm behind Apple’s credit card. Users reported that women had received lower credit limits than men with comparable credit ratings.
  • The Los Angeles Police Department adopted systems designed to forecast crimes, but it stopped using one and promised to revamp another after determining that they were flawed. Some people identified as high-risk offenders, for instance, had no apparent history of violent crime.

How scared should you be: Many organizations are attracted by AI’s promises to cut costs and streamline operations, but they may not be equipped to vet systems adequately. The biased systems that have made headlines are just the tip of the iceberg, according to Cathy O’Neil, author of the book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Further reports of systems prone to unfair outcomes are bound to emerge.

What to do: AI systems won’t enjoy broad public trust until we demonstrate clearly that they perform well and pose minimal risk of unintended consequences. Much work remains to be done to establish guidelines and systematically audit systems for accuracy, reliability, and fairness.
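The article doesn’t prescribe a particular audit procedure. As a minimal sketch of what a fairness check might involve, the Python snippet below computes two widely used gap metrics, demographic parity and equal opportunity, on a toy set of model decisions. The function names and the toy data are illustrative assumptions, not anything specified in the piece.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    y_pred: array of 0/1 model decisions (e.g., loan approvals)
    group:  array of 0/1 group-membership labels
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

if __name__ == "__main__":
    # Toy data: a deliberately skewed "model" that approves group 1 more often.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)
    y_true = rng.integers(0, 2, size=1000)
    y_pred = (rng.random(1000) < np.where(group == 1, 0.7, 0.4)).astype(int)

    print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```

A large gap on either metric would flag a system like those described above for closer human review before deployment.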
