Image: Screen captures of the AI Incident Database

A new database tracks failures of automated systems, including machine learning models.

What’s new: The Partnership on AI, a nonprofit consortium of businesses and institutions, launched the AI Incident Database, a searchable collection of reports on the technology’s missteps. Examples include a gender-biased recruiting system, a worrisome recommender algorithm for children, and face recognition that led to wrongful arrests.

How it works: Users can submit descriptions of incidents based on media reports. Editors determine whether to include a given report as a new incident or an addition to a previously reported one.

  • The database currently includes 1,174 unique articles covering 77 incidents, project lead Sean McGregor told The Batch.
  • Users can query the archive using keywords and narrow searches by story source, author, and submitter (see the sketch after this list).
  • The database’s definition of AI includes machine learning as well as symbolic systems and deterministic algorithms, such as the flight control system that contributed to deadly crashes of two Boeing 737 Max aircraft.
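To make the search model above concrete, here is a minimal Python sketch of keyword queries narrowed by source, author, and submitter. The record fields, the `Report` class, and the `search` function are hypothetical simplifications for illustration, not the AI Incident Database's actual schema or API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified record; the real database's schema differs.
@dataclass
class Report:
    incident_id: int   # incidents group one or more reports
    title: str
    text: str
    source: str        # publication that ran the story
    author: str
    submitter: str     # person who submitted the report to the database

def search(reports: list[Report],
           keyword: str,
           source: Optional[str] = None,
           author: Optional[str] = None,
           submitter: Optional[str] = None) -> list[Report]:
    """Keyword search narrowed by optional source, author, and submitter filters."""
    keyword = keyword.lower()
    results = []
    for r in reports:
        if keyword not in (r.title + " " + r.text).lower():
            continue
        if source and r.source != source:
            continue
        if author and r.author != author:
            continue
        if submitter and r.submitter != submitter:
            continue
        results.append(r)
    return results

# Example query: face-recognition incidents from a given (made-up) outlet.
reports = [
    Report(1, "Face recognition leads to wrongful arrest",
           "Police relied on a false face recognition match...",
           "Example News", "A. Reporter", "j.doe"),
]
print(search(reports, "face recognition", source="Example News"))
```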

Behind the news: Some independent researchers maintain similar lists of AI misfires. Those efforts, however, are neither as comprehensive nor as easy to search.

Why it matters: AI failures can cause real harm. To avoid them, we need to learn from past mistakes.

We’re thinking: Incident reports are a well-established tool in industries like aviation and cybersecurity. Keeping track of which systems failed, and how and when they did, is just as crucial in AI. The Partnership on AI’s vetting process should help ensure that incident reports represent genuine problems rather than cherry-picked cases in which AI made a headline-grabbing mistake on a single input example.
