That Kid Looks Like a Criminal

Argentina's Conarc face recognition database contained children's personal info.

Security cameras with face recognition inside a building in Argentina

In Argentina, a municipal face recognition system could misidentify children as suspected lawbreakers.

What’s new: Authorities in Buenos Aires are scanning subway riders’ faces to find offenders in a database of suspects — but the system mixes criminal records with personal information about minors, according to Human Rights Watch. The report follows a lawsuit against the city filed by civil rights activists earlier this year.

How it works: The system uses two databases. The first, called Conarc, contains details about people who have outstanding arrest warrants, including names, ages, and national ID numbers. It matches these records against faces in a second database that contains pictures of Argentine citizens, and it alerts police when it recognizes a suspect in the crowd. (A minimal sketch of this match-and-alert loop follows the list below.)

  • Human Rights Watch found 166 minors listed in Conarc, despite international rules protecting the privacy of juveniles. Most were 16 or 17 years old, but some were as young as one year old. The group also found errors, such as children listed multiple times under different ID numbers, and questionable charges, including a three-year-old wanted for aggravated robbery.
  • Last year, the United Nations concluded that Conarc holds information about dozens of children in violation of the international Convention on the Rights of the Child, which Argentina ratified in 1990. Government officials denied that Conarc contains information about children but admitted that it contains errors.
  • Officials credit the system with helping to arrest nearly 10,000 fugitives in the city. In roughly one month last year, it alerted police to 595 suspects, five of whom were misidentified. One person, who was not in the criminal database but shared the same name as a suspect in an armed robbery case, was arrested and held for six days.
  • The system is based on technology from NTechLab, a Russian AI company with customers in 20 countries. The company claims a 98 percent accuracy rate.
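
To make the two-database design concrete, here is a minimal sketch in Python of how such a match-and-alert loop might work. All names, values, thresholds, and the embedding representation are illustrative assumptions; neither the Human Rights Watch report nor NTechLab describes the actual implementation.

```python
import numpy as np

# Hypothetical sketch of a two-database face-matching loop.
# Everything here (names, IDs, threshold, embedding size) is assumed
# for illustration, not taken from the Buenos Aires system.

MATCH_THRESHOLD = 0.6  # assumed similarity score above which police are alerted

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Database 1 (Conarc-like): outstanding-warrant records keyed by national ID.
warrants = {
    "12345678": {"name": "Example Suspect", "age": 34, "charge": "robbery"},
}

# Database 2: face embeddings for citizens, keyed by the same national ID.
# A real system would compute these with a face-recognition model;
# here they are random vectors so the sketch runs standalone.
rng = np.random.default_rng(0)
citizen_embeddings = {"12345678": rng.normal(size=128)}

def check_frame(face_embedding: np.ndarray) -> list[dict]:
    """Compare one detected face against every wanted person's stored photo embedding."""
    alerts = []
    for national_id, record in warrants.items():
        stored = citizen_embeddings.get(national_id)
        if stored is None:
            continue  # warrant has no reference photo on file
        score = cosine_similarity(face_embedding, stored)
        if score >= MATCH_THRESHOLD:
            alerts.append({"national_id": national_id, "score": score, **record})
    return alerts

# Simulate a camera frame whose face embedding closely resembles a wanted person.
probe = citizen_embeddings["12345678"] + rng.normal(scale=0.05, size=128)
for alert in check_frame(probe):
    print(f"ALERT: {alert['name']} (ID {alert['national_id']}), score={alert['score']:.2f}")
```

Note that a loop like this trusts whatever the warrant database contains: a duplicated ID or an erroneous charge simply produces more alerts, which is why the data-quality problems Human Rights Watch found matter as much as the model's claimed accuracy.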

Why it matters: Buenos Aires’ system relies on a database that appears to violate international human rights law, leaving children vulnerable to false arrests and other harms. Moreover, studies have shown that current face recognition technology is highly unreliable when used on children.

We’re thinking: The issues swirling around this system highlight the importance of clean, compliant data in machine learning applications: An error-free, legally compliant system might still result in false arrests, but it would be defensible on the grounds that it helps bring criminals to justice. The system also illustrates the pressing need to take extra care with machine learning models that bear on social outcomes. People may debate standards of justice, but they should be able to trust that models apply those standards fairly.
