Face recognition algorithms have come under scrutiny for misidentifying individuals. A U.S. government agency tested over 1,000 of them to see which are the most reliable.

What’s new: The National Institute of Standards and Technology (NIST) released the latest results of its ongoing Face Recognition Vendor Test. Several algorithms showed marked improvement over the previous round.

How it works: More than 300 developers submitted 1,014 algorithms to at least one of four tests. The test datasets included mugshots of adults, visa photos, and images of child exploitation.

  • The verification test evaluated one-to-one face recognition like that used by smartphones for face-ID security, customs officials to match travelers with passports, and law enforcement agencies to identify victims in photos. Top performers included entries by China’s SenseTime, Netherlands-based VisionLabs, and the open-source project InsightFace.
  • The identification test evaluated one-to-many algorithms such as those used by closed-circuit surveillance systems that find flagged individuals in crowds of people. Top performers included those from SenseTime, Japan’s NEC, and CloudWalk, a spin-out from the Chinese Academy of Sciences.
  • A test for face morphing evaluated how well an algorithm could detect processing that aims to fool security systems by blending faces. Top performers included entries by Portugal’s University of Coimbra and Germany’s Darmstadt University of Applied Sciences.
  • The agency also rated algorithms that assess image quality for face recognition with respect to factors like lighting and angle. Algorithms from U.S.-based Rank One and Russia-based Tevian performed best.
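The distinction between the verification and identification tests above comes down to how a face embedding is compared: against one enrolled template (1:1) or against a whole gallery (1:N). Here is a minimal, hypothetical sketch of both modes using cosine similarity over precomputed embeddings; the threshold value and vectors are illustrative, not drawn from any tested system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.8):
    """One-to-one verification: does the probe match this single enrolled face?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many identification: index of the best gallery match, or None."""
    scores = [cosine_similarity(probe, g) for g in gallery]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# Toy embeddings (real systems use vectors from a trained face-recognition model).
probe = np.array([1.0, 0.0])
gallery = [np.array([0.0, 1.0]), np.array([1.0, 0.1])]

print(verify(probe, gallery[1]))    # probe closely matches this template
print(identify(probe, gallery))     # best match is gallery index 1
```

In practice, the 1:N setting is harder: with large galleries, even a low per-comparison false-match rate compounds into frequent spurious hits, which is why NIST reports the two tasks separately.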

Behind the news: NIST has benchmarked progress in face recognition since 2000. The first test evaluated five companies on a single government-sponsored image database. In 2018, thanks to deep learning, more than 30 developers beat a high score set in 2013.

Why it matters: Top-scoring vendors including Clearview AI, NtechLab, and SenseTime have been plagued by complaints that their products are inaccurate, prone to abuse, and threatening to individual liberty. These evaluations highlight progress toward more reliable algorithms, which may help win over critics.

We’re thinking: Companies that make face recognition systems need to undertake rigorous, periodic auditing. The NIST tests are a great start, and we need to go further still. For instance, Clearview AI founder Hoan Ton-That called his company's high score on the NIST one-to-one task an “unmistakable validation” after widespread critiques of the company’s unproven accuracy and lack of transparency. Yet Clearview AI didn’t participate in the test that evaluated an algorithm’s ability to pick out an individual from a large collection of photos — the heart of its appeal to law enforcement.
