What if AI-enabled monitoring isn’t just for dictators and despots?
The fear: Under the pretext of maintaining law and order, even countries founded on a commitment to individual rights allow police to take advantage of smart-city infrastructure and smart-home devices. The ability to spy on citizens is rife with moral hazards and opens the door to authoritarian control.
Horror stories: Law enforcement agencies worldwide have found AI-driven surveillance irresistible. Reports of deals between police and vendors portend further invasive practices to come.
- In the U.S., thousands of state and local police officers have used Clearview AI to identify faces without obtaining permission from their superiors — or from the people whose photos were used to train the system.
- Flock Safety, a U.S. maker of license plate readers, offers access to a nationwide network of cameras. Over 400 police agencies had signed on as of late 2019.
- A London face recognition system draws on cameras throughout the city to alert nearby police officers when it identifies a person of interest.
- Police in India allegedly have used face recognition to target protesters against a controversial citizenship law. Legal inquiries have raised questions about the system’s accuracy.
Panopticon now? Most Americans believe that, in the hands of law enforcement, face recognition will make society safer. Yet such systems are notoriously prone to misuse, inaccuracy, and bias. Several U.S. cities and states have passed laws that restrict or ban police use of face recognition, and others are considering similar legislation. The European Parliament recently passed a nonbinding ban on the practice.
Facing the fear: Society should guarantee basic rights to privacy. That said, the impulse to ban face recognition carries its own danger: if democracies withdraw from the field, they cede AI development to repressive regimes and risk a proliferation of systems built to serve authoritarian ends. Instead, elected leaders should establish rules to ensure that such systems are transparent, auditable, explainable, and secure.