Algorithms Control the Capital: The 29 Algorithms Used by Washington, D.C.

Series of slides showing different algorithms used by Washington D.C. to streamline their operations

A new report offers a rare peek into the use of automated decision-making tools by the government of a major city.

What’s new: Municipal agencies in the U.S. capital of Washington, D.C., use at least 29 algorithms to streamline their operations, according to a study by the Electronic Privacy Information Center. The authors found references to the models in public records and internal documents. In many cases, their roles were not widely known.

How it works: The algorithms span a variety of municipal functions.

  • Criminal justice: The Department of Youth and Rehabilitative Services developed a model that estimates a juvenile offender’s likelihood of committing new crimes using data such as school attendance and prior court cases. The police department uses systems including ShotSpotter, which locates the source of gunfire acoustically (a sketch of the underlying technique appears after this list), and TrapWire, which finds patterns in reports of suspicious behavior.
  • Economic opportunity: The Department of Human Services and Department of Employment Services use FraudCaster and Case Tracker to score recipients of welfare and unemployment insurance on their risk of committing fraud.
  • Education: The University of the District of Columbia identifies students at risk of failing to graduate using a tool created by the Education Advisory Board, a for-profit consultancy.
  • Health: The city’s Office of Veterans Affairs developed a model that scores the risk of death for Covid-19 patients.
  • Housing: The city’s Department of Buildings scores a building’s risk of code violations, based on data such as its age and history of past violations, via an algorithm developed by Georgetown University (a minimal sketch of this style of risk scoring appears below).
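
Many of the systems above are risk scorers: models that map a handful of tabular features to the probability of an outcome. The report doesn’t disclose how any of them work internally, so the following is only a minimal sketch, assuming a logistic regression and hypothetical features modeled on the building-inspection example (building age, count of past violations):

```python
# Minimal sketch of a tabular risk scorer. The features, training data,
# and choice of logistic regression are all illustrative assumptions;
# the report does not describe the actual D.C. models.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [building_age_years, prior_violation_count]
X = np.array([[5, 0], [12, 1], [40, 3], [75, 6], [90, 2], [30, 0]])
y = np.array([0, 0, 1, 1, 1, 0])  # 1 = later cited for a code violation

model = LogisticRegression().fit(X, y)

# Score a new building: a probability used to rank inspections,
# not to issue citations automatically.
new_building = np.array([[60, 4]])
risk = model.predict_proba(new_building)[0, 1]
print(f"Estimated violation risk: {risk:.2f}")
```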
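
For the gunfire-location item, the general technique behind acoustic systems such as ShotSpotter is multilateration from time differences of arrival (TDOA) across a network of microphones. The sensor layout, solver, and numbers below are illustrative assumptions, not ShotSpotter’s actual implementation:

```python
# Minimal TDOA multilateration sketch: recover a sound source's position
# from differences in arrival times at known sensor locations.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s

sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
source_true = np.array([120.0, 300.0])  # hypothetical gunshot location

# Simulated noiseless arrival times (emission time is unknown to the solver).
arrivals = np.linalg.norm(sensors - source_true, axis=1) / C

def residuals(p):
    # Arrival-time differences relative to sensor 0 should match
    # distance differences divided by the speed of sound.
    d = np.linalg.norm(sensors - p, axis=1)
    return (arrivals - arrivals[0]) - (d - d[0]) / C

estimate = least_squares(residuals, x0=np.array([250.0, 250.0])).x
print(f"Estimated source location: {estimate}")  # ~[120, 300]
```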

Behind the news: Washington, D.C., lawmakers are considering a bill that would require regular audits of decision-making algorithms used by organizations above a certain size, as well as by organizations that hold data on city residents. It would also enable the Office of the Attorney General and others to sue over violations.

Yes, but: While the authors discovered many automated decision-making systems in use, many more may be hidden from view. Several city agencies didn’t respond to requests for public records, citing confidentiality and trade-secret agreements with vendors. Wired reported that New York City police were using more algorithms than the department had disclosed to officials as required by a 2020 law. And public registries in Amsterdam and Helsinki list only 10 of the 30 algorithms those cities have disclosed in other documents.

Why it matters: AI is reaching into a wide variety of government functions that have a direct and substantial impact on citizens’ lives. While the technology can help officials make decisions that are more efficient and sometimes more fair, their constituents need to know how their government operates and have a right to hold algorithms (and the officials who employ them) accountable for their decisions. Governments should supply this information as a matter of course, rather than forcing independent researchers to discover it.

We’re thinking: The term “smart city” shouldn’t just describe the algorithms used to govern the municipality. It should also describe a population that’s informed about how they’re being used.
