Regulators are forcing Meta (formerly Facebook) to display certain advertisements more evenly across its membership.

What’s new: The United States government compelled Meta to revise its ad-placement system to deliver ads for housing to members regardless of their age, gender, or ethnicity. The company is voluntarily rebalancing its distribution of ads for credit and employment as well.

How it’s changed: The new algorithm will control ads that appear to U.S. users of Facebook, Instagram, and Messenger. Meta will roll it out by December.

  • The company now allows advertisers to define the eligible audience for an ad based on variables like location, interests, and online activity but not age, sex, race, or ethnicity.
  • For any given ad, the algorithm periodically monitors and corrects for differences between the actual and eligible audiences. Say the system serves a housing ad intended for park-going birdwatchers in New York City. If it ends up being viewed only by park-going, bird-watching Latina women in their 40s, the algorithm retargets it in a way that’s more likely to reach people of other ethnic backgrounds, genders, and ages.
  • It assigns a heavier weight to members of the eligible audience who have viewed more ads in the last 30 days. The settlement doesn’t explain the reason for this requirement, which appears to encourage the system to show more ads to more-active users.
  • Meta will report the algorithm’s results every four months to the U.S. Justice Department and a third party nominated by the company and approved by the government.
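The monitor-and-correct loop described above might be sketched as follows. This is a hypothetical illustration only: Meta’s actual variance-reduction system, its attribute names, and its weighting formula are not public, so the functions, fields, and the simple activity multiplier below are all assumptions.

```python
from collections import Counter

def audience_gap(eligible, actual, attribute):
    """Compare the demographic mix of an ad's actual viewers against its
    eligible audience along one attribute (e.g. gender or age bracket).
    Returns, per attribute value, the actual share minus the eligible
    share: negative means that group is under-represented among viewers."""
    def shares(people):
        counts = Counter(p[attribute] for p in people)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    eligible_shares = shares(eligible)
    actual_shares = shares(actual)
    return {k: actual_shares.get(k, 0.0) - s
            for k, s in eligible_shares.items()}

def retarget_weights(eligible, actual, attribute, ads_viewed_30d=None):
    """Boost future delivery toward under-represented groups, with extra
    weight for members who viewed more ads in the last 30 days (a stand-in
    for the settlement's recency requirement)."""
    gap = audience_gap(eligible, actual, attribute)
    weights = {}
    for person in eligible:
        # Boost only groups the ad is currently under-serving.
        base = max(0.0, -gap[person[attribute]])
        # Hypothetical activity multiplier: more ads viewed, heavier weight.
        activity = 1.0 + (ads_viewed_30d or {}).get(person["id"], 0) / 30
        weights[person["id"]] = base * activity
    return weights
```

For example, if the eligible audience is half women and half men but only women have seen the ad so far, `retarget_weights` assigns positive weight to the men and zero to the women, nudging the next round of delivery back toward the eligible mix.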

Behind the news: The update is part of a settlement between Meta and the U.S. Justice Department, which found that the company had violated laws against discrimination in housing. Meta also agreed to terminate a different system that was intended to enforce a more even distribution of ads but was found to have the opposite effect. It will pay a fine of $115,054, the maximum penalty under the law.

Why it matters: AI technology is largely unregulated in the U.S. But that doesn’t mean the federal government has no jurisdiction over it, especially when it migrates into highly regulated sectors. Facebook once hosted ads for credit cards that excluded younger people, job postings that excluded women, and housing ads that excluded people by race. Regulators who oversee civil rights didn’t settle for mere changes in Meta’s advertising guidelines and ultimately forced it to alter the algorithm itself.

We’re thinking: Meta’s periodic reports will provide some evidence of whether regulation can mitigate algorithmic bias. Still, we wonder whether regulators can craft effective rules. Data can be sliced in many ways, and it can be very difficult to detect bias against a particular group within a slice. For example, a system that appears not to discriminate by gender on average may still do so within a particular type of town, or when handling a certain sort of housing. Given the slow progress of legislation and the rapid development of technology, we worry that regulators will always trail the companies they regulate.
