Tech companies generally try to be (or to appear to be) socially responsible. Would some rather let AI’s negative impacts slide?

The fear: Companies with the know-how to apply AI at scale dominate the information economy. This dominance gives them both the means and the incentive to release harmful products and services, jettison internal checks and balances, buy or lie their way out of regulations, and ignore the trail of damage in their wake.

Horror stories: When you move fast and break things, things get broken.

  • Documents leaked by a former Facebook product manager have prompted scrutiny from the company’s oversight board and government officials. The leaks reveal, among other things, that the social network’s XCheck program exempts many politicians, celebrities, and journalists from its content moderation policies, enabling them to spread misinformation and incitements to violence with impunity.
  • Google parted acrimoniously with Timnit Gebru, former co-lead of its Ethical AI team, after she produced research critical of the company’s large language models. Soon afterward, it fired her colleague Margaret Mitchell. Observers have said the company’s ethical AI effort is “in limbo.”
  • Tesla, whose self-driving features have been implicated in numerous accidents, is recruiting beta testers for its next-generation software. Applicants must allow the company to monitor their driving, and the company says it accepts only drivers who demonstrate perfect safety, yet Twitter posts revealed that it accepted a low-scoring investor. The U.S. National Highway Traffic Safety Administration has opened an investigation into the role of the company’s Autopilot software in 11 crashes with parked emergency vehicles.

Is a corporate dystopia inevitable? So far, most government moves to regulate AI have been more bark than bite.

  • The European Union proposed tiers of restriction based on how much risk an algorithm poses to society. But critics say the proposal defines risk too narrowly and lacks mechanisms for holding companies accountable.
  • U.S. lawmakers have summoned Big Tech executives to testify on their companies’ roles in numerous controversies, but regulations have gained little traction — possibly due to the vast sums of money the companies spend on lobbying.

Facing the fear: Some tech giants have demonstrated an inability to restrain themselves, strengthening arguments in favor of regulating AI. At the same time, AI companies themselves must publicly define acceptable impacts and establish regular independent audits to detect and mitigate harm. Ultimately, AI practitioners who build, deploy, and distribute the technology are responsible for ensuring that their work brings a substantial net benefit.
