Microsoft Cuts Ethics Squad

Microsoft eliminates its Ethics & Society unit.

(Image: the scale of justice with one side broken)

Microsoft laid off an AI ethics team while charging ahead on products powered by OpenAI.

What’s new: On March 6, the tech giant dissolved the Ethics & Society unit in its Cognition group, which researches and builds AI services, amid ongoing cutbacks that have affected 10,000 workers to date, the tech-news outlet Platformer reported. Microsoft kept its Office of Responsible AI, which formulates ethical rules and principles, and related teams that advise senior leadership on responsible AI and help implement responsible AI tools in the cloud.

How it works: Ethics & Society was charged with ensuring that AI products and services were deployed according to Microsoft’s stated principles. At its 2020 peak, it comprised around 30 employees, including engineers, designers, and philosophers. Some former members spoke with Platformer anonymously.

  • As business priorities shifted toward pushing AI products into production, the company moved Ethics & Society staff to other teams, leaving seven members prior to the recent layoffs.
  • Former team members said that the prior round of downsizing had made it difficult for them to do their jobs.
  • They also said that other teams often would not listen to their feedback. For example, Ethics & Society warned that Bing Image Creator, a text-to-image generator based on OpenAI’s DALL·E 2, would harm the earning potential of human artists and result in negative press. Microsoft launched the model without having implemented proposed strategies to mitigate the risk.

Behind the news: Microsoft isn’t the only major AI player to have shifted its approach to AI governance.

  • Earlier this month, OpenAI began providing access to GPT-4 without supplying information on its model architecture or dataset, a major departure from its founding ideal of openness. “In a few years, it’s going to be completely obvious to everyone that open-sourcing AI is just not wise,” OpenAI’s chief scientist Ilya Sutskever told The Verge.
  • In early 2021, Google restructured its responsible AI efforts, placing software engineer Marian Croak at the helm. The shuffling followed the acrimonious departures of two prominent ethics researchers.

Why it matters: Responsible AI remains as important as ever. The current generative AI gold rush is boosting companies’ motivation to profit from the latest developments or, at least, stave off potential disruption. It also incentivizes AI developers to fast-track generative models into production.

We’re thinking: Ethical oversight is indispensable. At the same time, recent developments are creating massive value, and companies must balance the potential risks against potential benefits. Despite fears that opening up models like Stable Diffusion would lead to irresponsible use (which, indeed, has occurred), the benefits to date appear to be vastly greater than the harms.
