Italy blocked ChatGPT after determining that it violates European Union laws.
What’s new: Italy’s data-protection authority, the Guarantor for the Protection of Personal Data, temporarily suspended access to ChatGPT, saying that OpenAI allows children under 13 to use the chatbot, distributes misinformation about people, and collects personal data to train its models without a proper legal basis.
The ruling: The Guarantor, which enforces EU data-protection rules in Italy, banned ChatGPT, citing four concerns:
- The chatbot doesn’t prevent children under 13 from using it.
- The chatbot can provide inaccurate information about individuals.
- OpenAI did not inform individuals that it was collecting data that could be used to identify them.
- OpenAI did not meet the EU privacy law’s requirements for a legal basis to collect personal data.
- The Guarantor gave OpenAI 20 days to respond with a plan to address these issues. Failure to comply would have resulted in a fine of up to 4 percent of OpenAI’s annual global revenue, which the company expects to exceed $200 million.
- On April 8, OpenAI submitted a plan, the details of which were not disclosed. OpenAI CEO Sam Altman previously tweeted that he believed ChatGPT complies with EU law.
Behind the news: Privacy regulators in Europe and the United States have their eyes on AI.
- Earlier this year, the same Italian regulator deemed the AI chatbot Replika a threat to emotionally vulnerable individuals. The regulator ordered the developer to cease processing Italian users’ data or face a €20 million fine.
- In 2022, British, French, Greek, and Italian authorities issued concurrent fines to Clearview AI, which provides face recognition services to law enforcement agencies, and ordered the company to delete personal data describing their citizens.
- In 2022, U.S. regulators ruled that Kurbo, a weight-loss app, violated the Children’s Online Privacy Protection Act, a 1998 law that restricts collection of personal data from children under 13. The developer paid a fine, destroyed data, and disabled the app.
Yes, but: Not everyone in the Italian government agrees with the ruling. Matteo Salvini, one of the country’s two deputy prime ministers, criticized it as excessive.
Why it matters: A national (or international) ban on ChatGPT could have major implications for large language models, which rely on sprawling datasets and routinely output misinformation. It could also harm European innovation by blocking access to the latest technology. And it’s not just Italy: French, German, and Irish regulators are reportedly considering similar actions. Belgian regulators went a step further and called for an EU-wide discussion of data violations related to ChatGPT.
We’re thinking: Some of the regulators’ concerns may stem from a lack of transparency into how OpenAI trains its models. A more open approach might alleviate some fears.