The breakout text generator faces resistance — even within the AI community.

What's new: Organizations including the International Conference on Machine Learning (ICML) and the New York City Department of Education banned OpenAI's ChatGPT amid debate over the implications of its use and the limitations of its output.

What happened: Professional societies, schools, and social media sites alike reacted to the potential of ChatGPT and other large language models to produce falsehoods, socially biased information, and other undesirable output in the guise of reasonable-sounding text.

  • The organizers of the upcoming ICML in Honolulu prohibited paper submissions that include text generated by large language models such as ChatGPT, unless the text is included for analytical purposes. They cited unresolved questions including the novelty and ownership of generated material. However, the conference will allow papers with text that has been polished using AI-powered services like Grammarly. The organizers plan to re-evaluate the policy ahead of the 2024 meeting in Vienna.
  • New York City blocked access to ChatGPT in the city's 1,851 public schools, which serve over one million students. Officials expressed concern that the tool enables plagiarism and generates falsehoods.
  • Social media app WeChat prohibited a mini-program that allowed users to access ChatGPT from within the app.
  • In December, question-and-answer website Stack Overflow banned ChatGPT-generated content due to the model's propensity for outputting incorrect answers to technical questions.

Behind the news: Researchers have raised red flags about these issues ever since large language models first showed a propensity to generate plausible but unreliable text. The latest efforts seek to identify generated output.

  • OpenAI aims to embed cryptographic tags into ChatGPT’s output to watermark the text. The organization told TechCrunch it’s working on other approaches to identify the model’s output.
  • Princeton University student Edward Tian built GPTZero, an app that determines whether a passage's author was human or machine by examining the randomness of its words and sentences. Humans are more likely to use unpredictable words and to write sentences in dissimilar styles (a simple version of this test is sketched below the list).
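
GPTZero's code isn't public, but the two signals described above correspond roughly to what researchers call perplexity (how predictable each word is to a language model) and burstiness (how much that predictability varies from sentence to sentence). The Python sketch below illustrates the general idea using GPT-2 from the Hugging Face transformers library as the scoring model. It's a minimal illustration, not GPTZero's actual method.

```python
# Minimal sketch of perplexity/burstiness scoring, assuming
# `pip install torch transformers`. Low perplexity and low burstiness
# are weak hints that a passage was machine-generated.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """Standard deviation of per-sentence perplexity; human writing
    tends to swing more widely than model output."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

sample = [
    "The quick brown fox jumps over the lazy dog.",
    "Meanwhile, entropy gnaws quietly at every tidy sentence we write.",
]
print(f"perplexity={perplexity(' '.join(sample)):.1f}",
      f"burstiness={burstiness(sample):.1f}")
```

Real detectors combine many such signals with calibrated thresholds; a single score like this is easy to fool.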

Yes, but: Users may find ways to circumvent safeguards. For instance, OpenAI’s watermarking proposal can be defeated by lightly rewording the text, MIT computer science professor Srini Devadas told TechCrunch. The result could be an ongoing cat-and-mouse struggle between users and model-makers.
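
To see why rewording is so effective, consider a toy version of statistical watermarking. In schemes discussed in the research community (OpenAI has not disclosed its own design), the generator secretly favors a pseudorandom "green" subset of the vocabulary at each step, and a detector holding the same key checks what fraction of tokens landed in that subset. The key, names, and hashing rule below are purely illustrative.

```python
# Toy green-list watermark detector. A cooperating generator (not shown)
# would bias sampling toward tokens that `is_green` accepts; unwatermarked
# text scores near 0.5, heavily watermarked text well above it.
import hashlib

KEY = b"secret-watermark-key"  # hypothetical key shared by generator and detector

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to half the vocabulary, seeded by
    the key and the preceding token."""
    digest = hashlib.sha256(KEY + prev_token.encode() + b"|" + token.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of adjacent token pairs whose second token is green."""
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

# Each synonym swap or reordering re-rolls the coin at that position,
# dragging the fraction back toward 0.5 -- which is why light rewording
# can erase the statistical signal Devadas describes.
print(green_fraction("the model wrote this sentence about watermarks".split()))
```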

Why it matters: Many observers worry that generated text will disrupt society. Even OpenAI CEO Sam Altman tweeted that the model was currently unsuitable for real-world tasks because of its deficiencies in truth-telling. Bans are an understandable, if regrettable, reaction by authorities who feel threatened by the increasingly sophisticated abilities of large language models.

We're thinking: Math teachers once protested the presence of calculators in the classroom. Since then, they’ve learned to integrate these tools into their lessons. We urge authorities to take a similarly forward-looking approach to assistance from AI.
