What the AI Community Can Learn from the Galactica Incident

Meta released and quickly withdrew a demo of its Galactica language model. Here's what went wrong and how we can avoid it.

[Image: Screen capture of the Galactica demo]

Dear friends,

Last week, Facebook’s parent company Meta released a demo of Galactica, a large language model trained on 48 million scientific articles. Two days later, amid controversy regarding the model’s potential to generate false or misleading articles, the company withdrew it.

Is Galactica dangerous? How should researchers, as well as the broader AI community, approach such developments?

Michael Black, director of the Max Planck Institute for Intelligent Systems, raised concern about Galactica’s potential for harm by generating seemingly authoritative scientific papers that are factually bonkers. Meta chief AI scientist Yann LeCun vigorously defended the model. He pointed out that, despite worries that people might misuse large language models (LLMs), such misuse largely hasn’t happened.

At the risk of offending both sides, let me share my take.

  • I support the Galactica researchers. Their scientific work on large language models is technically interesting and impressive. Their model does well on tasks such as mathematical reasoning and answering multiple-choice questions.
  • When a technology shows potential to cause significant harm, it’s important to carefully assess the likely benefits against the likely harm. One problem with the way Galactica was released is that we don’t yet have a robust framework for understanding the balance of benefit versus harm for this model, and different people have very different opinions. Reading through the paper, I see potential for exciting use cases. I also see risk of large-scale fakery that could cause harm. While I support the technical work, I would prefer that the demo had been released only after a more thorough assessment.
  • Prior to a careful analysis of benefit versus harm, I would not recommend “move fast and break things” as a recipe for releasing any product with potential for significant harm. I would love to see more extensive work — perhaps through limited-access trials — that validates the product’s utility to third parties, explores and develops ways to ameliorate harm, and documents this thinking clearly.
  • That said, I would also love to see less vitriol toward researchers who are trying to do their best. People will differ on the best path forward, and all of us sometimes will be right and sometimes will be wrong. I believe the Meta researchers are trying to do their best. Whether we agree or disagree with their approach, I hope we’ll treat them with respect.
  • Part of the disagreement likely stemmed from widespread distrust of Meta, where a focus on maximizing user engagement has contributed to social polarization and spread of disinformation. If a lesser-known or more-trusted company had released Galactica, I imagine that it would have had more leeway. For instance, Stability AI released its Stable Diffusion text-to-image model with few safeguards. The company faced little criticism, and so far the model has spurred great creativity and little harm. I don’t think this is necessarily an unfair way to approach companies. A company’s track record does matter. Considering the comparatively large resources big companies can use to drive widespread awareness and adoption of new products, it’s reasonable to hold them to a higher standard.
  • The authors withdrew the model shortly after the controversy arose. Kudos to them for acting in good faith and responding quickly to the community’s concerns.

When it comes to building language models that generate more factually accurate output, the technical path forward is not yet clear. LLMs are trained to maximize the likelihood of text in their training set. This leads them to generate text that sounds plausible — but an LLM that makes up facts can also perform well on this training objective.
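To make that objective concrete, here is a minimal sketch (my own illustration in PyTorch, not Galactica’s actual training code) of the standard next-token maximum-likelihood loss. Notice that the loss only rewards assigning high probability to whatever text appears in the training set; nothing in it measures whether that text is true.

```python
# Minimal sketch of the next-token maximum-likelihood objective used to train
# LLMs. Illustration only -- not Galactica's training code.
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the model's predicted distribution at each
    position and the token that actually comes next in the training text."""
    # logits: (batch, seq_len, vocab_size); tokens: (batch, seq_len)
    pred = logits[:, :-1, :]   # predictions at positions 0 .. n-2
    target = tokens[:, 1:]     # the "true" next tokens at positions 1 .. n-1
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))

# Dummy tensors: the loss is low whenever the model assigns high probability
# to the training text -- it says nothing about whether that text is factual.
vocab_size, batch, seq_len = 100, 2, 8
logits = torch.randn(batch, seq_len, vocab_size)
tokens = torch.randint(0, vocab_size, (batch, seq_len))
print(next_token_loss(logits, tokens))
```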

Some engineers (including Galactica’s team) have proposed that LLMs could be an alternative to search engines. For example, instead of using search to find the distance to the Moon, why not pose the question as a prompt to a language model and let it answer? Unfortunately, the maximum-likelihood objective is not well aligned with the goal of providing factually accurate information. To make LLMs better at conveying facts, research remains to be done on alternative training objectives or, more likely, model architectures that optimize for factual accuracy rather than likelihood.
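To picture that proposal, here is a hypothetical sketch using the Hugging Face transformers library with a small generic model (gpt2 as a stand-in; this is not how Galactica was served). The question goes in as a prompt, and whatever text the model generates comes back as the "answer", accurate or not.

```python
# Hypothetical illustration of the "LLM as a search engine" idea: pose a
# factual question as a prompt and take whatever the model generates as the
# answer. Assumes the Hugging Face transformers library; gpt2 is a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Q: What is the distance from the Earth to the Moon?\nA:"
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
# The output will sound plausible, but because the model was trained only to
# maximize likelihood, the number it gives may or may not be accurate.
```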

Whether a tool like Galactica will be more helpful or harmful to society is not yet clear to me. There will be bumps in the rollout of any powerful technology. The AI community has produced racist algorithms, toxic chatbots, and other problematic systems, and each was a chance to learn from the incident and get better. Let’s continue to work together as a community, get through the bumps with respect and support for one another, and keep building software that helps people.

Keep learning!

Andrew
