OpenAI raised eyebrows in February when it announced — and withheld — the full version of its groundbreaking language model, GPT-2. Six months later, the company has re-examined the decision.

What happened: The for-profit research organization issued a follow-up report that details the results of GPT-2’s “staged release.” Fearing that the full version would be used to generate convincing misinformation, OpenAI initially released a limited version (124 million parameters). That release was followed by larger versions culminating, so far, in a 774 million-parameter model made available along with the report.

What the report says: Releasing the model in stages while allowing certain partners full access helped advance an understanding of both benign and malignant uses, the organization says. OpenAI remains convinced that staged release is “likely to be a key foundation of responsible publication of AI.”

  • Research by OpenAI’s partners confirms that GPT-2’s output can be as credible as New York Times articles and very difficult to detect.
  • The report found no evidence that the GPT-2 versions released so far have been used maliciously.
  • OpenAI is aware of five other groups that replicated the complete model. Some of them also opted for a staged release.
  • It’s working with several universities to study the social and policy implications of larger GPT-2 models.
  • OpenAI plans to release the complete version (1.5 billion parameters) in the coming months, provided the smaller releases don’t lead to adverse consequences.

Behind the news: OpenAI’s decision to withhold the complete GPT-2 rankled many in the AI community; without the full model and detailed training information, the results are difficult to reproduce. Even so, the organization’s reticence didn’t stop a pair of Brown University graduate students, neither of whom had a background in language modeling, from replicating GPT-2 in August.

We’re thinking: The AI community thrives on shared information. Yet the potential for powerful AI models to wreak havoc on the general welfare suggests some sort of gatekeeping mechanism is in order. Staged release may be just the device that white hats need to stay one step ahead of malefactors.
