Meta’s effort to make a large language model available to researchers ended with its escape into the wild.
What’s new: Soon after Meta started accepting applications for developer access to LLaMA, a family of trained large language models, a user on the social network 4chan posted a downloadable BitTorrent link to the entire package, The Verge reported.
How it works: LLaMA includes transformer-based models with 7 billion, 13 billion, 33 billion, and 65 billion parameters. The models were trained on Common Crawl, GitHub, Wikipedia, Project Gutenberg, ArXiv, and Stack Exchange. Tested on 20 zero- and few-shot tasks, LLaMA outperformed GPT-3 on all tasks, Chinchilla on all but one, and PaLM on all but two.
Escape: On February 24, Meta offered LLaMA to researchers at institutions, government agencies, and nongovernmental organizations who requested access and agreed to a noncommercial license. A week later, a 4chan user leaked it.
- Users promptly hosted the model on sites including GitHub and Hugging Face. Meta filed takedown requests.
- Users adapted it to widely available hardware. One ran the 65 billion-parameter model on a single Nvidia A100. Computer scientist Simon Willison implemented the 13 billion-parameter version on a MacBook Pro M2 with 64 gigabytes of RAM.
- Alfredo Ortega, a software engineer and user of 4chan, which is infamous for hosting objectionable content, implemented the 13 billion-parameter LLaMA as a Discord chatbot. Users have prompted the program (nicknamed BasedGPT) to output hate speech. Ortega noted that he had downloaded the model legitimately.
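Why do these models fit on such modest hardware? Back-of-envelope weight-storage arithmetic gives a rough answer (a sketch only; real memory use also includes activations, the attention cache, and runtime overhead, and quantization formats vary):

```python
# Approximate storage needed for model weights alone at different
# numeric precisions. Illustrative arithmetic, not Meta's figures.

def weights_gb(params_billions: float, bits_per_param: int) -> float:
    """Weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for size in (7, 13, 33, 65):
    fp16 = weights_gb(size, 16)  # 16-bit floating point, a common default
    q4 = weights_gb(size, 4)     # 4-bit quantized, as in community ports
    print(f"{size}B params: {fp16:.1f} GB at fp16, {q4:.1f} GB at 4-bit")
```

At 16-bit precision the 13 billion-parameter weights occupy roughly 26 gigabytes, comfortably inside a 64-gigabyte laptop, while 4-bit quantization shrinks even the 65 billion-parameter model to roughly 33 gigabytes.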
Behind the news: Efforts to release similar models are ongoing even as the AI community continues to debate the potential risks and rewards. Those who favor limited access cite safety concerns and believe that institutions are best positioned to study models and learn to control them. Proponents of open access argue that free inquiry offers the best route to innovation and social benefit.
Why it matters: LLaMA gives experimenters, small developers, and members of the general public unprecedented access to cutting-edge AI. Such access likely will enable valuable scientific, practical, and commercial experimentation. While the risk of harm via automated generation of effective spam, scams, propaganda, disinformation, and other undesirable outputs is real, open source projects like BLOOM and GPT-NeoX-20B have led to significantly more benefit than harm — so far.
We’re thinking: Making models like LLaMA widely available is important for further research. Ironically, bad actors will use the leaked LLaMA, while conscientious researchers will respect Meta’s copyright and abide by the rules. For instance, Stanford researchers announced Alpaca, a LLaMA variant fine-tuned to follow instructions, but the team is holding back the trained weights while it discusses the matter with Meta. Weighing the potential benefits and harms of restricted release against those of openness, we believe openness creates more benefit all around.