Beijing Academy of Artificial Intelligence

5 Posts

Illustration of how different data split strategies partition the labelled data

Fine-Tune Your Fine-Tuning: New method optimizes training for few-shot NLP models.

Let’s say you have a pretrained language model and a small amount of data to fine-tune it to answer yes-or-no questions. Should you fine-tune it to classify yes/no or to fill in missing words (both viable approaches that are likely to yield different results)?
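As a rough illustration of the two formulations, here is a minimal sketch using Hugging Face Transformers; the bert-base-uncased backbone, the prompt pattern, and the "yes"/"no" label words are illustrative assumptions, not the method the article describes.

```python
# Minimal sketch of two fine-tuning formulations for yes/no questions.
# Assumptions (not from the article): bert-base-uncased backbone, the prompt
# pattern below, and "yes"/"no" as the words that stand in for the labels.
import torch
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
question = "Is the Eiffel Tower in Paris?"

# Formulation 1: bolt a new classification head onto the encoder. The head
# starts untrained; fine-tuning on the small labelled set gives its two
# logits their yes/no meaning.
clf = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
enc = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    clf_logits = clf(**enc).logits  # shape (1, 2): one logit per label

# Formulation 2: recast the task as filling in a masked word, so the model's
# pretrained masked-language-model head is reused instead of a fresh head.
mlm = AutoModelForMaskedLM.from_pretrained(model_name)
prompt = f"{question} Answer: {tokenizer.mask_token}."
enc = tokenizer(prompt, return_tensors="pt")
mask_pos = (enc["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    token_logits = mlm(**enc).logits[0, mask_pos]

# Score the two label words at the masked position.
scores = {w: token_logits[tokenizer.convert_tokens_to_ids(w)].item() for w in ("yes", "no")}
print("classification logits:", clf_logits, "cloze scores:", scores)
```

In an actual fine-tuning run, the cloze formulation trains the model to place the correct word at the mask, reusing the pretrained head rather than learning a new one, which is one common argument for why the two setups can behave differently when labelled data is scarce.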
Yoav Shoham

Yoav Shoham: Language models that reason

I believe that natural language processing in 2022 will re-embrace symbolic reasoning, harmonizing it with the statistical operation of modern neural networks. Let me explain what I mean by this.
Illustration of giant Christmas tree in a town plaza

Trillions of Parameters: Are AI models with trillions of parameters the new normal?

The trend toward ever-larger models crossed the threshold from immense to ginormous. Google kicked off 2021 with Switch Transformer, the first published work to exceed a trillion parameters, weighing in at 1.6 trillion.
Two images showing RETRO Architecture and Gopher (280B) vs State of the Art

Large Language Models Shrink: Gopher and RETRO prove lean language models can push boundaries.

DeepMind released three papers that push the boundaries of large language models and examine their issues.
CogView home website

Large Language Models for Chinese: A brief overview of the WuDao family

Researchers unveiled competition for GPT-3, the reigning large language model. According to Synced Review, the Beijing Academy of Artificial Intelligence, a research collective funded by the Chinese government, described four models collectively known as Wu Dao.
