Angry emoji over dozens of Facebook like buttons
Facebook

Facebook Likes Extreme Content: Facebook execs rejected changes to reduce polarization.

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal. The social network’s own researchers determined that its AI software promotes divisive content.
Generative BST example and graph
Facebook

Big Bot Makes Small Talk: A research summary of Facebook's Generative BST chatbot

Facebook recently rolled out its entry in the World’s Biggest Chatbot sweepstakes. In keeping with the company’s social-networking dominance, the bot is designed to excel at chitchat on any subject.
False information about Covid-19 on Facebook posts
Facebook

Information Warfare Covid Edition: Facebook used humans and AI to spot false Covid claims.

Facebook’s AI can’t spot Covid-19 disinformation on its own. But with human help, it can slow the spread. Facebook uses a combination of humans and neural nets to crack down on messages that make false claims about Covid-19, which may have deadly consequences.
Data and graphs related to batch normalization
Facebook

Outside the Norm: Batch normalization contributes to neural network accuracy.

Batch normalization is a technique that normalizes layer outputs to accelerate neural network training. But new research shows that it has other effects that may be more important.
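The core operation is simple to sketch. The snippet below is a minimal, illustrative NumPy version of batch normalization (not the researchers' code): each feature is normalized across the batch, then scaled and shifted.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature of a batch to zero mean and unit variance,
    then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Layer outputs with arbitrary mean and scale...
activations = np.random.randn(64, 10) * 5 + 3
# ...come out with roughly zero mean and unit variance per feature.
normalized = batch_norm(activations)
```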
Info about radioactive data
Facebook

X Marks the Dataset: Radioactive data helps trace a model's training corpus.


Which dataset was used to train a given model? A new method makes it possible to see traces of the training corpus in a model’s output.
Association for the Advancement of Artificial Intelligence conference in New York
Facebook

Meeting of the Minds: Deep learning pioneers discuss the state of AI.

Geoffrey Hinton, Yoshua Bengio, and Yann LeCun presented their latest thinking about deep learning’s limitations and how to overcome them.
Information related to the kNN-LM algorithm
Facebook

Helpful Neighbors: A research summary of the kNN-LM language model

School teachers may not like to hear this, but sometimes you get the best answer by peeking at your neighbor’s paper. A new language model framework peeks at the training data for context when making a prediction.
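The gist of the approach can be sketched in a few lines. This toy version (all data and names are illustrative, not the paper's implementation) blends the base model's next-token distribution with a distribution built from the nearest stored training contexts:

```python
import numpy as np

def knn_lm_probs(query, datastore_keys, datastore_tokens, base_probs,
                 k=2, lam=0.5, temperature=1.0):
    """Interpolate a base LM's next-token distribution with one derived
    from the k nearest (context vector -> next token) pairs in a datastore."""
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # Softmax over negative distances weights closer neighbors more.
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    knn_probs = np.zeros_like(base_probs)
    for w, idx in zip(weights, nearest):
        knn_probs[datastore_tokens[idx]] += w
    return lam * knn_probs + (1 - lam) * base_probs

keys = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])  # stored context vectors
next_tokens = np.array([0, 1, 2])                      # token that followed each
base = np.array([0.25, 0.25, 0.5])                     # base model's distribution
probs = knn_lm_probs(np.zeros(2), keys, next_tokens, base, k=2)
```

Because the nearest stored context was followed by token 0, that token's probability rises above what the base model alone assigned.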
Math equations represented as trees
Facebook

Neural Networks Study Math: A sequence-to-sequence model for solving math problems.

In tasks that involve generating natural language, neural networks often map an input sequence of words to an output sequence of words. Facebook researchers used a similar technique on sequences of mathematical symbols, training a model to map math problems to math solutions.
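To feed equations to a sequence model, an expression tree can be serialized into a flat token stream, for instance in prefix order. A toy sketch of that serialization step (illustrative only, not the researchers' code):

```python
def to_prefix(node):
    """Flatten a binary expression tree, given as nested
    (operator, left, right) tuples with leaf operands, into a
    prefix-order token list a sequence model can consume."""
    if isinstance(node, tuple):
        op, left, right = node
        return [op] + to_prefix(left) + to_prefix(right)
    return [str(node)]

# The expression (x + 2) * 3 as a tree...
tree = ("*", ("+", "x", 2), 3)
# ...becomes the token sequence ['*', '+', 'x', '2', '3'].
tokens = to_prefix(tree)
```

Prefix order is unambiguous without parentheses, which keeps the model's input and output vocabularies small.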
AI-generated face
Facebook

Facebook vs Deepfakes: How Facebook cracked down on deepfakes

Facebook declared this week that it will remove deepfake videos it deems deliberately misleading, on the heels of a crackdown on counterfeit profiles that used AI-generated faces.
Yann LeCun
Facebook

Yann LeCun — Learning From Observation: The power of self-supervised learning

How is it that many people learn to drive a car fairly safely in 20 hours of practice, while current imitation learning algorithms take hundreds of thousands of hours, and reinforcement learning algorithms take millions of hours? Clearly we’re missing something big.
Illustration of a fireplace with "Happy holidays" cards in English, Spanish and French
Facebook

Natural Language Processing Models Get Literate: Why 2019 was a breakthrough year for NLP

Earlier language models powered by Word2Vec and GloVe embeddings yielded confused chatbots, grammar tools with middle-school reading comprehension, and not-half-bad translations. The latest generation is so good, some people consider it dangerous.
Illustration of three identical reindeer
Facebook

Deepfakes Go Mainstream: Why 2019 was a big year for deepfakes

Society awakened to the delight, threat, and sheer weirdness of realistic images and other media dreamed up by computers.
Sesame Street characters together
Facebook

Inside AI’s Muppet Empire: Why Are So Many NLP Models Named After Muppets?

As language models show increasing power, a parallel trend has received less notice: The vogue for naming models after characters in the children’s TV show Sesame Street.
Information about a model for multi-document summarization and question answering
Facebook

Bigger Corpora, Better Answers: Using knowledge graphs to improve question answering NLP

Models that summarize documents and answer questions work pretty well with limited source material, but they can slip into incoherence when they draw from a sizeable corpus. Recent work addresses this problem.
Word vectors
Facebook

Finer Tuning: Fine-tuning word embeddings for specialized domains

A word-embedding model typically learns vector representations from a large, general-purpose corpus like Google News. But to make the resulting vectors useful in a specialized domain, they must be fine-tuned on a smaller, domain-specific dataset. Researchers offer a more accurate method.
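As a rough sketch of the fine-tuning idea (the vectors, words, and update rule here are illustrative, not the researchers' method): starting from pretrained vectors, each pass over the domain corpus nudges a word's vector toward the words it co-occurs with in the specialized text.

```python
import numpy as np

def fine_tune(embeddings, corpus, lr=0.1, window=2, epochs=5):
    """Nudge pretrained word vectors toward the mean of their
    in-window neighbors in a small domain-specific corpus."""
    for _ in range(epochs):
        for sentence in corpus:
            for i, word in enumerate(sentence):
                context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
                if not context:
                    continue
                target = np.mean([embeddings[c] for c in context], axis=0)
                embeddings[word] += lr * (target - embeddings[word])
    return embeddings

# In general text "cell" and "battery" sit far apart; in a
# battery-engineering corpus they co-occur and should converge.
vecs = {"cell": np.array([1.0, 0.0]), "battery": np.array([0.0, 1.0])}
before = np.linalg.norm(vecs["cell"] - vecs["battery"])
fine_tune(vecs, [["cell", "battery"]])
after = np.linalg.norm(vecs["cell"] - vecs["battery"])
```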
