Process of labeling doctors' notes
Machine Learning Research

Cracking Open Doctors’ Notes

Weak supervision is the practice of assigning likely labels to unlabeled data using a variety of simple labeling functions; supervised methods can then be trained on the newly labeled data.
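The idea can be sketched in a few lines. This is a minimal illustration, not the paper's method: the keyword-based labeling functions and toy notes below are hypothetical, and the votes are combined by simple majority.

```python
# Weak supervision sketch: several noisy labeling functions vote on each
# example, and a majority vote produces a likely label (or none).
ABSTAIN, NEG, POS = -1, 0, 1

def lf_mentions_fracture(note):
    return POS if "fracture" in note.lower() else ABSTAIN

def lf_mentions_normal(note):
    return NEG if "normal" in note.lower() else ABSTAIN

def lf_mentions_pain(note):
    return POS if "pain" in note.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_fracture, lf_mentions_normal, lf_mentions_pain]

def weak_label(note):
    """Combine labeling-function votes by majority; return None on no consensus."""
    votes = [lf(note) for lf in LABELING_FUNCTIONS if lf(note) != ABSTAIN]
    if not votes:
        return None
    pos, neg = votes.count(POS), votes.count(NEG)
    if pos == neg:
        return None
    return POS if pos > neg else NEG

notes = [
    "X-ray shows a hairline fracture; patient reports pain.",
    "Scan appears normal, no abnormalities.",
    "Follow-up scheduled.",
]
labels = [weak_label(n) for n in notes]
# Notes with a clear majority get a likely label; the rest stay unlabeled.
```

Once most examples carry a likely label, an ordinary supervised classifier can be trained on them as if they were hand-annotated.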
2 min read
An illustration of filter pruning
Machine Learning Research

High Accuracy, Low Compute

As neural networks have become more accurate, they’ve also ballooned in size and computational cost. That makes many state-of-the-art models impractical to run on phones and other smaller, less powerful devices.
2 min read
Proposed model for abstractive summarization of a scientific article
Machine Learning Research

Two Steps to Better Summaries

Summarizing a document in original words is a longstanding problem for natural language processing. Researchers recently took a step toward human-level performance in this task, known as abstractive summarization (as opposed to extractive summarization, which copies passages from the source text).
1 min read
Pipeline for identifying sentences containing evidence of SDIs and SSIs
Machine Learning Research

Hidden Findings Revealed

Drugs undergo rigorous experimentation and clinical trials to gain regulatory approval, while dietary supplements get less scrutiny. Even when a drug study reveals an interaction with supplements, the discovery tends to receive little attention.
2 min read
DeepPrivacy results on a diverse set of images
Machine Learning Research

Anonymous Faces

A number of countries restrict commercial use of personal data without consent unless the data is fully anonymized. A new paper proposes a way to anonymize images of faces, purportedly without degrading their usefulness in applications that rely on face recognition.
2 min read
GPT-2 text generator
Machine Learning Research

Putting Text Generators on a Leash

Despite dramatic recent progress, natural language generation remains an iffy proposition. Even users of the muscular GPT-2 text generator have to press the button a number of times to get sensible output. But researchers are figuring out how to exert greater control over generated text.
2 min read
Proportion of examples covered by number of annotators (sorted by number of annotations)
Machine Learning Research

AI Knows Who Labeled the Data

The latest language models are great at answering questions about a given text passage. However, these models are also powerful enough to recognize an individual writer’s style, which can clue them in to the right answers. New research measures such annotator bias in several data sets.
2 min read
Google text-to-speech logo
Machine Learning Research

The Long and Short of It

Not long ago, text-to-speech systems could read only a sentence at a time, and they were ranked according to their ability to accomplish that limited task. Now that they can orate entire books, we need new benchmarks.
1 min read
Average Relative WER improvement as a function of the amount of training data
Machine Learning Research

Speech Recognition With an Accent

Models that achieve state-of-the-art performance in automatic speech recognition (ASR) often perform poorly on nonstandard speech. New research offers methods to make ASR more useful to users with heavy accents or speech impairment.
2 min read
Continuous Planner for One-Shot Imitation Learning
Machine Learning Research

Working Through Uncertainty

How to build robots that respond to novel situations? When prior experience is limited, allowing a model to represent its uncertainty can help it explore more avenues to success.
2 min read
Graph related to Language Model Analysis (LAMA)
Machine Learning Research

What Language Models Know

Watson set a high bar for language understanding in 2011, when it famously whipped human competitors in the televised trivia game show Jeopardy! IBM’s special-purpose AI reportedly cost around $1 billion to develop. Research suggests that today’s best language models can accomplish similar tasks right off the shelf.
2 min read
Arcade game
Machine Learning Research

Leveling the Playing Field

Deep reinforcement learning has given machines apparent hegemony in vintage Atari games, but their scores have been hard to compare — with one another or with human performance — because there are no rules governing what machines can and can’t do to win. Researchers aim to change that.
2 min read
Neural Point Based Graphics technique producing a realistic image
Machine Learning Research

Points Paint the Picture

Creating a virtual representation of a scene using traditional polygons and texture maps involves several complex operations, and even neural-network approaches have required manual preprocessing. Researchers propose a new deep-learning pipeline that visualizes scenes with far less fuss.
2 min read
Illustration of Facebook AI Research method to compress neural networks
Machine Learning Research

Honey, I Shrunk the Network!

Deep learning models can be unwieldy and often impractical to run on smaller devices without major modification. Researchers at Facebook AI Research found a way to compress neural networks with minimal sacrifice in accuracy.
2 min read
Calibration plot for ImageNet
Machine Learning Research

Scaling Bayes

Neural networks are good at making predictions, but they’re not so good at estimating how certain they are. If the training data set is small and many different sets of model parameters fit the data well, for instance, the network has no explicit way to represent that ambiguity, which leads to overly confident predictions.
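One common way to quantify such overconfidence, shown here as a minimal sketch rather than this paper's method, is expected calibration error (ECE): bin predictions by stated confidence and compare each bin's average confidence to its actual accuracy. The toy predictions below are illustrative.

```python
# Expected calibration error: a weighted average of |accuracy - confidence|
# over equal-width confidence bins.
def expected_calibration_error(confidences, correct, n_bins=5):
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)   # fraction correct in bin
        conf = sum(confidences[i] for i in idx) / len(idx)  # mean confidence in bin
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# An overconfident model: it claims ~92% confidence but is right only half the time.
confs = [0.95, 0.9, 0.92, 0.88, 0.97, 0.91]
hits  = [1,    0,   1,    0,    1,    0]
print(round(expected_calibration_error(confs, hits), 3))  # prints 0.422
```

A perfectly calibrated model would score 0; large gaps between confidence and accuracy push the score up.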
2 min read
