Princeton University

4 Posts

[Image: Examples of InstaHide scrambling images]

A Privacy Threat Revealed

With access to a trained model, an attacker can use a reconstruction attack to approximate its training data. A method called InstaHide recently won acclaim for promising to make training examples unrecognizable to human eyes while retaining their utility for training. New research, however, shows that InstaHide's scrambled images can be unscrambled, exposing the data they were meant to protect.
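InstaHide's core idea can be sketched in a few lines: blend a private image with randomly chosen public images using random convex (mixup-style) weights, then flip the sign of each pixel at random. The sketch below is a minimal illustration under those assumptions, not the authors' implementation; the function name and pool sizes are invented for the example.

```python
import numpy as np

def instahide_encode(private_img, public_imgs, k=4, rng=None):
    """Sketch of InstaHide-style encoding: mix one private image with
    k-1 randomly chosen public images, then randomly flip pixel signs."""
    rng = np.random.default_rng() if rng is None else rng
    # choose k-1 public images at random
    idx = rng.choice(len(public_imgs), size=k - 1, replace=False)
    imgs = [private_img] + [public_imgs[i] for i in idx]
    # random convex mixing coefficients (non-negative, sum to 1)
    lam = rng.dirichlet(np.ones(k))
    mixed = sum(w * img for w, img in zip(lam, imgs))
    # random per-pixel sign mask; only absolute values are revealed
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)
    return mask * mixed

# usage: encode a 32x32 "private" image against a pool of public images
rng = np.random.default_rng(0)
private_img = rng.random((32, 32))
public_pool = rng.random((10, 32, 32))
encoded = instahide_encode(private_img, public_pool, k=4, rng=rng)
```

Because the mixing weights are convex and the inputs lie in [0, 1), the encoded pixels stay within [-1, 1]; the sign flips are what the reconstruction attack mentioned above had to undo.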
[Image: Data and examples related to IMLE-GAN]

Making GANs More Inclusive

A typical GAN’s output doesn’t necessarily reflect the data distribution of its training set. Instead, GANs are prone to modeling the majority of the training distribution, sometimes ignoring rare attributes — say, faces that represent minority populations.

Periscope Vision

Wouldn’t it be great to see around corners? Deep learning researchers are working on it. A new technique, deep-inverse correlography, interprets reflected light to reveal objects outside the line of sight.
[Image: ImageNet face recognition labels on a picture]

ImageNet Gets a Makeover

Computer scientists are struggling to purge bias from one of AI’s most important datasets. ImageNet’s 14 million photos are a go-to collection for training computer-vision systems, yet their descriptive labels have been rife with derogatory and stereotyped attitudes toward race, gender, and sexuality.
