Princeton University

4 Posts


A Privacy Threat Revealed: How researchers cracked InstaHide for computer vision.

With access to a trained model, an attacker can use a reconstruction attack to approximate its training data. A method called InstaHide recently won acclaim for promising to scramble training images so they're unrecognizable to human eyes while retaining their utility for training.
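For context, InstaHide's published encoding step mixes each private image with several public images and then randomly flips pixel signs. The sketch below is a minimal, assumed rendering of that scheme (function name and weighting via a Dirichlet draw are illustrative choices, not the authors' reference code):

```python
import numpy as np

def instahide_encode(private_img, public_imgs, rng):
    """Hedged sketch of InstaHide-style encoding: mix one private image
    with public images via random convex weights, then apply random
    pixel-wise sign flips so the mixture is unrecognizable to humans."""
    imgs = [private_img] + list(public_imgs)
    # Random convex combination weights (nonnegative, sum to 1).
    w = rng.dirichlet(np.ones(len(imgs)))
    mixed = sum(wi * img for wi, img in zip(w, imgs))
    # Random +/-1 mask per pixel; a network can still train on the result.
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)
    return mask * mixed

rng = np.random.default_rng(0)
private = rng.random((8, 8))
publics = [rng.random((8, 8)) for _ in range(3)]
encoded = instahide_encode(private, publics, rng)
```

The researchers' attack exploits the fact that the sign flips preserve absolute pixel values of the mixture, which is why the scrambling turned out to be reversible.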

Making GANs More Inclusive: A technique to help GANs represent their datasets fairly

A typical GAN’s output doesn’t necessarily reflect the data distribution of its training set. Instead, GANs are prone to modeling the majority of the training distribution, sometimes ignoring rare attributes — say, faces that represent minority populations.

Periscope Vision: Researchers used deep learning to see around corners.

Wouldn’t it be great to see around corners? Deep learning researchers are working on it: a team developed deep-inverse correlography, a technique that interprets reflected light to reveal objects outside the line of sight.

ImageNet Gets a Makeover: The effort to remove bias from ImageNet

Computer scientists are struggling to purge bias from one of AI’s most important datasets. ImageNet’s 14 million photos are a go-to collection for training computer-vision systems, yet their descriptive labels have been rife with derogatory and stereotyped attitudes toward race, gender, and sex.
