University of Wisconsin

2 Posts

A new framework that helps models “unlearn” information selectively and incrementally

Deep Unlearning: AI Researchers Teach Models to Unlearn Data

Privacy advocates want deep learning systems to forget what they've learned.

What's new: Researchers are seeking ways to remove the influence of particular training examples, such as an individual's personal information, from a trained model without affecting its performance, Wired reported.
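The article doesn't detail how the new framework works, but one widely cited route to incremental unlearning is sharded training in the style of SISA (Bourtoule et al.): train separate models on disjoint shards of the data and ensemble their votes, so deleting an example requires retraining only the one shard that contains it. The sketch below is a minimal illustration of that general idea, not the researchers' method; the toy dataset, shard count, and logistic-regression base model are arbitrary choices.

```python
# Minimal SISA-style sharded training sketch (illustrative, not the
# researchers' framework): one model per data shard, majority-vote
# ensemble, and unlearning by retraining only the affected shard.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))                  # toy features
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # toy labels

NUM_SHARDS = 5
shards = np.array_split(rng.permutation(len(X)), NUM_SHARDS)
models = [LogisticRegression().fit(X[idx], y[idx]) for idx in shards]

def predict(x):
    # Majority vote across the shard models.
    votes = np.stack([m.predict(x) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def unlearn(example_idx):
    # Locate the shard holding the example, drop the example, and
    # retrain that shard's model only -- roughly 1/NUM_SHARDS of the
    # cost of retraining from scratch.
    for s, idx in enumerate(shards):
        if example_idx in idx:
            kept = idx[idx != example_idx]
            shards[s] = kept
            models[s] = LogisticRegression().fit(X[kept], y[kept])
            return

unlearn(42)               # forget training example 42
print(predict(X[:3]))     # ensemble still serves predictions
```

The appeal of this family of methods is that forgetting is exact (the deleted example provably never touched the surviving models' weights) while the cost is a fraction of full retraining.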
Examples of InstaHide scrambling images

A Privacy Threat Revealed: How researchers cracked InstaHide for computer vision

With access to a trained model, an attacker can use a reconstruction attack to approximate examples from its training data. A method called InstaHide recently won acclaim for promising to make such examples unrecognizable to human eyes while retaining their utility for training.
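InstaHide's published encoding mixes each private image with several others, mixup-style, and then randomly flips the sign of every pixel. The sketch below illustrates that recipe under simplifying assumptions: pixels scaled to [-1, 1], purely random mixing weights, and no label mixing (the real scheme mixes labels analogously and constrains the private image's coefficient). The function name and parameter values are illustrative.

```python
# A minimal sketch of InstaHide-style encoding (simplified; see caveats
# in the surrounding text): mixup with random convex weights, followed
# by a per-pixel random sign flip.
import numpy as np

rng = np.random.default_rng(0)

def instahide_encode(private_img, other_imgs, k=4):
    # Pixels are assumed scaled to [-1, 1] so sign flips are meaningful.
    picks = [private_img] + [
        other_imgs[i]
        for i in rng.choice(len(other_imgs), k - 1, replace=False)
    ]
    lam = rng.dirichlet(np.ones(k))            # random convex mixing weights
    mixed = sum(w * img for w, img in zip(lam, picks))
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)  # per-pixel sign flip
    return mask * mixed

images = rng.uniform(-1, 1, size=(100, 32, 32, 3))    # toy image batch
encoded = instahide_encode(images[0], images[1:])
```

The attack reportedly exploited, among other weaknesses, the fact that a sign flip preserves each pixel's absolute value, so multiple encodings of the same private image remain similar enough to cluster and then invert.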
