Hiding in Plain Sight

Example of adversarial patches against YOLOv2

Harry Potter’s magical cloak made him invisible to his Hogwarts classmates. Now researchers have created a kind of invisibility shield that hides people from computer vision.

What’s new: The researchers created an adversarial patch that makes people invisible to the YOLOv2 object detection model. Hold the patch in front of your body, and YOLOv2 no longer detects the person holding it. In this video, the befuddled detector seems to scratch its head as researchers pass the patch back and forth.

How they did it: Simen Thys, Wiebe Van Ranst, and Toon Goedemé fed a variety of photos into a YOLOv2 network. They altered the photos to minimize the model’s "objectness" score. The image with the lowest score was an altered picture of people holding colorful umbrellas. This image became the patch: they composited it into photos of people, ran those through the network, and tweaked it further to achieve still lower objectness scores.
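The core idea is an optimization loop that adjusts the patch pixels by gradient descent to drive down the detector's objectness output. The sketch below illustrates that loop in PyTorch; it is not the authors' code, and the `yolo_objectness` and `paste_patch` helpers are hypothetical stand-ins for running YOLOv2 and compositing the patch onto each person.

```python
# Minimal sketch of a patch-optimization loop (assumptions: `yolo_objectness`
# returns YOLOv2's maximum objectness score per image, and `paste_patch`
# composites the patch onto the people in each image).
import torch

def optimize_patch(images, yolo_objectness, paste_patch,
                   patch_size=(3, 300, 300), steps=1000, lr=0.03):
    # Start from random noise; values stay in [0, 1] so the patch is printable.
    patch = torch.rand(patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        patched = paste_patch(images, patch.clamp(0, 1))
        # Loss: the objectness YOLOv2 assigns to the patched images.
        # Minimizing it pushes the detector toward seeing no person at all.
        loss = yolo_objectness(patched).mean()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep the patch a valid image

    return patch.detach()
```

In practice, the optimization also needs terms that keep the patch printable and smooth, but the objectness loss above is the piece that makes the detector overlook the person.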

Why it matters: Much research into adversarial attacks has focused on targets that don’t vary from instance to instance, such as stop signs or traffic lights. Held in front of a person, the new patch blinds YOLOv2 regardless of differences such as clothing, skin color, size, pose, or setting. It can be reproduced by a digital printer, making it practical for real-world attacks.

The catch: The patch works only on YOLOv2. It doesn’t transfer well to different architectures.

What’s next: The researchers contemplate generalizing the patch to confuse other recognizers. They also envision producing adversarial clothing. Perhaps a cloak?
