The Invisibility Sweater: A Breakthrough in Fooling AI Detection Systems


Want to disappear? There's a sweater for that!

Imagine a world where you could slip into a sweater and become invisible—not to human eyes but to artificial intelligence systems that rely on computer vision. A team of researchers from the University of Maryland, in collaboration with Facebook AI, has made this possible with their "invisibility sweater." Designed to fool common person-detection models, this innovation has taken adversarial AI techniques to a new level.

The Science Behind the Invisibility Sweater

At the core of this project is a study focused on adversarial attacks on object detectors. While most previous research has targeted classifiers—AI systems that assign labels to entire images—the University of Maryland team turned their attention to detectors. Detectors are responsible for localizing objects within an image, considering thousands of potential bounding boxes with varying positions and sizes. Fooling these detectors is far more challenging, but not impossible.
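To make that difference concrete, here is a minimal PyTorch-style sketch of the two attack objectives. The function names and tensor shapes are illustrative assumptions, not the researchers' actual code: a classifier attack only needs to push down a single class score, while a detector attack has to suppress the "person" confidence of every candidate box at once.

```python
# Minimal sketch (PyTorch-style) of why detector attacks are harder than
# classifier attacks. The functions and shapes here are assumptions for
# illustration, not the authors' implementation.
import torch

def classifier_attack_loss(logits: torch.Tensor, true_class: int) -> torch.Tensor:
    # A classifier exposes one output to push down: the score of the true label.
    return logits[true_class]

def detector_attack_loss(box_scores: torch.Tensor) -> torch.Tensor:
    # box_scores: (num_boxes,) person confidence for every candidate box.
    # Any single box that still fires reveals the wearer, so the attack
    # has to suppress them all (here, by minimizing the strongest one).
    return box_scores.max()
```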

The Invisibility Sweater at work: AI fails to detect the wearer amidst a crowd. (Image credit: University of Maryland)


How the Sweater Works

The researchers first experimented with creating "universal adversarial patches" that could be applied digitally to obscure objects from detection models. However, they found that printing these patterns onto physical objects, like clothing, introduced complexities. Eventually, this led to the creation of the invisibility sweater—a stylish pullover printed with patterns that confuse common AI detectors.
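As a rough illustration of how such a universal patch can be trained digitally, the sketch below runs gradient descent directly on the patch pixels to suppress the strongest person detection in each training image. The `person_scores` hook (per-box person confidences from a differentiable detector) and the `paste_patch` helper are hypothetical placeholders, not the team's implementation.

```python
# A minimal sketch of digitally training a universal adversarial patch,
# under assumptions: `person_scores(images)` returns per-box person
# confidences of shape (batch, num_boxes) from some differentiable
# detector, and `paste_patch(images, patch)` overlays the patch on each
# image. Illustrative only, not the authors' code.
import torch

def train_patch(loader, person_scores, paste_patch, steps=1000, lr=0.01):
    patch = torch.rand(3, 200, 200, requires_grad=True)  # random RGB start
    opt = torch.optim.Adam([patch], lr=lr)
    for _, images in zip(range(steps), loader):
        patched = paste_patch(images, patch)              # apply the patch digitally
        # Suppress the strongest person detection in every image.
        loss = person_scores(patched).max(dim=1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)                           # keep valid pixel values
    return patch.detach()
```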

The back view of the invisibility sweater showing the adversarial pattern. (Image credit: University of Maryland)


In a series of tests, including on "paper dolls" and posters, the team found that their patterned clothing significantly degraded the performance of person-detection models. For instance, in one experiment with YOLOv2, a popular object detection model, the sweater cut the detector's person-detection rate roughly in half.
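The kind of before/after measurement behind a number like that can be sketched with a stock detector. In the snippet below, torchvision's Faster R-CNN stands in for YOLOv2 purely for convenience (it is not the model used in the study), and `plain_photos` / `sweater_photos` are hypothetical image sets: the detection rate is simply the fraction of photos in which at least one confident "person" box appears.

```python
# Rough evaluation sketch: how often does a detector still find a person
# in photos with and without the adversarial pattern? Faster R-CNN is a
# stand-in for YOLOv2; the image folders are hypothetical.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detection_rate(images, person_label=1, threshold=0.5):
    hits = 0
    with torch.no_grad():
        for img in images:                # each img: (3, H, W) float tensor in [0, 1]
            pred = model([img])[0]
            person = (pred["labels"] == person_label) & (pred["scores"] > threshold)
            hits += int(person.any())
    return hits / len(images)

# Compare detection_rate(plain_photos) with detection_rate(sweater_photos)
# to get the kind of before/after drop reported in the study.
```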

A group test showcasing the effect of wearing the invisibility sweater. (Image credit: University of Maryland)

Challenges and Real-World Applications

While the invisibility sweater is an impressive proof of concept, the researchers acknowledge that it's not foolproof. When targeting object detectors, adversarial patterns must fool every potential bounding box, not just a single classifier output. This makes attacks on detectors inherently less reliable, though still effective in many scenarios.

A comparison of feature maps showing the impact of the adversarial pattern on AI detection accuracy. (Image credit: University of Maryland)

The Future of Adversarial AI Clothing

The success of this project raises intriguing questions about the future of adversarial AI and its real-world applications. While the current sweaters are still experimental, the principles they demonstrate could lead to the development of more sophisticated counter-surveillance tools and clothing lines. However, for those looking to truly "disappear," there is still room for improvement in reliability.

For more technical details and insights, see the University of Maryland's project page and read the full research paper on arXiv.

Subscribe And Stay In The Know

Subscribe to our blog today to receive valuable articles packed with actionable advice designed to enhance your online presence—for free!
