Want to disappear? There's a sweater for that!
Imagine a world where you could slip into a sweater and become invisible—not to human eyes but to artificial intelligence systems that rely on computer vision. A team of researchers from the University of Maryland, in collaboration with Facebook AI, has made this possible with their "invisibility sweater." Designed to fool common person-detection models, this innovation has taken adversarial AI techniques to a new level.
At the core of this project is a study focused on adversarial attacks on object detectors. While most previous research has targeted classifiers—AI systems that assign labels to entire images—the University of Maryland team turned their attention to detectors. Detectors are responsible for localizing objects within an image, considering thousands of potential bounding boxes with varying positions and sizes. Fooling these detectors is far more challenging, but not impossible.
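The asymmetry between the two attack targets can be made concrete with a toy sketch (illustrative only, not the researchers' code): a classifier emits one "person" score per image, while a detector emits a score for each of thousands of candidate boxes, and an attack succeeds only if every one of them is suppressed.

```python
# Toy illustration of why detectors are harder to fool than classifiers.
# Scores and thresholds here are hypothetical.

def classifier_fooled(person_score, threshold=0.5):
    # A classifier emits a single score for the whole image;
    # pushing that one number below threshold is enough.
    return person_score < threshold

def detector_fooled(box_scores, threshold=0.5):
    # A detector emits a "person" score for every candidate box;
    # the attack succeeds only if ALL of them fall below threshold.
    return all(score < threshold for score in box_scores)

print(classifier_fooled(0.3))            # True: one score suppressed
print(detector_fooled([0.3, 0.2, 0.7]))  # False: one surviving box defeats it
print(detector_fooled([0.3, 0.2, 0.1]))  # True: every box suppressed
```

A single surviving bounding box is enough for the detector to flag the person, which is exactly why adversarial patterns for detectors must work against every candidate box at once.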
The researchers first experimented with creating "universal adversarial patches" that could be applied digitally to obscure objects from detection models. However, they found that printing these patterns onto physical objects, like clothing, introduced complexities. Eventually, this led to the creation of the invisibility sweater—a stylish pullover printed with patterns that confuse common AI detectors.
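The core idea behind training such a patch can be sketched in a few lines. This is a deliberately tiny stand-in, not the team's method: the "detector" below is just a sigmoid over a dot product, where the real attack back-propagates through a deep detector across many images and physical transformations. The mechanism is the same, though: run gradient descent on the patch pixels to drive the person score down, while keeping pixel values in a printable range.

```python
# Minimal sketch of universal-patch training against a toy "detector".
# The weights, dimensions, and learning rate are all made up.
import math
import random

random.seed(0)
DIM = 8
w = [random.uniform(-1, 1) for _ in range(DIM)]  # frozen toy detector weights

def person_score(patch):
    # Toy detector: sigmoid of a dot product between weights and patch pixels.
    z = sum(wi * pi for wi, pi in zip(w, patch))
    return 1.0 / (1.0 + math.exp(-z))

patch = [0.5] * DIM                   # start from a uniform grey patch
score_before = person_score(patch)

lr = 0.5
for _ in range(200):
    s = person_score(patch)
    grad = [s * (1 - s) * wi for wi in w]  # d(score)/d(pixel) via sigmoid'
    # Descend the person score, clamping pixels to the printable range [0, 1].
    patch = [min(1.0, max(0.0, p - lr * g)) for p, g in zip(patch, grad)]

score_after = person_score(patch)
print(score_before, score_after)  # the optimized patch scores lower
```

The clamp to [0, 1] hints at the physical-world complexity the researchers ran into: a printed patch can only contain colors a printer can reproduce, and it must keep working under real lighting, folds, and viewing angles.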
In a series of tests, including experiments with "paper dolls" and printed posters, the team found that their patterned clothing significantly degraded the performance of person-detection models. For instance, in one experiment with YOLOv2—a popular object detection algorithm—the sweater cut the rate at which the model detected the wearer roughly in half.
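A degradation like that is typically measured as a drop in detection rate: the fraction of frames in which the detector fires on the person. The sketch below is hypothetical—the per-frame scores are invented for illustration—but it shows the kind of before/after comparison involved.

```python
# Hypothetical evaluation sketch; the scores below are made up, not the
# researchers' measurements.

def detection_rate(frame_scores, threshold=0.5):
    # Fraction of frames in which the detector's person score
    # clears the confidence threshold.
    hits = sum(1 for s in frame_scores if s >= threshold)
    return hits / len(frame_scores)

plain   = [0.9, 0.8, 0.95, 0.7, 0.85, 0.9]  # wearing plain clothing
sweater = [0.6, 0.3, 0.55, 0.2, 0.4, 0.7]   # wearing the adversarial sweater

print(detection_rate(plain))    # 1.0
print(detection_rate(sweater))  # 0.5
```

Note that even in this toy example the sweater wearer is still detected half the time, which matches the researchers' caveat that attacks on detectors are effective but far from perfectly reliable.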
While the invisibility sweater is an impressive proof of concept, the researchers acknowledge that it's not foolproof. When targeting object detectors, adversarial patterns must fool every potential bounding box, not just a single classifier output. This makes attacks on detectors inherently less reliable, though still effective in many scenarios.
The success of this project raises intriguing questions about the future of adversarial AI and its real-world applications. While the current sweaters are still experimental, the principles they demonstrate could lead to the development of more sophisticated counter-surveillance tools and clothing lines. However, for those looking to truly "disappear," there is still room for improvement in reliability.
For more technical details and insights, visit the University of Maryland's project page and read the full research paper on arXiv.
Subscribe to our blog today to receive valuable articles packed with actionable advice designed to enhance your online presence—for free!