Invisibility Cloak

This stylish pullover is a great way to stay warm this winter, whether in the office or on the go. It features a stay-dry microfleece lining, a modern fit, and adversarial patterns that evade most common object detectors. In this demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective.

The paper “Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors” by
Zuxuan Wu, Ser-Nam Lim, Larry S. Davis, and Tom Goldstein is available at https://arxiv.org/abs/1910.14667

Partially funded by Facebook AI

Adversarial Clothes

Italian start-up ‘Capable’, founded by Rachele Didero, Federica Busani and Giovanni Maria Conti, provides beautiful, if rather pricey, adversarial clothes.

https://www.capable.design/shop

Image-Scaling Attacks in Machine Learning

https://scaling-attacks.net/

Before an image can be funneled through a neural network it needs to be scaled down. Resolutions like 3000×2000 pixels are too large to be processed in computer vision. Current networks operate at 128×128 px or at similar resolutions, mostly below 300×300 px.

Researchers at TU Braunschweig found that this downscaling step offers an opportunity for adversarial pixels. Introduced into the larger original at strategic points, they manipulate the scaled-down version of the image.
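A minimal sketch of why this works, assuming the simplest case of a nearest-neighbor downscaler: scaling only ever samples a sparse grid of source pixels, so an attacker who overwrites just those pixels controls the downscaled image completely. (The real attacks are subtler and also optimize the modified pixels to stay visually inconspicuous; the function names below are illustrative, not from the researchers’ code.)

```python
import numpy as np

def nearest_neighbor_sources(src_len, dst_len):
    # Indices of the source pixels a simple nearest-neighbor
    # downscaler samples along one axis.
    return np.arange(dst_len) * src_len // dst_len

def downscale(img, dst_shape):
    # Toy nearest-neighbor downscaling: pick one source pixel
    # per destination pixel.
    rows = nearest_neighbor_sources(img.shape[0], dst_shape[0])
    cols = nearest_neighbor_sources(img.shape[1], dst_shape[1])
    return img[np.ix_(rows, cols)]

def scaling_attack(source, target):
    # Overwrite only the sparsely sampled pixels of `source` so
    # that downscaling the result yields `target` exactly.
    attacked = source.copy()
    rows = nearest_neighbor_sources(source.shape[0], target.shape[0])
    cols = nearest_neighbor_sources(source.shape[1], target.shape[1])
    attacked[np.ix_(rows, cols)] = target
    return attacked

# A 2000x3000 "photo" and a 128x128 "payload" the model should see instead.
rng = np.random.default_rng(0)
source = rng.integers(0, 256, (2000, 3000), dtype=np.uint8)
target = rng.integers(0, 256, (128, 128), dtype=np.uint8)

attacked = scaling_attack(source, target)
assert np.array_equal(downscale(attacked, (128, 128)), target)
# Fraction of full-resolution pixels that were changed (well under 1%):
changed = np.mean(attacked != source)
```

At full resolution the attacked image is almost indistinguishable from the original, yet the network only ever sees the embedded payload.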

Universal Adversarials

These are adversarial attacks on deep neural networks where a single universal adversarial perturbation can fool a model on an entire set of affected inputs. The authors report evasion rates of around 90% on undefended ImageNet-pretrained networks. The method, by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker and Emil C. Lupu, is described in a paper here: https://arxiv.org/abs/1911.10364

For more check this github repository: https://github.com/kenny-co/sgd-uap-torch#universal-adversarial-perturbations-on-pytorch
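The “universal” part is simple to picture: one fixed perturbation, kept within a small pixel budget, is added to every image. A minimal sketch, assuming images in [0, 1] and an L-infinity budget (the function name and the budget of 10/255 are illustrative choices, not taken from the paper):

```python
import numpy as np

def apply_uap(images, delta, eps=10 / 255):
    # Apply one universal perturbation `delta` to a whole batch:
    # clip it to the L-infinity budget `eps`, add it to every image,
    # and keep the result in the valid pixel range [0, 1].
    delta = np.clip(delta, -eps, eps)
    return np.clip(images + delta, 0.0, 1.0)

# One perturbation, many images: the same delta is reused for the
# entire batch instead of being computed per input.
rng = np.random.default_rng(1)
batch = rng.random((4, 3, 224, 224))            # 4 images, CHW, values in [0, 1]
delta = rng.uniform(-1.0, 1.0, (3, 224, 224))   # oversized candidate perturbation
adv = apply_uap(batch, delta)
```

In the actual method the perturbation is optimized with SGD over many training images (see the repository above); this sketch only shows how a single delta is constrained and applied.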

This is how they look for different convolutional networks:

Declassifier

https://thephotographersgallery.org.uk/declassifier

In a way, this project is very close to what we do at adversarial.io. Philipp Schmitt’s Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.

Within Schmitt’s original photographs, certain objects are identified. These regions are overlaid with images that show the same kind of object and belong to the COCO dataset on which the neural network was originally trained. “If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it.” (The Photographers Gallery)

It takes a while to grasp what’s going on, since this project leans to the more artsy side. I loved playing around with it.

When you click on the images, a certificate for the original photographic contribution is issued, identifying the original contributor (whose participation gets lost within the dataset).

Certificate

Debunking AI Myths

AImyths.org does just that: it looks into several claims about AI and then corrects or debunks them step by step. A recommended read!