Invisibility Cloak

This stylish pullover is a great way to stay warm this winter, whether in the office or on the go. It features a stay-dry microfleece lining, a modern fit, and adversarial patterns that evade the most common object detectors. In this demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective.

The paper “Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors” by
Zuxuan Wu, Ser-Nam Lim, Larry S. Davis, and Tom Goldstein is online at https://arxiv.org/abs/1910.14667

Partially funded by Facebook AI


Adversarial Clothes

The Italian start-up ‘Capable’, founded by Rachele Didero, Federica Busani, and Giovanni Maria Conti, provides beautiful, rather pricey adversarial clothes.

https://www.capable.design/shop


Image-Scaling Attacks in Machine Learning

https://scaling-attacks.net/

Before an image can be funneled through a neural network it needs to be scaled down. Resolutions like 3000×2000 pixels are too large to be processed in computer vision. Current networks operate at 128×128 px or similar resolutions, mostly below 300×300 px.

Researchers at TU Braunschweig found that this downscaling process offers an opportunity for adversarial pixels. Introduced into the larger originals at strategic points, they disturb the downscaling of the image.
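The core idea can be sketched in a few lines: a nearest-neighbor resize only looks at a sparse grid of pixels, so overwriting just that grid changes what the network sees while the full-resolution image looks untouched. This is a toy NumPy illustration under that assumption, not the TU Braunschweig tooling (which also handles bilinear and bicubic scaling); all sizes here are made up.

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Toy nearest-neighbor resize: keeps only a sparse grid of source pixels."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

rng = np.random.default_rng(0)
source = rng.integers(0, 256, (1024, 1024), dtype=np.uint8)  # innocuous hi-res image
target = np.full((128, 128), 255, dtype=np.uint8)            # what the network should see
target[32:96, 32:96] = 0

# Overwrite only the pixels the resize will sample:
# 128*128 out of 1024*1024 pixels, i.e. about 1.6% of the image.
attacked = source.copy()
rows = np.arange(128) * 1024 // 128
cols = np.arange(128) * 1024 // 128
attacked[np.ix_(rows, cols)] = target

# The downscaled result is exactly the attacker's target image.
print(np.array_equal(nearest_downscale(attacked, 128, 128), target))  # True
```

At full resolution the attacked image is still almost indistinguishable from the original noise; only after scaling does the hidden content appear.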


Universal Adversarials

These are adversarial attacks on deep neural networks where a single universal adversarial perturbation can fool a model on an entire set of affected inputs, with an expected evasion rate of 90% on undefended ImageNet-pretrained networks. The work by Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, and Emil C. Lupu is described in a paper here: https://arxiv.org/abs/1911.10364

For more, check this GitHub repository: https://github.com/kenny-co/sgd-uap-torch#universal-adversarial-perturbations-on-pytorch
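The shape of such an SGD-based attack — one shared perturbation, gradient-ascended over batches and projected back onto an L∞ ball after each step — can be sketched on a toy linear classifier. This is a NumPy stand-in for illustration only, not the authors' PyTorch code; the data, the scorer, and eps are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
# Toy data: two well-separated Gaussian classes in 20 dimensions.
X = np.vstack([rng.normal(-2.0, 0.5, (n, d)), rng.normal(2.0, 0.5, (n, d))])
y = np.array([0] * n + [1] * n)

# A fixed linear "model": predict class 1 when the mean feature is positive.
w = np.ones(d) / d
def accuracy(inputs):
    return np.mean((inputs @ w > 0).astype(int) == y)

# SGD-UAP sketch (targeted towards class 1): a single shared delta is
# gradient-ascended on the class-1 score over minibatches and projected
# back onto an L-infinity ball of radius eps after every step.
eps, lr = 3.0, 0.5
delta = np.zeros(d)
for _ in range(20):
    batch = rng.choice(len(X), 64, replace=False)
    # For a linear scorer, the gradient of the mean class-1 score with
    # respect to delta is simply w, whatever the batch contains.
    grad = w
    delta = np.clip(delta + lr * np.sign(grad), -eps, eps)

print(accuracy(X))          # near 1.0 on clean inputs
print(accuracy(X + delta))  # around 0.5: every class-0 input is fooled
```

The point of the toy is the universality: unlike a per-image attack, the same delta is added to every input and still breaks half of them.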

This is how they look for different convolutional neural networks:


Declassifier


https://thephotographersgallery.org.uk/declassifier

In a way, this project is very close to what we do at adversarial.io. Philipp Schmitt’s Declassifier uses a computer vision algorithm trained on COCO (Common Objects in Context), an image dataset appropriated from Flickr users by Microsoft in 2014.

In Schmitt’s original photographs, certain objects are identified. These regions are then overlaid with images showing the same kinds of objects, drawn from the COCO dataset on which the neural network was originally trained. “If a car is identified in one of the photographs, all the cars included in the dataset that trained the algorithm surface on top of it.” (The Photographers Gallery)

It takes a while to grasp what’s going on, since this project leans toward the artsy side. I loved playing around with it.

When you click on the images, a certificate for the original contribution of the photograph is issued, identifying the original contributor (whose participation gets lost within the dataset).

Certificate

Debunking AI Myths

AImyths.org does just that: it looks into several claims about AI and then corrects or debunks them step by step. A recommended read!


Omitted Labels

Red-highlighted objects/persons were missing in a dataset crucial for autonomous driving

Brad Dwyer found a lot of missing or omitted labels in a set that is used for training and testing autonomous driving systems. »We did a hand-check of the 15,000 images in the widely used Udacity Dataset 2 and found problems with 4,986 (33%) of them.« Since this is an open-source dataset used primarily for educational purposes but, as the author found out, obviously also for test cars on public streets, he published a corrected set at https://public.roboflow.ai/object-detection/self-driving-car

Besides Dwyer’s honorable work, these omissions raise the larger question of the reliability of the many datasets being used for training.


Face-recognition respirator masks

Danielle Baskin created a website that uses computational mapping to convert facial features into an image printed onto the surface of N95 surgical masks without distortion. It is a reaction to the coronavirus epidemic and allows one to unlock (aka trick) the Face ID features of smartphones.

https://faceidmasks.com



Obfuscation of data through group accounts

Teenagers have come up with elaborate schemes to share Instagram accounts and produce obfuscating data, in order to look at whatever they want to look at without being tracked individually.

»Each time she refreshed the Explore tab, it was a completely different topic, none of which she was interested in. That’s because Mosley wasn’t the only person using this account — it belonged to a group of her friends, at least five of whom could be on at any given time. Maybe they couldn’t hide their data footprints, but they could at least leave hundreds behind to confuse trackers.« Alfred Ng on Cnet.com

Read the full article here: https://www.cnet.com/news/teens-have-figured-out-how-to-mess-with-instagrams-tracking-algorithm/


Paint Your Face Away workshop

Paint Your Face Away is a drop-in digital face painting workshop by Shinji Toya. The development of the digital face painting tool for this session was inspired by Frank Bowling’s paintings. Participants use the painter to create their profile pictures while real-time face detection runs on the face being painted; at some point in the painting process, the profile picture stops being detected by the computer vision. In this way, the digital paint acts as a type of disruptive noise for the machine.

Read further at https://shinjitoya.com/paint-your-face-away/



Google Maps Traffic Jam


Artist Simon Weckert generated poisoned data by transporting 99 second-hand smartphones in a handcart, creating a virtual traffic jam in Google Maps. Through this activity he shows that it is possible to turn a green street red. This in turn has an impact on the physical world, as cars are navigated onto other routes to avoid being stuck in traffic. Simon, U Rock!

https://www.simonweckert.com/googlemapshacks.html



