News You Can Use

Researchers at the University of Chicago have a project named Fawkes, which poisons images so that AI models can't be trained by scraping them from public websites, while the images remain nearly unchanged to human eyes.

I’m thinking that Imgur should offer this as a filter:

Researchers at the University of Chicago’s Sand Lab have developed a technique for tweaking photos of people so that they sabotage facial-recognition systems.

The project, named Fawkes in reference to the mask in the V for Vendetta graphic novel and film depicting Guy Fawkes, the failed assassin of the 1605 Gunpowder Plot, is described in a paper scheduled for presentation in August at the USENIX Security Symposium 2020.

Fawkes is software that runs an algorithm designed to “cloak” photos so they mistrain facial-recognition systems, rendering them ineffective at identifying the depicted person. These “cloaks,” which AI researchers refer to as perturbations, are claimed to be robust enough to survive subsequent blurring and image compression.
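Conceptually, a cloak of this kind is an adversarial perturbation that drags a photo's representation in a recognizer's feature space toward some other identity, while staying within a pixel budget small enough to be invisible. Here's a minimal sketch of that idea in PyTorch. The pretrained ResNet-18 stand-in feature extractor, the L-infinity budget, and the cloak() helper are all my own illustrative assumptions, not the actual Fawkes implementation, which uses its own extractors, optimization, and perceptual constraints:

```python
# Minimal sketch of feature-space "cloaking" (illustrative only, not
# the real Fawkes code). Assumption: a pretrained ResNet-18 stands in
# for the attacker's feature extractor; we push the photo's embedding
# toward a "target" embedding while keeping the pixel perturbation
# inside a small L-infinity budget so the change stays imperceptible.
import torch
import torchvision.models as models

extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()  # use penultimate-layer features
extractor.eval()

def cloak(image, target_feat, eps=0.03, steps=40, lr=0.01):
    """Return image plus a small perturbation whose features land near target_feat."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = extractor((image + delta).unsqueeze(0)).squeeze(0)
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                           # imperceptibility budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels valid
    return (image + delta).detach()

# Example: cloak a random stand-in "photo" toward another face's features.
photo = torch.rand(3, 224, 224)
with torch.no_grad():
    target_feat = extractor(torch.rand(3, 224, 224).unsqueeze(0)).squeeze(0)
cloaked = cloak(photo, target_feat)
print((cloaked - photo).abs().max())  # stays within the eps budget
```

The key design point is that the optimization happens in feature space, not pixel space: a model trained on cloaked photos learns to associate the person's name with the wrong region of that space.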

The paper [PDF], titled “Fawkes: Protecting Privacy against Unauthorized Deep Learning Models,” is co-authored by Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Zhao, all with the University of Chicago.

………

The boffins claim their pixel scrambling scheme provides greater than 95 per cent protection, regardless of whether facial recognition systems get trained via transfer learning or from scratch. They also say it provides about 80 per cent protection when clean, “uncloaked” images leak and get added to the training mix alongside altered snapshots.

They claim 100 per cent success at avoiding facial recognition matches using Microsoft’s Azure Face API, Amazon Rekognition, and Face++. Their tests involve cloaking a set of face photos and providing them as training data, then running uncloaked test images of the same person against the mistrained model.
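That test protocol is easy to reproduce in miniature: enroll an identity from cloaked photos, then probe with clean ones. The toy sketch below uses random vectors as stand-ins for face embeddings and a nearest-centroid matcher in place of a commercial API; every name and number in it is an assumption of mine, meant only to show the shape of the experiment:

```python
# Toy version of the evaluation protocol (illustrative only): enroll a
# "victim" from cloaked embeddings, then check whether clean photos of
# the same person still match. Random vectors stand in for face
# embeddings; nearest-centroid matching stands in for a real API.
import numpy as np

rng = np.random.default_rng(0)
dim = 128

true_identity = rng.normal(size=dim)   # where clean photos cluster
decoy_identity = rng.normal(size=dim)  # where cloaking shifts them

def photos(center, n=20, noise=0.3):
    return center + noise * rng.normal(size=(n, dim))

# Training set the scraper sees: cloaked photos near the decoy.
cloaked_train = photos(decoy_identity)
other_people = {f"person_{i}": photos(rng.normal(size=dim)) for i in range(5)}

# "Enroll" each identity as the centroid of its training photos.
centroids = {"victim": cloaked_train.mean(axis=0)}
centroids.update({k: v.mean(axis=0) for k, v in other_people.items()})

def match(embedding):
    return min(centroids, key=lambda k: np.linalg.norm(embedding - centroids[k]))

# Probe with clean (uncloaked) photos of the victim.
clean_test = photos(true_identity)
hits = sum(match(e) == "victim" for e in clean_test)
print(f"{hits}/{len(clean_test)} clean photos matched the victim")
```

Because the enrolled centroid sits near the decoy rather than the person's true cluster, clean probe photos rarely match, which is the failure mode the researchers report inducing in Azure Face, Rekognition, and Face++.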

………

The researchers have posted their Python code on GitHub, with instructions for users of Linux, macOS, and Windows. Interested individuals may wish to try cloaking publicly posted pictures of themselves so that if the snaps get scraped and used to train a facial recognition system – as Clearview AI is said to have done – the pictures won't be useful for identifying the people they depict.

If someone comes up with a simple tool, it should be used on every social media post.
