How to stop AI from recognizing your face in selfies


Fawkes has already been downloaded almost half a million times from the project website. One user has also built an online version, which makes it even easier for people to use (although Wenger does not vouch for third parties using the code, warning: “You don’t know what happens to your data while this person is processing it”). There’s no phone app yet, but there’s nothing stopping someone from building one, Wenger says.

Fawkes can prevent a new facial recognition system from recognizing you – the next Clearview, say. But it won’t sabotage existing systems that have already been trained on your unprotected images. The technology is improving all the time, though. Wenger believes that a tool developed by Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams presenting at ICLR this week, might address this problem.

Called LowKey, the tool expands on Fawkes by applying perturbations to images based on a stronger kind of adversarial attack, one that also fools pretrained commercial models. Like Fawkes, LowKey is available online.
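To make the underlying idea concrete: both Fawkes and LowKey work by nudging a photo so that its identity embedding drifts away from the original, while the pixels barely change. The sketch below shows that feature-space “cloaking” pattern in PyTorch. It is an illustration, not the authors’ code: the surrogate embedding network `embed`, the optimizer choice, and the hyperparameters are all assumptions, and LowKey’s real attack is considerably more elaborate (it targets an ensemble of feature extractors).

```python
# Conceptual sketch of feature-space "cloaking" (the idea behind tools
# like Fawkes and LowKey), NOT the authors' implementation. Assumes a
# pretrained, frozen face-embedding network `embed` used as a surrogate.
import torch

def cloak(image, embed, eps=0.03, steps=50, lr=0.005):
    """Perturb `image` so its identity embedding drifts away from the original.

    image: float tensor in [0, 1], shape (1, 3, H, W)
    embed: network mapping images to identity embeddings
    eps:   max per-pixel change, kept small so the edit stays near-invisible
    """
    with torch.no_grad():
        target = embed(image)              # the identity we want to escape
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Negative distance: stepping "downhill" pushes the embedding away.
        loss = -torch.norm(embed((image + delta).clamp(0, 1)) - target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)        # keep the perturbation imperceptible
    return (image + delta).detach().clamp(0, 1)
```

A recognition model trained on cloaked photos then associates your name with the wrong region of embedding space, which is why it later fails to match clean photos of you.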

Ma and colleagues took this idea a step further. Their approach, which turns images into what they call unlearnable examples, effectively makes an AI ignore your selfies altogether. “I think it’s great,” Wenger says. “Fawkes trains a model to learn something wrong about you, and this tool trains a model to learn nothing about you.”

Images of me taken from the web (top) are turned into unlearnable examples (bottom) that a facial recognition system will ignore. (Credit: Daniel Ma, Sarah Monazam Erfani and colleagues)

Unlike Fawkes and its followers, unlearnable examples are not based on adversarial attacks. Instead of introducing changes to an image that force an AI to make a mistake, Ma’s team adds tiny changes that trick an AI into ignoring the image during training. When presented with the image later, its assessment of what is in it will be no better than a random guess.
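The trick is to flip the usual sign: where an adversarial attack *maximizes* a model’s error, this noise *minimizes* it, so the image looks “already learned” and contributes almost no training gradient. A minimal sketch of that error-minimizing step is below; the surrogate classifier `model` and the step sizes are hypothetical, and the team’s full method alternates this with updates to the surrogate.

```python
# Sketch of "error-minimizing" noise, the mechanism behind unlearnable
# examples described above: the noise drives the training loss toward
# zero, so the model gets no useful signal from the image. Illustrative
# only; `model` is a hypothetical surrogate classifier.
import torch
import torch.nn.functional as F

def unlearnable_noise(image, label, model, eps=8 / 255, steps=20, alpha=1 / 255):
    """Find a small delta that makes (image + delta) look 'already learned'."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model((image + delta).clamp(0, 1)), label)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: *minimize* the loss
            delta.clamp_(-eps, eps)             # stay imperceptible
        delta.grad.zero_()
    return delta.detach()
```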

Unlearnable examples may prove more effective than adversarial attacks because they cannot be trained against. The more adversarial examples an AI sees, the better it gets at recognizing them. But because Ma and colleagues prevent an AI from training on the images in the first place, they claim this won’t happen with unlearnable examples.
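That counter-move has a name: adversarial training, in which a defender simply folds attacked images back into the training set. A schematic training step is sketched below (the `attack` argument stands in for any cloaking routine, such as the `cloak` sketch earlier; the loop details are illustrative). Unlearnable examples dodge this because they never yield a useful gradient, so there is nothing to train against.

```python
# Why cloaking can be "trained against": one adversarial-training step
# that mixes attacked copies into the batch. Schematic, with `attack`
# standing in for any perturbation method.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, attack):
    """Train on clean images and their attacked copies in one batch."""
    cloaked = attack(images, labels, model)         # generate attacked copies
    logits = model(torch.cat([images, cloaked]))    # learn from both versions
    loss = F.cross_entropy(logits, torch.cat([labels, labels]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```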

Still, Wenger is resigned to an ongoing battle. Her team recently noticed that Microsoft Azure’s facial recognition service was no longer fooled by some of their images. “It suddenly became robust to cloaked images that we had generated,” she says. “We don’t know what happened.”

Microsoft may have changed its algorithm, or the AI may simply have seen so many images from people using Fawkes that it learned to recognize them. Either way, Wenger’s team released an update to their tool last week that works against Azure again. “It’s another cat-and-mouse arms race,” she says.

For Wenger, this is the story of the internet. “Companies like Clearview are capitalizing on what they perceive to be freely available data and using it to do whatever they want,” she says.

Regulation might help in the long run, but that won’t stop companies from exploiting loopholes. “There will always be a mismatch between what is legally acceptable and what people actually want,” she says. “Tools like Fawkes fill that gap.”

“Let’s give people some power that they didn’t have before,” she says.


