
Relax, Facial Recognition Is Toast. Seriously

An algorithm that thwarts facial recognition algorithms might be just the beginning, one that may even point to a real job for household robots to take on

One for the good guys n’ gals
An algorithm that swaps out or distorts a few pixels fools the facial recognition systems from Microsoft, Amazon, Google, and Clearview.ai 100 percent of the time.

As momentous as that sounds, it just might be the harbinger of even better tech to come. Like spotting deep fakes, for instance. What if you possessed your own personal tech that kept you safe from facial recognition, deepfakes, digital scams, robocalls, COVID-like viruses, Tinder-fakes, callous job interviewers, general dishonesty (hmmm), and sketchy-looking dudes hanging out near your car?

You’d pay for that wouldn’t you? Most anyone would.

Stuff all those capabilities into a household robot, a smartphone, a smart watch, or all three, and such personal protection might just be the next big industry. A bunch of people-friendly algorithms to ride shotgun with you through life. Sweet.

A recent Brookings article foolishly asks: Who thought it was a good idea to have facial recognition software? The author writes: “It would oblige the developers and users of [facial recognition] technology to explain exactly why they think it’s a good idea to create something with that level of power.” To find an answer to that, just ask the powerful. Prior to Edward Snowden, probably none of us were even aware of the lengths governments go to in order to “better know” their citizenry.

Asking who thinks facial recognition is a good idea would produce a mighty long list. Figuring out how to counter facial recognition, or, for that matter, any intrusion into an individual’s privacy, might be time better spent.

A team from the University of Chicago’s SAND Lab has just created an anti-facial recognition algorithm that will free a face from scrutiny. Of course, the downside is that bad guys will also reach for such a cloaking device.

Here’s something that a household robot could do to become a valued member of most any household. It’s the kind of value that people would willingly fork out big money to acquire. Add to a household robot’s skill set the ability to readily spot deepfakes, digital scams, robocalls, COVID-like viruses, and Tinder-fakes, and that’s a super family member to have around.

See related anti-deepfake robot:
When seeing is NOT believing
“Danger, Will Robinson! Deepfake!”
Antidote to synthetic media: Home robot that hunts deepfakes…and maybe
thwarts robocalls, robotexts, bad bots, and creepy people ringing the doorbell

Here’s a piece of Gizmodo’s excellent look at Fawkes:

In 2020, it’s worth assuming that every status update and selfie you upload online can eventually make its way into the hands of an obscure data-mining third party, into the hands of national authorities, or both.

On the flip side, being aware of exactly how shitty these companies are has prompted a lot of folks to come up with new, creative ways to slip out of this sort of surveillance. And while some of these methods—like, say, wearing masks or loading up on face paint—could theoretically keep your photos from being pilfered, you’re also left with photos that don’t look much like you at all. But now, a team from the University of Chicago has come up with a much subtler tactic that still effectively fights back against these sorts of snooping algorithms.

Called “Fawkes”—an homage to the Guy Fawkes mask that’s become somewhat synonymous with the aptly named online collective Anonymous—the Chicago team initially started working on the system at the tail end of last year as a way to thwart companies like Clearview AI that compile their face-filled databases by scraping public posts.

“It is our belief that Clearview.ai is likely only the (rather large) tip of the iceberg,” the team wrote. “If we can reduce the accuracy of these models to make them untrustworthy, or force the model’s owners to pay significant per-person costs to maintain accuracy, then we would have largely succeeded.”

 

SAND Lab "Toast Makers"
Emily Wenger and Shawn Shan

See, when a facial recognition system like Clearview’s is trained to recognize a given person’s appearance, that recognition happens by connecting one picture of a face (e.g., from a Facebook profile) to another picture of a face (e.g., from a passport photo), and finding similarities between the two photos. According to the Chicago team, this doesn’t only mean finding matching facial geometry or matching hair color or matching moles, but it also means picking up on invisible relationships between the pixels that make up a computer-generated picture of that face.
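To make that matching step concrete, here is a minimal sketch, assuming a toy embed() function in place of the deep face-embedding models that vendors actually run; the file names and the 0.8 threshold are likewise illustrative:

```python
import numpy as np
from PIL import Image

def embed(path: str, dim: int = 128) -> np.ndarray:
    """Toy stand-in for a face-embedding model: grayscale, resize, and project
    the pixels to a fixed-length feature vector. Real services use deep networks."""
    pixels = np.asarray(Image.open(path).convert("L").resize((64, 64)),
                        dtype=np.float32).ravel()
    rng = np.random.default_rng(0)                  # fixed seed: same projection every call
    projection = rng.standard_normal((dim, pixels.size))
    return projection @ pixels

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Scores near 1.0 mean the two photos look like the same person to the model."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names: the point is linking two different photos of one face.
score = cosine_similarity(embed("facebook_profile.jpg"), embed("passport_photo.jpg"))
print("match" if score > 0.8 else "no match")       # threshold is illustrative
```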

By swapping out or distorting some of these pixels, the face might still be recognizable to you or me, but it would register as an entirely different person to just about every popular facial recognition algo. According to the team’s research, this “cloaking” technique managed to fool the facial recognition systems peddled by Microsoft, Amazon, and Google 100% of the time.
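As a toy illustration of what distorting some of those pixels means mechanically, the sketch below nudges each pixel by a small random amount and reports how tiny the largest change is. Note the hedge: Fawkes does not use random noise; it optimizes the perturbation against deep-feature models so the photo reads as a different identity. This sketch only shows the scale of change involved, not an attack that would fool a recognition system.

```python
import numpy as np
from PIL import Image

def toy_cloak(src: str, dst: str, strength: float = 3.0, seed: int = 1) -> None:
    """Add a small per-pixel perturbation and save the result. The perturbation here
    is random and purely illustrative; Fawkes instead optimizes it so feature
    extractors misread the face, which is why its cloaks actually work."""
    img = np.asarray(Image.open(src).convert("RGB"), dtype=np.float32)
    rng = np.random.default_rng(seed)
    noise = np.clip(rng.standard_normal(img.shape) * strength, -8, 8)  # keep changes small
    cloaked = np.clip(img + noise, 0, 255)
    print("largest per-pixel change:", np.abs(cloaked - img).max())    # a few intensity levels
    Image.fromarray(cloaked.astype(np.uint8)).save(dst)

toy_cloak("facebook_profile.jpg", "facebook_profile_cloaked.jpg")      # hypothetical file names
```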

If you want to give this algo a whirl yourself, the good news is that the U. Chicago team has the Fawkes program freely available for download on their website. If you have pictures you want to protect from snoopers or scrapers, you can load them into Fawkes, which then jumbles those unseen pixels in about 40 seconds per photograph, according to the researchers.
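If you would rather script that step than click through it, a batch run might look roughly like the sketch below. The `fawkes -d <dir> --mode low` invocation and the "cloaked" output naming are assumptions drawn from the project's documentation, so check them against the version you actually download:

```python
import subprocess
from pathlib import Path

photo_dir = Path("photos_to_upload")        # hypothetical folder of originals

# Assumed CLI shape ("-d" for the image directory, "--mode" for protection level);
# verify against the README that ships with your Fawkes download.
subprocess.run(["fawkes", "-d", str(photo_dir), "--mode", "low"], check=True)

# Fawkes is expected to write cloaked copies next to the originals with "cloaked"
# in the file name; upload those instead of the untouched photos.
for cloaked in sorted(photo_dir.glob("*cloaked*")):
    print("upload this one:", cloaked.name)
```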

You can then upload the new, secretly scrambled photo to your social media platform of choice with the knowledge that if, say, a company like Clearview comes scraping through your public photos, then this particular picture likely won’t be connected to any of your other digital details online.

Granted, the Fawkes program isn’t supposed to be a silver bullet against these companies or any others. Rather, it’s supposed to be a pain in the ass for the companies involved. “Fawkes is designed to significantly raise the costs of building and maintaining accurate models for large-scale facial recognition,” they write, pointing out that any of us would be more capable of “identifying [a] target person in equal or less time” using our own two eyes instead of using facial recognition software.

Fawkes Group adds a bit on DNNs

The bane of Deep Neural Networks (DNNs)

Fawkes is also the bane of Deep Neural Networks or DNNs. “The Achilles heel for DNNs has been this phenomenon called adversarial examples: small tweaks in inputs that can produce massive differences in how DNNs classify the input. These adversarial examples have been recognized since 2014 (here’s one of the first papers on the topic), and numerous defenses have been proposed over the years since (and some of them are from our lab).

“Turns out they are extremely difficult to remove, and in a way are a fundamental consequence of the imperfect training of DNNs. There have been multiple PhD dissertations written already on the subject, but suffice it to say, this is a fundamentally difficult thing to remove, and many in the research area accept it now as a necessary evil for DNNs.”
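For a concrete sense of what those “small tweaks in inputs” look like in code, here is a minimal fast gradient sign method (FGSM) sketch, the classic 2014-era attack the quote alludes to. It assumes you already have a trained PyTorch classifier and a correctly labeled input, and it is not the optimization Fawkes itself uses:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: nudge every input value by +/-eps in the
    direction that most increases the model's loss. The change is tiny per
    pixel but can flip the predicted class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Usage sketch; 'model', 'image', and 'label' are assumed to come from your own pipeline.
# adv_image = fgsm_example(model, image, label)
# print(model(adv_image).argmax(dim=1))   # often differs from model(image).argmax(dim=1)
```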

Want to download it and test drive it yourself? Try here: Fawkes