
When seeing is NOT believing

“Danger, Will Robinson! Deepfake!”

Antidote to synthetic media: Home robot that hunts deepfakes…and maybe thwarts robocalls, robotexts, bad bots, and creepy people ringing the doorbell

What Are Deepfakes?

“The term deepfakes comes from a combination of the words ‘deep learning’ and ‘fakes.’ This is because artificial intelligence software trained in image and video synthesis creates these videos.

“This AI can superimpose the face of one subject (the source) onto a video of another (the target). More advanced forms of the technology can synthesize a completely new model of a person using the source’s facial gestures and images or video of the subject they wish to impersonate.

“The technology can make facial models based upon limited visual data, such as a single image. However, the more data the AI has to work from, the more realistic the result.”

How comforting and reassuring would it be to watch TV or YouTube videos together with a home robot that could instantaneously spot a deepfake, and then let out a warning?

Bzzzzz! “Danger, deepfake!”

Encountering and eliminating a deepfake, with analytics on its origin and composition instantly available, would be a nice set of actions for a robot to handle.

In our new world, where seeing is NO LONGER believing, there’s a massive problem afoot with synthetic media readily fooling our eyes, ears and brains with manufactured content that is beyond our ability to authenticate in the usual way.

Maybe a nice job for a robot to remedy, pitching in with new eyes, ears, and a synthetic brain.

“However exciting it is to have an aware, intelligent, mobile piece of technology to interact with in my home, if it isn’t filling a glaring need in my life, it simply becomes a novelty item.”

—Dor Skuler, CEO & Co-Founder, Intuition Robotics

 

Synthetic world of deepfake scariness
As it is with most scary stuff these days, it’s going to get worse before it gets better.

Next scary upgrade: synthetic media will soon be manufactured in real time. Picture a Skype interview in which the HR interviewer is blown away by the superbly crafted, flawlessly delivered responses of a near-perfect job candidate, a candidate who can manufacture killer interview answers on the fly. Even facial recognition software is fooled, because the job candidate is real, or at least most of the candidate is real.

Every HR interviewer, just like every home TV viewer, is going to want a robot companion that can spot deepfakes. Of course, no such robot exists; but if one did exist, it’s easy to see that the market for such a machine would be off the charts.

What about deepfakes in the workplace?

“These inauthentic intrusions not only impact our society generally, and our political system and growing divisions more specifically, but also spill into our workplaces in a way that forces employers to grapple with the often inevitable effects.

“Employers will need to adjust to this new reality and understand the means of minimizing the potentially negative impact, including the utilization of data analytics to protect companies and their workforces from exploitative uses of false information.”

—Natalie Pierce and Aaron Crews, Littler

Robots getting defensive
Although there are seemingly dozens of social robots on the market, they are all, by and large, social in nature more than anything else: good at ordering a pizza for home delivery, keeping track of Spotify, arranging tomorrow’s meeting agenda, or answering trivia questions.

These days, however, maybe something a little more in step with our times, such as security, safety, and situational awareness, would be a good set of capabilities for a robot.

Maybe, in addition to deepfake hunting, a home robot could also foil a few robocalls (the U.S. fielded 26.3 billion robocalls in 2018), deal with creepy dudes ringing the video doorbell, catch thieves making off with delivery packages, or even unmask chatroom stalkers.

Artificial intelligence is producing the tools to do the job. Deepfake-spotting software exists and it’s getting better, so detection isn’t the problem it once was, which is a good thing because deepfakes are proliferating and growing ever more serious.

The Technical University of Munich (TUM) has developed an algorithm called XceptionNet that quickly spots faked videos and identifies manipulated footage for easy removal.

“Essentially, the algorithm [will run] in the background,” says Matthias Niessner, a professor in the university’s Visual Computing Group. “If it identifies an image or video as manipulated, it would give the user a warning.”
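
To make the idea concrete, here is a minimal sketch of what such a background detector might look like, assuming a pretrained Xception-style binary classifier. The checkpoint name, label convention, and decision threshold below are hypothetical illustrations, not TUM’s actual code.

```python
# Sketch: flag video frames with an Xception-style binary classifier.
# Assumes a fine-tuned checkpoint "deepfake_xception.pt" (hypothetical)
# whose output index 1 means "manipulated".
import torch
import timm
from torchvision import transforms
from PIL import Image

model = timm.create_model("xception", pretrained=False, num_classes=2)
model.load_state_dict(torch.load("deepfake_xception.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),          # Xception's native input size
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

def frame_is_fake(frame: Image.Image, threshold: float = 0.5) -> bool:
    """Return True if the classifier thinks this frame was manipulated."""
    x = preprocess(frame).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item() > threshold

# Background loop (sketch): warn the user as soon as a frame trips the detector.
# for frame in video_frames:
#     if frame_is_fake(frame):
#         print("Danger, deepfake!")
#         break
```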

In search of ‘glaring need’ and meaning
One persistent knock on home or social robots has long been that there is no reason compelling enough to own one, regardless of price point. The utility factor is just not strong enough for anyone to make one a member of the family.

 

As Dor Skuler, CEO & Co-Founder, Intuition Robotics, sums things up: “However exciting it is to have an aware, intelligent, mobile piece of technology to interact with in my home, if it isn’t filling a glaring need in my life, it simply becomes a novelty item.”

Roboticist Madeline Gannon adds: “If robots are going to live in our cities, streets, sidewalks, and skies, then they need to be more than useful. They need to be meaningful.”

“Glaring need” and “meaningfulness,” then, are good operative terms when it comes to a social robot’s ability to go from a for-sale sign to a welcomed home companion.

The truth machine
Artificial intelligence (AI) may have just given us the first truly compelling reason to buy a social robot that is both useful and meaningful.

A social or home robot that could detect deepfake video and prevent its human owner from believing that the video is real could well be the long-awaited breakthrough for home use.

AI technology can also identify deepfakes
(Morgan Stanley Research Notes)
Artificial intelligence (AI) can take massive amounts of information and generate new content. While this could have industry-changing implications in terms of efficiency and productivity, it can also be put to nefarious purposes if AI “deepfakes” spread potentially harmful disinformation, indistinguishable from reputable content.

Fortunately, the cause of the problem may also be the source of the cure: The same generative AI that churns out phony videos can also be trained to help separate the real from the fake in a deluge of derivative content. 

“While generative technology is abused to commit fraud, spread fake news and execute cyberattacks on private and public organizations, it can also help AI systems identify and flag deepfakes themselves,” says Ed Stanley, Morgan Stanley’s head of thematic research in Europe. “Software that can achieve this will have an especially important role in the online reality of the future.”

Fighting fire with fire
Deepfakes—digitally manipulated images, audio or video intended to represent real people or situations—aren’t new. But the ease, speed and quality with which they can be created has elevated the urgency to enact safeguards and devise smart countermeasures.

Though clearly doctored celebrity videos were among the first generation of deepfakes, more recent examples reveal two critical shifts: First, today’s deepfakes can be created in real time, which presents problems for businesses whose data security depends on facial or voice biometrics. Second, hyperrealistic facial movements are making AI-created characters indistinguishable from the people they are attempting to mimic. 

“Traditional cybersecurity software is likely to become increasingly challenged by AI systems,” Stanley says, “so there could be strong investment plays in AI technology directed at training tools to help employees and consumers better decipher misleading versus authentic content.”

For example, some companies specialize in both creating and detecting deepfakes using large, multi-language datasets. Others use troves of data to create deepfake detectors for faces, voices and even aerial imagery, training their models by developing advanced deepfakes and feeding them into the models’ database. Such AI-driven forensics analyze facial features, voices, background noise and other perceptible characteristics, while also mining file metadata to determine if algorithms created it and, in some cases, find links to the source material. 
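
As a concrete illustration of the metadata-mining piece, here is a small sketch that pulls container and stream tags with FFmpeg’s ffprobe and flags encoder strings a forensics tool might treat as suspicious. The watch list, and the idea that these tags alone reliably indicate synthesis, are assumptions made for illustration, not any vendor’s actual method.

```python
# Sketch: mine a video file's metadata for hints about its origin.
import json
import subprocess

SUSPECT_HINTS = {"faceswap", "deepfacelab", "synthesized"}  # hypothetical watch list

def metadata_flags(path: str) -> list[str]:
    """Return metadata tags in `path` that match the watch list."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    tags = {}
    tags.update(info.get("format", {}).get("tags", {}))
    for stream in info.get("streams", []):
        tags.update(stream.get("tags", {}))
    return [f"{key}={value}" for key, value in tags.items()
            if any(hint in str(value).lower() for hint in SUSPECT_HINTS)]

# Usage: a non-empty result means "look closer", not proof of manipulation.
# flags = metadata_flags("clip.mp4")
```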

“Some of the best opportunities seem to lie in applications for safety and content moderation,” says Stanley, “especially as valuations for large-language models and leading players in the space have priced out some participants.”