Facebook CEO Mark Zuckerberg gives a chilling address, speaking directly to the camera from behind a desk at his company’s base in Menlo Park, California. “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” Zuckerberg says.
The truly sinister part is that Zuckerberg never said those words. The Facebook CEO was deepfaked. The video itself was real, taken from a publicly streamed 2017 speech, but the audio was dubbed by an Israeli startup called Canny AI as an exercise to demonstrate the possible dangers of deepfake technology.
“Deepfakes have been described as a propaganda weapon,” says Bill Bronske, senior solutions architect at Globant, an internet technology and software development company. “I’ve also heard them described as a potential serious threat to democracy, as well as a host of other negative descriptions. Those descriptions can be accurate, but I think we can more effectively describe deepfakes as a tool.”
A deepfake is video or audio that has been manipulated so seamlessly that viewers or listeners cannot detect the alteration; the result appears authentic. In the era of deepfakes, brands face two outcomes: they can become a target, or they can take control of the creative technology and use it to their advantage.
“The negative part of this technology is using it to put out misinformation,” says Victor Riparbelli, CEO of Synthesia, a video technology firm. “It can be used against brands just like it can be used against people.”
That is especially true as the technology to create deepfakes becomes increasingly democratized. People are seeing it in common apps like Snapchat and Instagram, where augmented reality makes video distortion as simple as clicking a button. Face-swapping is considered a benign use of deepfakes: people can change their appearance or even take on the visage of someone else.
Accounts on YouTube have popped up to share videos of convincing face-swaps for entertainment purposes, like one recent video of "Saturday Night Live" alum Bill Hader doing impressions of Tom Cruise and Seth Rogen on an old Letterman appearance. The video depicts Hader's face morphing into the celebrities'.
Changing how people look is the easy part of deepfake technology. Changing what they say is the true hurdle, Riparbelli says. The Zuckerberg video, while eerie, is not entirely convincing. “If you’re changing what someone is saying it needs to be extremely precise for it to not look really, really weird,” Riparbelli says.
Deepfakes are created through generative adversarial networks, Bronske says. Those are artificial intelligence programs that fine-tune a deepfake video until it passes a threshold to be considered believable. The adversarial network "tests the output of the generative model to say, 'Nope, nope, nope. Fake, fake, fake,'" Bronske says, "until the video starts coming back as valid, real."
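The adversarial loop Bronske describes can be illustrated with a toy sketch. This is not a real deepfake system; all names and numbers here are illustrative. A "generator" produces samples, a "discriminator" flags anything that looks unlike real data, and the generator keeps adjusting until its output stops being rejected:

```python
import random

random.seed(0)

# Toy stand-in for real data: samples clustered around 5.0.
REAL_MEAN = 5.0

def real_sample() -> float:
    return random.gauss(REAL_MEAN, 0.1)

def generate(param: float) -> float:
    """Generator: produces samples centered on its current parameter."""
    return random.gauss(param, 0.1)

def looks_fake(sample: float) -> bool:
    """Discriminator: flags samples that sit too far from real data."""
    return abs(sample - REAL_MEAN) > 0.5

def train_generator(start: float = 0.0, steps: int = 2000, lr: float = 0.01) -> float:
    """Each time the discriminator says 'fake,' nudge the generator
    in the direction that makes its output harder to reject (a crude
    stand-in for the gradient signal in a real GAN)."""
    param = start
    for _ in range(steps):
        fake = generate(param)
        if looks_fake(fake):  # "Nope, nope, nope. Fake, fake, fake."
            param += lr if fake < REAL_MEAN else -lr
    return param

param = train_generator()
# After training, the generator's output distribution has drifted close
# enough to the real data that the discriminator mostly accepts it.
```

In a real generative adversarial network both sides are neural networks trained jointly on video frames, but the dynamic is the same: the generator improves precisely because the discriminator keeps saying no.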
The technology is advancing so rapidly that within the next six months, “there will be no human way of denoting a deepfake from a valid video,” Bronske says.
Tech world giants are combining forces to try to thwart deepfake technology. Google, Facebook, Microsoft and Twitter have all taken the threat seriously. Last month, Google released a database of 3,000 deepfaked videos to public institutions, including academia, to help train systems that detect distorted media.
What brands need to know about the benefits and pitfalls of deepfakes:
Deepfakes can offer an upside for brands in personalization. Synthesia recently worked on a "Malaria No More" campaign with soccer legend David Beckham and ad agency R/GA. The campaign deployed deepfake technology to make it look like Beckham can speak nine languages, altering the message depending on the intended audience. Using deepfake technology in this way can help companies drive personalization around the world. Synthesia is developing artificial intelligence-based marketing programs so that the same technology that allowed Beckham to speak in multiple languages could be applied to any industry. Company CEOs and celebrity endorsers could speak directly to individuals with customized messages, even referring to a consumer by name.
However, the more brands use deepfakes in daily marketing, the more potential there is for trouble. It's easy to imagine, say, a brand haphazardly borrowing a celebrity's image for a campaign—a legal no-no. "Informed consent" is an important first principle, Bronske says.
Also, if a brand spreads any form of deepfakes, it needs to be up front about the manipulation. “The counter to deception is transparency,” Bronske says.
There are ways for brands and their CEOs to avoid the Zuckerberg treatment by tagging all media assets they release to the public using blockchain technology, Bronske says. Blockchain can be used as a ledger that authenticates when a piece of media is from the original source, and it tracks any time the original has been altered.
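The ledger idea can be sketched in a few lines. This is a minimal, hypothetical illustration of hash-chained media registration, not any particular blockchain platform: each entry records a cryptographic fingerprint of a media asset and links to the previous entry, so a dubbed or altered copy no longer matches the registered original.

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used as a tamper-evident fingerprint."""
    return hashlib.sha256(data).hexdigest()

class MediaLedger:
    """Toy append-only ledger: each entry is chained to the one before it."""

    def __init__(self) -> None:
        self.chain: list[dict] = []

    def register(self, asset_name: str, content: bytes) -> dict:
        prev_hash = self.chain[-1]["entry_hash"] if self.chain else "0" * 64
        entry = {
            "asset": asset_name,
            "media_hash": fingerprint(content),
            "prev_hash": prev_hash,
        }
        # Hash the entry itself so later edits to the ledger are detectable.
        entry["entry_hash"] = fingerprint(json.dumps(entry, sort_keys=True).encode())
        self.chain.append(entry)
        return entry

    def is_authentic(self, asset_name: str, content: bytes) -> bool:
        """True only if this exact content was registered under this name."""
        return any(
            e["asset"] == asset_name and e["media_hash"] == fingerprint(content)
            for e in self.chain
        )

ledger = MediaLedger()
original = b"official CEO address, version released by the company"
ledger.register("ceo_address.mp4", original)

authentic = ledger.is_authentic("ceo_address.mp4", original)        # True
tampered = ledger.is_authentic("ceo_address.mp4", b"dubbed fake")   # False
```

A production system would distribute the ledger across many parties so no single actor could rewrite it, but the verification step is the same: compare the fingerprint of the media in front of you against what the original source registered.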
While companies protect themselves against the dark side of deepfakes, they also need to harness their potential for good, Bronske says. “The brand that wins will be the one that pushes this the furthest, fastest,” Bronske says.