We live in a world where everybody knows everything and nobody trusts anybody.
Remember how it felt to share a public world not as polluted by the clever deceptions and jaded ironies that have come to pervade our lives?
You’ve probably seen one (or a few). The technology is so good and so cheap that deepfakes have become commonplace. Examples from just two years ago were already convincing, and the technology has come forward in leaps and bounds since then.
The problem has become so widespread that experts have begun to voice concerns over the rapid rise of convincing deepfakes being shared online.
Recently, in fact, Facebook (FB) created its own deepfake videos for a competition called the Deepfake Detection Challenge.
FB says it plans to build a dataset from useful competition entries. Company executives hope people in the AI community will use the dataset to spot technologically manipulated media online and stop them from spreading.
The ‘deepfake’ pot calling the kettle black?
The company’s stated intent has raised a few eyebrows. After all, FB is hardly a model for corporate responsibility.
In July, the US Federal Trade Commission ordered FB to pay a record $5 billion to settle privacy concerns after allegations that the political consultancy group, Cambridge Analytica, improperly obtained the data of up to 87 million Facebook users. The probe then widened to include other issues such as facial recognition.
“Deep Learning” and “Fake”
Now the tech giant is offering grants and awards in an effort to spur the participation of artificial-intelligence (AI) researchers in its deepfake detection contest. The company says it is investing more than $10 million into the effort.
FB is working with a number of organizations on the competition, including Microsoft, the Massachusetts Institute of Technology, the University of California, Berkeley, and the Partnership on AI, a nonprofit research and policy organization.
Unsure what a ‘deepfake’ is?
The term deepfake is a combination of “deep learning” and “fake.” People who create deepfake videos use AI to realistically portray politicians, celebrities, and other high-profile personalities doing and saying things they did not actually do or say.
Politicians, civic leaders, and government officials are worried that unscrupulous parties might use these kinds of videos to deceive voters in the upcoming US elections.
Not Just Photoshop or iMovie
The social media giant shot its deepfake videos with paid actors who knew they were part of a manipulated-video data set, says the company’s chief technology officer, Mike Schroepfer.
The ultimate goal of the competition, Schroepfer says, is to encourage the construction of an AI system that can look at a video and determine whether it has been altered.
Researchers and a handful of startups are already working on this problem. A number of methods exist for spotting deepfakes, such as scanning a video for out-of-place shadows and other strange visual artifacts. But detecting deepfakes is becoming significantly more difficult as the technology improves. In many cases, detection comes down to subtle details, such as the way a chin moves or an eye blinks.
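One widely cited heuristic along these lines is the blink test: real people blink every few seconds, while early face-swap models often forgot to. A detector can track an eye-aspect ratio (EAR) over time and flag clips with an implausibly low blink rate. The sketch below illustrates the idea in plain Python with NumPy; the six-landmark eye layout, the 0.21 threshold, and the synthetic input are illustrative assumptions, not the method of any particular detector.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye-aspect ratio from six 2-D eye-contour landmarks (p1..p6,
    as in the common 68-point layout). EAR drops sharply when the
    eye closes, so a blink shows up as a brief dip in the series."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def blink_rate(ear_series, fps, threshold=0.21):
    """Count dips of the EAR series below `threshold` as blinks and
    return blinks per minute. Humans blink roughly 15-20 times a
    minute; a long clip with almost no dips is one (weak) hint that
    the face may be synthesized."""
    below = np.asarray(ear_series) < threshold
    # a blink starts where the series crosses from open to closed
    starts = np.flatnonzero(below[1:] & ~below[:-1])
    minutes = len(ear_series) / fps / 60.0
    return len(starts) / minutes
```

In practice the landmarks would come from a face-tracking library, and blink rate is only one weak signal to be combined with many others; newer generators have largely learned to blink.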
Several technologies have converged to make fakery easier. They are all readily accessible.
All it takes is a smartphone, a computer, and off-the-shelf software…
For instance, smartphones let anyone capture video footage, and powerful computer graphics tools have become much cheaper.
Of course, photo fakery is far from new. But AI has completely changed the game. Until recently, only big-budget movie studios could carry out a video face-swap, and it would probably have cost them millions of dollars.
AI now makes it possible for anyone with a decent computer and some time to spare to do the same thing.
New AI software systems allow things to be distorted, remixed, and synthesized in astonishing ways.
To be sure, AI is not just a better version of Photoshop or iMovie. This new technology lets a computer learn how the world looks and sounds so it can create convincing simulacra.
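The classic face-swap recipe behind many deepfakes is a pair of autoencoders that share one encoder: the shared encoder learns identity-independent features (pose, expression, lighting), while each decoder learns to render one specific face. Feeding person A's frame through the shared encoder and then through person B's decoder produces the swap. The toy sketch below illustrates only the idea, using NumPy linear layers and synthetic 64-dimensional "faces"; the data, dimensions, and learning rate are all illustrative assumptions, far simpler than real deepfake networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face crops: 64-dim vectors from two distinct
# distributions, "person A" and "person B" (synthetic, illustrative only).
A = rng.normal(0.0, 1.0, (200, 64)) + 0.5
B = rng.normal(0.0, 1.0, (200, 64)) - 0.5

d_latent = 16
W_enc = rng.normal(0.0, 0.1, (64, d_latent))    # shared encoder
W_dec_a = rng.normal(0.0, 0.1, (d_latent, 64))  # decoder: renders person A
W_dec_b = rng.normal(0.0, 0.1, (d_latent, 64))  # decoder: renders person B

def recon_error(X, W_dec):
    """Mean squared reconstruction error through the shared encoder."""
    return float(np.mean(((X @ W_enc) @ W_dec - X) ** 2))

err_before = recon_error(A, W_dec_a)

lr = 0.02
for step in range(300):
    # The encoder trains on both identities, which pushes it toward
    # identity-independent features; each decoder sees only one person.
    for X, W_dec in ((A, W_dec_a), (B, W_dec_b)):
        Z = X @ W_enc                          # encode
        residual = (Z @ W_dec - X) / len(X)    # scaled reconstruction error
        grad_dec = Z.T @ residual
        grad_enc = X.T @ (residual @ W_dec.T)
        W_dec -= lr * grad_dec                 # in-place: W_dec_a or W_dec_b
        W_enc -= lr * grad_enc

# The "swap": encode person A's frames, decode with person B's decoder.
swapped = (A @ W_enc) @ W_dec_b
```

Real deepfake tools replace the linear maps with deep convolutional networks and train on thousands of face crops per identity, but the swap trick, encode with the shared half and decode with the other person's half, is the same.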
“Material Effects”
This is why many argue that deepfakes threaten to further distort the line between truth and fiction in politics. Already the internet accelerates and reinforces the dissemination of disinformation through fake social-media accounts.
“Alternative facts” and conspiracy theories are common and widely believed. There is some evidence that fake news stories may have influenced the last US presidential election.
In September, a satirical Italian news show aired a deepfake video featuring a former Italian prime minister heaping insults on other politicians. Most viewers realized it was a lampoon, but a few did not.
The consequences have even turned deadly at times. Manipulated videos and viral rumors helped spark ethnic violence in Myanmar and Sri Lanka last year.
The other side of the deepfake coin…
“You can already see a material effect that deepfakes have had,” says Nick Dufour, a Google engineer overseeing the company’s deepfake research. “They have allowed people to claim that video evidence, that would otherwise be very convincing, is a fake.” Without a way to disprove such claims, it will become increasingly difficult for juries to convict perpetrators.
We are apparently already deep into an era when we can’t trust anything, even authentic-looking videos that seem to convey real news.
Will Facebook really help us decide what is credible and whom to trust?