Deepfakes and the Infocalypse
What You Urgently Need To Know
- Read in 13 minutes
- Audio & text available
- Contains 8 key ideas
Deepfakes and the Infocalypse (2020) is an urgent warning about the dangers posed by fake – but extremely realistic – audiovisual material called deepfakes. They are powered by artificial intelligence, and scammers and hackers are already using them to defraud businesses and harass individuals. Governments are joining in, as well; the use of deepfakes for propaganda is growing. We need to actively prepare for a time when deepfakes become commonplace. If we don’t, we’ll barrel headfirst into an information apocalypse.
Key idea 1 of 8
Photo, video, and audio manipulation has become easy thanks to AI.
In the nineteenth century, humans invented photography. For the first time, we could capture a true, seemingly incontrovertible slice of reality. But very soon, it became clear that “reality” could be not only captured, but also manipulated.
At first, altering photos was a painstaking process. Over time, though, it became simpler. Now, anyone can do it – just download a free app. As a result, we’ve become used to the idea that photos can be altered and know to look out for any retouching or editing.
But aren’t audio and video different? Surely they can’t be convincingly faked, right? In fact, new developments in artificial intelligence prove quite the opposite.
The key message here is: Photo, video, and audio manipulation has become easy thanks to AI.
AI, or artificial intelligence, is software that processes information through deep learning, a technique that lets it make decisions autonomously based on what it has “learned” from crunching large amounts of data. The term “deepfake” is derived from this “deep learning,” plus – for obvious reasons – the word “fake.”
The first deepfakes, posted to the website Reddit by an anonymous user, showed how AI can swap one person’s face into an existing video.
Before long, they were attracting some worrying attention. In late 2017, a journalist named Samantha Cole published an article called “AI-Assisted Fake Porn is Here, and We’re All Fucked.” Her story warned of a Reddit forum full of deepfake porn. Its founder used AI to swap the faces of Hollywood celebrities onto the bodies of porn stars.
Deepfake porn is non-consensual, deeply embarrassing, and demeaning. And it doesn’t matter how rich you are – there’s nothing you can do to wipe it off the internet. Even Scarlett Johansson, the highest-paid actress in Hollywood, couldn’t protect her own name from it.
The fake porn forum on Reddit was eventually taken down. But its creator shared the code that he’d used to make the deepfakes. Now, there’s a whole suite of free tools and software out there, open to anyone who wants to produce their own deepfakes.
Sounds horrifying, doesn’t it? But this is all just the tip of the iceberg. Deepfake technology is continuing to improve. Soon, it may become literally impossible to tell when an image, video, or audio clip is fake. This technology is already leading us down a dark path of mis- and disinformation.