It has always been good advice to take what you see on the internet with a pinch of salt, but online video has lately become even less trustworthy. Deepfakes, clips altered or fabricated with an artificial intelligence technique called machine learning, make alternative realities easier to create and disseminate.
In the video above, Sam Gregory, a program director at the nonprofit Witness, which promotes the use of video to defend human rights, tells WIRED that we should prepare to see a lot more deepfakes. Not all of them will be friendly—and there won’t immediately be a technical solution to identify and block them, as with spam email. “We’re going to get more and more of this content, and it’s probably going to get better in quality,” Gregory says.
Most deepfake videos circulating online are pornographic, and some have been used to harass or discredit women journalists and activists, Gregory says. US politicians have warned that deepfakes could undermine elections. Others offer G-rated hijinks, like the YouTube videos showing Nicolas Cage starring in roles he never played.
That variety of uses means that people should adjust how they think about video in the deepfakes era, Gregory says. Even if technology could accurately flag fakes—so far, none can—the context of a clip is crucial. A perfectly fake president could be political chicanery, or high-production-quality satire.
Keeping deepfakes fun, not fearsome, will come down to human psychology. “I don’t think that it’s the end of truth,” Gregory says, pointing out that images are already widely understood to be fake-able. “We have to be skeptical viewers [and] build the media literacy that will deal with this latest generation of manipulation.”