Using Photoshop to alter images to make the subject look thinner, healthier or more attractive has become so commonplace that the word has entered the lexicon as a verb, adjective and noun. Advances in AI, specifically deep learning using artificial neural networks, have taken this activity to a potentially more harmful level. Last season’s liposuction of belly fat has become this season’s complete head transplants of politicians and celebrities.
Replacing the face or head in an image or video using AI creates what is known as a deepfake – a concatenation of “deep” (from “deep learning”) and “fake” (from “fake news”). Relatively innocuous when the result is humorous and obviously fake, deepfakes have far more serious implications when the resulting image or video is convincingly authentic and created to mislead.
Simple deepfakes have the subject mouthing words to a fake soundtrack. AI is used to manipulate the video so that the speaker’s mouth corresponds with the fake words. The Face2Face application is an example of one used for this purpose, and an explanation of how it was used to manipulate a video of President Obama is here.
The existence of video footage used to be reliable evidence of an event having occurred. Deepfakes now cast doubt on all video or images – they can no longer be accepted at face value as representing reality. The converse also applies: the credibility of deepfakes allows those actually caught on camera in compromising situations to falsely claim that the image or video is fake, providing the guilty with plausible deniability. AI brings uncertainty to reality.
Deepfakes can be used in disinformation campaigns during elections or to harm a political incumbent. They are also a tool for blackmail, or for conducting fraud by impersonating someone for financial gain. Just last week, one of our staff contacted me to say they had received an email purporting to come from me, requesting that they purchase money cards and send them somewhere. Although the staff member was initially convinced of the authenticity of the request, thankfully they realised it was a confidence trick before taking any costly action. The incident was a reminder of how convincing these social engineering scams can be. Audio deepfakes have been used to add credence to similar scams, such as the example described here.
Advanced deepfakes are created using a technique called generative adversarial networks (GANs). This pits two neural networks against each other: a generator creates the fake image or video, while a discriminator attempts to determine whether what it is shown was generated by a neural network or is real. The generator uses the discriminator’s feedback to produce an improved image or video, and the cycle repeats, iteratively refining and tweaking until the discriminator can no longer determine that the output has been generated by a neural network – i.e. until the image or video is good enough that the fake can no longer be spotted.
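The adversarial loop described above can be sketched with a toy one-dimensional example – a minimal illustration with hand-derived gradients, not a production GAN. Here a linear generator learns to mimic samples drawn from a Gaussian centred on 4, guided only by the feedback of a logistic discriminator; all names and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from a Gaussian centred on 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: a linear map of noise, G(z) = a*z + b (starts far from the data).
a, b = 1.0, 0.0
# Discriminator: logistic regression on a scalar, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, the generator's output distribution should sit near the
# real data's mean of 4 – the discriminator can no longer separate them.
gen_mean = np.mean(a * rng.normal(0.0, 1.0, 10000) + b)
print(f"generated mean ~ {gen_mean:.2f} (real mean is 4.0)")
```

The same push-and-pull dynamic, scaled up to deep convolutional networks and millions of pixels, is what drives the fake toward photorealism.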
The team at Nvidia developed a technique known as Progressive GANs, capable of generating fake faces – convincing but entirely new faces which don’t belong to any real person. The technique and examples are here.
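The core idea behind progressive growing is to train at low resolution first, then add higher-resolution layers one at a time, fading each new layer in gradually. A minimal sketch of that fade-in step (illustrative only – the real system uses deep convolutional layers, not random arrays):

```python
import numpy as np

rng = np.random.default_rng(2)

def upsample2x(img):
    """Nearest-neighbour upsampling: each pixel becomes a 2x2 block."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(low_res, new_layer, alpha):
    """Blend the upsampled low-resolution output with the output of the
    newly added high-resolution layer. alpha ramps from 0 to 1 as
    training proceeds, so the new layer is introduced gradually rather
    than destabilising the already-trained stages."""
    return (1 - alpha) * upsample2x(low_res) + alpha * new_layer

low = rng.random((4, 4))    # output of the already-trained 4x4 stage
high = rng.random((8, 8))   # output of the newly added 8x8 layer

# At alpha=0 the network still behaves like the trained 4x4 generator;
# at alpha=1 the new 8x8 layer has fully taken over.
print(fade_in(low, high, 0.5).shape)   # -> (8, 8)
```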
GANs enable the generation of highly convincing images and videos, and imply that neural networks will eventually be incapable of identifying fakes, nullifying them as a defensive identification method. Although Facebook is using AI to identify deepfakes on its site, GANs will ensure that its AI only catches the poorly trained ones. We are fast approaching the point where AI will not be an effective detection method against an attacker prepared to spend the time and effort on GAN methods.
If AI cannot defend against GAN-developed deepfakes, perhaps having all images and videos digitally signed by the device they were created on could ensure integrity. Embedding digital signatures at intervals throughout a video could protect against alterations. Twitter, Reddit and Google have taken measures to prevent deepfakes being shared through their platforms, but these measures have their limitations.
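The segment-signing idea can be sketched with Python’s standard library. This toy uses HMAC-SHA256 as a stand-in for a real device signature scheme (in practice an asymmetric signature from a device-held private key would let anyone verify); segment size, key and data are all illustrative:

```python
import hmac, hashlib

SEGMENT = 1024  # sign the stream in 1 KiB segments (illustrative size)

def sign_stream(data: bytes, key: bytes) -> list:
    """Produce one HMAC-SHA256 tag per segment. Each tag also covers the
    segment's byte offset, so segments cannot be reordered undetected."""
    tags = []
    for i in range(0, len(data), SEGMENT):
        msg = i.to_bytes(8, "big") + data[i:i + SEGMENT]
        tags.append(hmac.new(key, msg, hashlib.sha256).digest())
    return tags

def verify_stream(data: bytes, key: bytes, tags: list) -> list:
    """Return the indices of segments whose tags no longer match."""
    bad = []
    for n, i in enumerate(range(0, len(data), SEGMENT)):
        msg = i.to_bytes(8, "big") + data[i:i + SEGMENT]
        expected = hmac.new(key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tags[n], expected):
            bad.append(n)
    return bad

key = b"device-embedded-secret"       # in practice, a per-device signing key
video = bytes(range(256)) * 16        # 4 KiB of stand-in "video" bytes
tags = sign_stream(video, key)

tampered = bytearray(video)
tampered[2000] ^= 0xFF                # flip one byte in the second segment
print(verify_stream(bytes(tampered), key, tags))   # -> [1]
```

Because each segment carries its own tag, verification not only detects alteration but localises it to the affected stretch of video.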
Deepfake techniques can also be used to sabotage autonomous vehicles. Researchers at Google found that by manipulating a relatively small number of pixels in a photo of an elephant, they could fool a neural network into concluding the image was of a car, even though it still looked like an elephant to a human. The specific pixels that must be altered to change the classification depend upon the training of the neural network. As autonomous cars are trained to recognise objects encountered on the road, an attacker who sabotages these images could produce identification errors with profound consequences. This is one of the many issues that will need to be addressed to ensure the safety and security of autonomous vehicles.
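The pixel-manipulation attack can be illustrated in the style of the fast gradient sign method, using a toy linear classifier in place of a deep network – a minimal sketch, with all weights and labels invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny stand-in "classifier": logistic regression over a 64-pixel image.
# In reality this would be a trained deep network.
w = rng.normal(0, 1, 64)

def predict(x):
    score = w @ (x - 0.5)   # pixels centred around mid-grey
    return ("elephant" if score > 0 else "car"), score

# Start from an input the classifier confidently labels "elephant":
# each pixel nudged slightly in the direction its weight favours.
x = 0.5 + 0.1 * np.sign(w)
label, score = predict(x)

# FGSM-style attack: shift every pixel by eps in the direction that
# decreases the "elephant" score. The gradient of the score with
# respect to the input is simply w, so the perturbation is -eps*sign(w).
eps = 0.25
x_adv = x - eps * np.sign(w)

adv_label, adv_score = predict(x_adv)
print(label, "->", adv_label)                          # elephant -> car
print("max per-pixel change:", np.max(np.abs(x_adv - x)))   # 0.25
```

Every pixel moves by at most 0.25 on a 0–1 scale – small enough that a human still sees essentially the same image – yet the classification flips completely.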