Deepfakes and the race to detect them
Deepfake videos are terrifying.
We’ve seen the havoc caused by fake news spread through ordinary static social media posts and text-based messages on networks like WhatsApp. Now imagine fake news delivered as a video of someone you would otherwise believe: a president, prime minister or CEO.
It’s coming, and it’s coming soon. None of the leading social networks has systems in place yet to monitor or control these videos.
So efforts to fight fake, AI-created videos are focused on developing other AI systems that can recognise whether a video was generated by AI or filmed of real people.
Videos are harder to authenticate than many other forms of media because they are routinely compressed and reformatted, which changes the underlying structure of the file. A typical compression algorithm takes a large video and reduces its file size (as happens when you share a video on Facebook, WhatsApp or YouTube) by discarding as much information as possible without visibly damaging the picture. That means watermarks or other verification data embedded in the file can be stripped out easily, and even innocently.
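A toy sketch of why this happens: hide a watermark in each pixel's least-significant bit, then apply coarse quantisation, the kind of information-discarding step lossy compression performs. The quantisation step here is an invented stand-in, not a real codec, but it shows how the watermark vanishes as a side effect.

```python
# Toy illustration, not a real codec: the LSB watermark is wiped out
# by an ordinary information-discarding (lossy) step.

def embed_lsb(pixels, bits):
    """Set each pixel's least-significant bit to a watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def lossy_quantise(pixels, step=8):
    """Round pixel values to the nearest multiple of `step`,
    discarding fine detail much like aggressive compression."""
    return [min(255, round(p / step) * step) for p in pixels]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

pixels = [52, 199, 128, 77, 240, 33, 90, 161]
watermark = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_lsb(pixels, watermark)
print(extract_lsb(marked) == watermark)        # True: intact before compression
print(extract_lsb(lossy_quantise(marked)))     # watermark gone after quantisation
```

No one set out to remove the watermark here; discarding the least important bits is just what compression does.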
So, how do you detect a deepfake?
One approach is to look for normal, real human behaviours. People blink at fairly regular intervals, for example. If an AI system studies a video and finds patterns that fall outside the range of normal human behaviour, that can be a flag.
For celebrities, you can train an AI system on their unique patterns. Donald Trump moves, gestures and tilts his head while speaking in particular ways. Even if you copy his face onto an actor’s body, the result is unlikely to match his traits perfectly.
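One minimal way to picture that comparison: measure a few mannerism traits in a clip and check each against the person's baseline footage with a z-score. Every trait name and number below is invented for the demo; a real system would use learned features, not hand-picked counts.

```python
# Illustrative sketch: compare per-minute mannerism counts from a clip
# against a person's baseline clips. All data here is made up.

from statistics import mean, stdev

def z_scores(sample, baseline):
    """z-score of each measured trait against the baseline clips."""
    return [abs(value - mean(baseline[trait])) / stdev(baseline[trait])
            for trait, value in sample.items()]

def matches_person(sample, baseline, threshold=3.0):
    """Treat the clip as genuine only if no trait is >3 sigma off."""
    return all(z < threshold for z in z_scores(sample, baseline))

# Baseline: trait counts per minute from authentic footage (invented).
baseline = {
    "head_turns": [12, 14, 13, 15, 12],
    "hand_gestures": [30, 28, 33, 31, 29],
}

genuine_clip = {"head_turns": 13, "hand_gestures": 31}
faked_clip = {"head_turns": 4, "hand_gestures": 55}

print(matches_person(genuine_clip, baseline))  # True
print(matches_person(faked_clip, baseline))    # False
```

The actor wearing the copied face only needs to drift a few standard deviations from the real person's habits for a check like this to notice.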
There are other approaches too, like studying the way AI creates new pixels when merging different images, but overall it is going to be a back-and-forth game for years. Each new detection method will be met with a counter.
Let’s just hope it happens fast enough.