Can deepfakes be detected in videos?

With enough stock images, it’s not that hard to turn your neighbor into a famous politician. Deepfakes weren’t hard to spot at first: even the best of them had visual problems like blur, distortion, and unnatural facial features that gave them away.

However, this is a cat-and-mouse game: as soon as one method of detecting deepfakes became known, the next generation of fakes corrected the flaw. Are there any reliable ways to find out which videos are trying to trick us?

Deepfakes: visual cues

Artifacts are not just the things Indiana Jones puts in museums: they are also the small deviations left behind by image and video processing. In early deepfakes, they could often be caught by the human eye, and bad deepfakes can still show a few warning signs, such as blurred edges, an overly smoothed face, double eyebrows, glitches, or a general feeling of “unnatural facial behavior”.

However, generation techniques have now improved to the point where artifacts are visible only to other algorithms that comb through video data and examine it at the pixel level. Some of these detectors are quite creative: one technique, for example, checks whether the direction the nose points matches the orientation of the face. The difference is too subtle for people to notice, but machines have proven to be pretty good at it.
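A minimal sketch of that nose-versus-face idea: estimate the head pose twice, once from the central landmarks (the region a face swap replaces) and once including the face outline, and flag frames where the two disagree. It assumes you already have named 2D landmarks from a detector such as dlib or MediaPipe; the 3D reference coordinates and the central/whole split here are illustrative assumptions, not a production calibration.

```python
import numpy as np
import cv2

# Generic 3D reference points for a frontal face (arbitrary units),
# as used in common OpenCV head-pose tutorials.
MODEL_POINTS = {
    "nose_tip":    (0.0,    0.0,    0.0),
    "chin":        (0.0, -330.0,  -65.0),
    "left_eye":  (-225.0,  170.0, -135.0),
    "right_eye":  (225.0,  170.0, -135.0),
    "left_mouth":(-150.0, -150.0, -125.0),
    "right_mouth":(150.0, -150.0, -125.0),
}

def estimate_pose(names, landmarks_2d, frame_size):
    """Solve for a head rotation vector from a subset of landmarks.
    landmarks_2d: dict mapping the names above to (x, y) pixel coords."""
    h, w = frame_size
    # Simple pinhole camera approximation: focal length ~ frame width.
    camera = np.array([[w, 0, w / 2],
                       [0, w, h / 2],
                       [0, 0, 1]], dtype=np.float64)
    obj = np.array([MODEL_POINTS[n] for n in names], dtype=np.float64)
    img = np.array([landmarks_2d[n] for n in names], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(obj, img, camera, None)
    return rvec if ok else None

def pose_mismatch(landmarks_2d, frame_size):
    """Crude distance between the pose from the central face region and
    the pose from the whole face; deepfaked frames tend to show larger
    disagreement, since only the central region is synthesized."""
    central = ["nose_tip", "left_eye", "right_eye",
               "left_mouth", "right_mouth"]
    whole = list(MODEL_POINTS)
    r1 = estimate_pose(central, landmarks_2d, frame_size)
    r2 = estimate_pose(whole, landmarks_2d, frame_size)
    if r1 is None or r2 is None:
        return float("inf")
    return float(np.linalg.norm(r1 - r2))
```

In practice a detector would average this mismatch over many frames and compare it to a threshold learned from real footage, rather than judging a single frame.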

Biometric cues

For a while, it seemed that the key to exposing deepfakes was their lack of natural blinking: photos of people with their eyes closed are relatively rare, so the training data rarely taught the models to blink. However, it didn’t take long for the next generation of deepfake technology to “learn to blink,” which quickly blunted this technique.
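The blink check itself is easy to sketch. Below is a minimal version using the classic eye-aspect-ratio measure (the eye’s height-to-width ratio drops sharply during a blink), assuming per-frame eye landmarks in the common six-point layout from any face-landmark detector; the threshold and the “suspicious” blink rate are illustrative assumptions, not calibrated values.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) points in the order [left corner, two upper-lid
    points, right corner, two lower-lid points]."""
    eye = np.asarray(eye, dtype=float)
    vertical1 = np.linalg.norm(eye[1] - eye[5])
    vertical2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical1 + vertical2) / (2.0 * horizontal)

def looks_blinkless(ear_per_frame, fps, ear_threshold=0.21):
    """Flag a clip whose subject almost never blinks. People blink
    roughly 15-20 times per minute; early deepfakes blinked far less."""
    blinks = sum(
        1 for prev, cur in zip(ear_per_frame, ear_per_frame[1:])
        if prev >= ear_threshold > cur  # EAR just dropped: blink onset
    )
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / max(minutes, 1e-9) < 5  # suspiciously few blinks
```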

Other biometric indicators have not yet been fully defeated, such as personal mannerisms, which algorithms cannot easily reproduce because they require some contextual understanding. Small habits, such as blinking rapidly when surprised or raising your eyebrows when asking a question, can be picked up and reused by a deepfake, but not necessarily at the right moments, since the software cannot (yet) work out when to apply them.

AI capable of reading heartbeats from video footage has many applications besides detecting deepfakes, but looking for the periodic movements and subtle color changes that signal a heart rate can help expose impostors.
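A minimal sketch of that idea: average the green channel over a patch of skin in each frame, then check whether the resulting signal has a dominant frequency in the normal heart-rate band. It assumes you have already tracked a forehead or cheek region across frames; the band limits and the strength measure are illustrative assumptions.

```python
import numpy as np

def pulse_strength(green_means, fps, low_hz=0.7, high_hz=4.0):
    """green_means: per-frame mean green value of a skin region.
    Returns the share of signal power inside the 42-240 bpm band;
    a real face usually shows a clear peak there, while a synthetic
    face often does not."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()          # drop the DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    total = spectrum[1:].sum()               # skip the zero-frequency bin
    return float(spectrum[band].sum() / total) if total > 0 else 0.0
```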

AI projects

Many big names are taking the deepfake problem seriously. Facebook, Google, the Massachusetts Institute of Technology, Oxford, Berkeley, and a host of startups and research groups are tackling it by training artificial intelligence to detect fake videos, including with the methods listed above.
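Under the hood, most of these efforts reduce to the same recipe: collect labelled real and fake face crops and train a classifier on them. Here is a minimal sketch of that recipe in PyTorch, fine-tuning a stock ResNet-18; the model choice, hyperparameters, and training loop are placeholders, not any particular project’s pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the head
# with a two-way output: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(face_batch, labels):
    """face_batch: (N, 3, 224, 224) float tensor of face crops;
    labels: (N,) long tensor, 0 = real, 1 = fake."""
    optimizer.zero_grad()
    loss = loss_fn(model(face_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```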

Of course, this only works up to a point. As long as deepfakes are created with state-of-the-art technology, there will always be a lag between the latest generation’s tricks and the detection algorithms’ ability to catch them.

Authentication is an important answer to deepfakes

Detection technologies are not the complete answer to deepfakes, as they will likely never be 100% successful. Deepfakes backed by enough time and money will be able to pass many detection tests and fool the AI. Let’s also remember how the Internet works: even after these fakes are exposed, they will keep circulating, and some group of people will go on believing in them.

This is why it is also important to have some sort of verification mechanism: proof of which video is the original, or something that can show whether a video has been altered. This is what companies like Factom, Ambervideo, and Axiom do, anchoring video data to immutable blockchains.

The main idea behind many of these projects is that the data contained in a video file, or generated by a specific camera, can be used to create a unique signature that changes if the video is modified. For example, an authentication code can be generated for a video uploaded to social media, which the original uploader registers on a blockchain to prove they are the original owner of the video.
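A minimal sketch of the fingerprinting idea in Python, with a plain dictionary standing in for the blockchain ledger; the function names and the registry structure are illustrative assumptions.

```python
import hashlib

def video_fingerprint(path, chunk_size=1 << 20):
    """SHA-256 of the raw file bytes, streamed in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

registry = {}  # stand-in for an immutable, timestamped ledger

def register(path, owner):
    registry[video_fingerprint(path)] = owner

def verify(path):
    """Returns the registered owner, or None if the file's bytes differ
    in any way from what was registered - including harmless re-encoding,
    which is exactly the limitation discussed below."""
    return registry.get(video_fingerprint(path))
```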

These solutions, of course, have their own problems: re-encoding a video changes the data in the file, and therefore the signature, without actually altering the visible content, and legitimate editing likewise breaks the signature. However, in high-stakes situations, such as commercial transactions where footage is used to verify delivery or win investor support, having this level of authentication can help prevent deepfake fraud.

Are deepfakes more dangerous than Photoshop?

At this point, we all assume that a still image may not be real, because we are fully aware that the technology exists to make almost anything look realistic in a photo.

Eventually, we may start to approach videos with the same skepticism, as faking them becomes as simple, and as convincing, as editing an image in Photoshop.

However, even with that general awareness, it is easy to imagine real-world incidents set off by a timely, high-quality deepfake in the not-too-distant future.

Based on materials from WebZnam.
