After a group of Riigikogu deputies held a video meeting with a fake Leonid Volkov, chief of staff to the opposition Russian politician Alexei Navalny, Estonia began actively discussing deepfake technology, which realistically replaces people's faces and voices.
Riigikogu deputy Marko Mihkelson, after the meeting with the pseudo-Volkov, confidently stated that the deception was carried out by substituting the image, and that it was an operation of the Russian special services. “We must take such cyberattacks very seriously and understand that this is one of the many tools used by the Russian intelligence services,” Mihkelson said.
First they recognized faces, now they swap them
Deepfake technology grew out of facial recognition, whose development began 40 years ago and which has been in wide use since the second half of the 2010s. In 2017, out of curiosity, enthusiasts combined two neural networks: one that recognizes faces and one that generates them. This is how the deepfake was born.
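The face-swap setup that grew out of this experiment is often described as a shared encoder with two decoders: one decoder learns to rebuild person A, the other person B, and the swap happens by feeding A's encoded face through B's decoder. Below is a minimal NumPy sketch of that data flow only, with untrained random weights; all names and sizes are our own illustration, not any real deepfake tool's API.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 64          # size of the shared face representation
PIXELS = 64 * 64     # a flattened 64x64 grayscale face, for illustration

# One shared encoder compresses any face into a latent vector.
W_enc = rng.normal(0, 0.01, (LATENT, PIXELS))

# Two decoders: in a real system each is trained to rebuild one person.
W_dec_a = rng.normal(0, 0.01, (PIXELS, LATENT))
W_dec_b = rng.normal(0, 0.01, (PIXELS, LATENT))

def encode(face):
    """Shared encoder: face pixels -> latent vector."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Person-specific decoder: latent vector -> face pixels."""
    return W_dec @ latent

def swap_face(face_of_a):
    """The deepfake trick: encode A's expression, decode with B's decoder."""
    return decode(encode(face_of_a), W_dec_b)

face_a = rng.random(PIXELS)
fake_b = swap_face(face_a)
print(fake_b.shape)  # (4096,)
```

In a trained system the shared encoder is what carries the expression and head pose from one person onto the other's decoded face.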
“Like any tool, deepfake technology can be used for good or ill. The only limit here is human imagination,” says Siim Kumpas, adviser to the Strategic Communication Department of the Estonian government.
Did the special services try it?
Perhaps the deputies of Estonia, Latvia and Lithuania really were deceived by the special services using the new technology. Still, it is impossible to say for certain, because today anyone can get access to face-swapping technology.
“Since this technology is open source, it is easy to download. Moreover, it can be run without powerful hardware, simply by sending the video to Google's cloud,” explained internet security expert Alexander Tavgen.
Swapping someone else’s face is not easy
On paper it all looks simple, but in practice things are somewhat different. “At the moment the technology is still under development and requires quite substantial resources. In principle, it is not yet available to an ordinary person on a home computer, although there are already services where, for money, you can use the power of a large network of computers to generate video,” said Anton Keks.
Insight tried to overlay its own 30-second video onto footage of former Prime Minister Jüri Ratas. In five hours, the neural network on the deepfakesweb site produced only a rough approximation of the famous face; the video would not fool anyone. There are therefore serious doubts that deepfake technology was used in the video call between the deputies and the false Volkov, not least because neural networks still cannot swap faces in real time.
According to Alexander Tavgen, the technology has not yet reached the quality needed to deceive a large audience in real time.
Siim Kumpas is also not convinced that the deputies were deceived by a neural network. “It could have been either. But this example shows that we had not been paying attention to deepfakes in the context of video conferences or meetings. The danger is that it is possible to create a reality in which someone speaks so convincingly that real events follow: some kind of unrest, a particular outcome in a vote, or something else,” Kumpas says.
Look at the ears and beard
If in any doubt, look at the boundaries of the face. “Most often there are distortions where the hair or beard begins, and around the neck and ears. The neural network reconstructs the face, but it does not change the hairstyle, hair color, ear shape, and so on,” explained Anton Keks.
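The seam artefacts Keks describes suggest a crude automated check: compare high-frequency detail in a band just inside the swapped face region with detail just outside it, since blending often smooths the seam. The toy sketch below is purely illustrative; the function names, region coordinates and the "low ratio is suspicious" rule are our own assumptions, not a real detector.

```python
import numpy as np

def edge_energy(patch):
    """Mean absolute horizontal + vertical gradient: a rough sharpness measure."""
    gx = np.abs(np.diff(patch, axis=1)).mean()
    gy = np.abs(np.diff(patch, axis=0)).mean()
    return gx + gy

def seam_ratio(image, face_box, margin=4):
    """Sharpness just inside the top edge of the face box vs just outside it.

    face_box = (top, bottom, left, right). A heavily blended seam tends to be
    smoother than the surrounding hair and ears, so a ratio well below 1 would
    be suspicious under this (illustrative) heuristic.
    """
    t, b, l, r = face_box
    inner = image[t:t + margin, l:r]           # band just inside the boundary
    outer = image[max(t - margin, 0):t, l:r]   # band just outside it
    return edge_energy(inner) / (edge_energy(outer) + 1e-9)

rng = np.random.default_rng(1)
img = rng.random((64, 64))                     # stand-in for a video frame
ratio = seam_ratio(img, (16, 48, 16, 48))
print(ratio > 0)  # True: on uniform noise the ratio is close to 1
```

Real forensic detectors are themselves neural networks trained on such artefacts, but the idea is the same: look for statistics that change abruptly at the face boundary.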
His words are borne out by the video obtained by the Insight editor: he managed to “try on” the face of Jüri Ratas, but the program could not change the hairstyle or the proportions of the original.
For a neural network to render a person convincingly, it must study their face, facial expressions and head movements very thoroughly, and that requires hundreds of thousands of images of the original.
By now, for example, neural networks have created millions of photographs of non-existent people, based on hundreds of millions of photographs of real people posted on the internet. Even in these static images, however, distortions can still be spotted with the naked eye.
Resurrecting deceased actors
This technology has uses beyond pranks and intelligence work. Remember Paul Walker driving off into the sunset at the end of Furious 7? By then the actor had died in a car accident, so his face was recreated with computer graphics. “In principle, one use of deepfakes is to bring back, for example, deceased loved ones,” Anton Keks believes.
Moreover, according to Alexander Tavgen, in the foreseeable future neural networks will be able to create films on their own from adapted scripts, and the film will be new every time.
Scammers do not sleep
Naturally, fraudsters have not ignored the new technology either. According to Siim Kumpas, two years ago a company in Great Britain was defrauded of more than 200,000 euros: a caller posing as a prospective business partner phoned an employee and said a certain sum needed to be transferred urgently, and the call was so convincing that the scam succeeded.
“Many services have now moved online, and you very often talk to people over the internet instead of in person. It is now far less certain that you are talking to the person you think you are talking to,” was how Anton Keks described the frightening present and gloomy future.
But the devil is not as black as he is painted. If in doubt, it is enough simply to call the other party back over a regular telephone line; scammers are unlikely to be able to answer such a call.
Beyond such simple checks, many countries are discussing legislation to protect people from deepfakes, and programmers are building neural networks that detect and counter falsified images.
Based on materials from ERR. By Arthur Tooman