Cyber fraudsters will speak in your voice

Anti-fraud experts at financial institutions are seriously concerned about the emergence of a new social engineering technique: voice forgery. Given a voice sample of only a few minutes of speech, the technology lets fraudsters synthesize absolutely any text in that voice and use it for their own purposes. Last year, there was an illustrative case in which the director of a division at a large energy company received a call from someone posing as the head of the parent company, who asked him to pay an invoice. The damage amounted to hundreds of thousands of euros. Cybersecurity specialists at Informzashita believe this is only the beginning. The technology is becoming popular and accessible and will be a new “favorite” way to deceive people, which means a methodology for detecting and preventing such threats needs to be developed.

Solutions for editing and modifying voice recordings have existed for a long time. Back in 2016, programs appeared on the market that could not only edit a recorded voice message but also synthesize new words and phrases from just a few minutes of a person’s speech, simply by typing the desired text. Adobe, for example, demonstrated a product that makes the synthesized voice almost indistinguishable from the original.

A younger technology, which has every chance of becoming a new and sophisticated means of deception, is the creation of deepfake videos. In an era of ubiquitous remote work and Zoom calls, fake videos are turning from entertainment into dangerous tools of manipulation and deception.

Deepfakes use powerful machine learning and artificial intelligence techniques to create visual and audio content. As technology and computing power advance, deepfakes become less and less distinguishable from the original, making it difficult to identify a fake quickly. In addition, database leaks play into the hands of scammers, allowing attackers to make targeted calls and more easily gain the trust of potential victims.

Detecting counterfeits is a complex process that currently requires technical knowledge and skills. In 2019, cloud services appeared that can “check” a video for deformations, pixelation, and other “impurities” in the file and say, with some degree of probability, whether the file is original. To do this, you need a recording of the conversation (or video) and the ability to run it through a special program. This means that in real time it is almost impossible for an ordinary person to determine whether a real interlocutor is on the other end of the line.

How can you protect yourself? As the anti-fraud experts at Informzashita explained, the most common goal of scammers is money, so any request from an unknown number to transfer money to an unknown card should be carefully verified, even if you hear the voice of a loved one. In the case of video, there are programs for recording video meetings and services for verifying the authenticity and integrity of the footage.

It is well known that fraudsters invest substantial resources and funds in developing their technologies, and this should be an additional motivation for developers not to stand still. Smartphones already use machine learning and artificial intelligence, so in theory an application could be created to detect deepfakes automatically. This is neither an easy nor a quick task, but hardware manufacturers should be thinking in this direction.

Based on materials from SK PRESS