Open-source software has made deepfakes cheaper, more accessible, and more convincing. To show how easy it is to fabricate images of a real person found on the web, FireEye researchers created fake photos and audio recordings of Tom Hanks. The necessary tools are already in the public domain and require neither deep expertise nor serious expense: with publicly available source code and less than $100, a single researcher produced believable images and audio of the actor.
Philip Tully, a data scientist at the network security company FireEye, generated the fake Hanks to test how easily open-source software can be adapted for disinformation attacks. His conclusion: “People with little experience can use these machine learning models and do pretty effective things with them.”
Of course, viewed at full resolution, the photos give themselves away as fakes: the neck creases and skin texture, for example, look unnatural. But on the whole the images reproduce the actor’s features fairly accurately, from the furrowed brows to the cold gaze of his gray-green eyes. That is enough for them to pass as real photos of Hanks on social media.
To create the fake photos, Tully only had to collect a few hundred images of Hanks from the internet and spend less than $100 renting time with an image-generating program, an Nvidia system released the year before.
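The cheap trick the article describes is transfer learning: take a large pretrained generator and fine-tune only part of it on a small set of scraped photos. The toy sketch below illustrates that idea in miniature with NumPy; it is not FireEye’s actual pipeline, and the two-layer “generator”, its sizes, and the random “photos” are purely illustrative stand-ins for a real model such as Nvidia’s StyleGAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" generator: two linear layers mapping a 16-dim latent
# vector to a flattened 8x8 "image". A stand-in for a large pretrained
# model; all names and sizes here are illustrative assumptions.
W1 = rng.normal(size=(16, 32)) * 0.1   # early layer, kept frozen
W2 = rng.normal(size=(32, 64)) * 0.1   # final layer, fine-tuned

# A handful of "target photos" (random stand-ins for scraped images).
targets = rng.normal(size=(8, 64))
latents = rng.normal(size=(8, 16))

h = np.tanh(latents @ W1)              # features from the frozen layer
loss_before = np.mean((h @ W2 - targets) ** 2)

# Fine-tune only W2 with plain gradient descent on a reconstruction
# loss -- the frozen early layer is what makes the adaptation cheap.
lr = 0.05
for _ in range(200):
    err = h @ W2 - targets
    loss = np.mean(err ** 2)
    W2 -= lr * (2 / err.size) * (h.T @ err)

print(loss_before, loss)  # the loss drops as W2 adapts to the targets
```

The design point is that only a small slice of the model’s parameters is updated against a few examples, which is why the whole exercise fits into a budget of a few hundred photos and under $100 of rented compute.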
Tully also used other open-source AI software to try to mimic the actor’s voice from three YouTube clips, with less impressive results.
By demonstrating how cheaply and easily a person can create passable fake photographs, the FireEye project may heighten fears that AI-generated images and speech will amplify online disinformation. Most such fakes today are of low quality and serve entertainment or pornographic purposes, but the programs at the disposal of well-resourced organizations are capable of producing much better results.
According to Lee Foster, who analyzes online manipulation campaigns at FireEye, deepfakes don’t have to be high quality to work. The fake photograph of Hanks can be convincing enough in a world where people consume large amounts of information without looking closely at the images. “If you scroll quickly through your Twitter feed, you don’t scrutinize the profile pictures,” he says.
Making sophisticated deepfake videos still takes considerable time and expertise. Tim Hwang, a researcher at Georgetown’s Center for Security and Emerging Technology, prepared a report concluding that deepfakes are not a serious and imminent threat, but that society should invest in defenses anyway.