Microsoft has patented a chatbot that imitates a specific person, built from open data on social networks: posts, photos, videos, messages, and so on. Will there be demand for virtual conversations with digital twins of the dead, and does this technology violate our rights?
According to The Independent, the chatbot will be based on artificial intelligence that draws on a person's profile, including but not limited to images, voice data, social media posts, and emails. The chatbot will thus be able to simulate a conversation through voice commands and text chat.
A patent for creating a digital likeness of an individual was filed back in 2017 but approved only in December 2020. It details how the information a person leaves on the Web can be used to build a digital model of their personality in a desktop or mobile application. According to the patent application, the likeness is assembled from social media posts, chats on websites, emails, and photos and videos published on YouTube and other platforms. Apparently, the technology can recreate not only a person's typical speech and manner in text conversation but also "resurrect" their visual appearance and voice.
But Microsoft decided not to stop there: the company is also considering building two- and three-dimensional models of a person from images and videos of them. This would endow the chatbot with the appearance and behavior of the person whose data it is built on.
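To make the idea concrete, here is a minimal, purely hypothetical sketch of the simplest possible "persona" chatbot: one that answers by retrieving the stored social media post most similar to the incoming message. This is a toy stand-in, not the method described in the patent, which would presumably involve trained generative models; the corpus, function name, and matching approach are all assumptions for illustration.

```python
import difflib

# Hypothetical corpus: posts a person left behind on social networks.
posts = [
    "Just watched the sunrise over the bay. Unforgettable.",
    "Nothing beats a strong coffee on a Monday morning.",
    "Reading about space exploration again. The future is close.",
]

def persona_reply(message: str, corpus: list[str]) -> str:
    """Return the stored post most similar to the incoming message.

    A naive retrieval baseline: score every post against the message
    with difflib's character-level similarity ratio and return the
    best match. A real system would generate new text in the person's
    style rather than replay old posts verbatim.
    """
    scored = [
        (difflib.SequenceMatcher(None, message.lower(), post.lower()).ratio(), post)
        for post in corpus
    ]
    return max(scored)[1]

print(persona_reply("Strong coffee this Monday morning?", posts))
```

Even this trivial baseline hints at the privacy questions raised below: everything the bot "says" is reconstructed directly from data the person once made public.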
Experts note that Microsoft hardly intended merely to entertain users who have lost loved ones. Such a service may well find an audience, but the company apparently plans to use the technology to build its own virtual assistants, digital helpers, and artificial intelligence systems.
Jokes are already circulating on the Internet that Microsoft's PR specialists had to invent an entire line of funeral services just to disguise yet another project for collecting user data: "If you used to run into a former classmate at the supermarket checkout, you will soon find your late uncle selling the Windows operating system."
Many immediately pointed out that "identity theft," a concept we had previously encountered only in the movies, is now becoming real. A digital "cast" of a real person may prove indistinguishable from its prototype, and could be used to access various services, encrypted data, bank accounts, and so on.
Tim O'Brien, Microsoft's general manager of AI programs, admitted that he is worried about the consequences of this Black Mirror-style technology. But the software giant does not intend to give up on it.
From a legal point of view, it is a complete mess: who owns the data a person has made public as posts and pictures on social networks? Do developers have the right to use it for their own purposes? Who owns a digital clone, especially once the real person is gone? Can relatives of the "original" sue a "clone user"? And most interesting of all: if a digital avatar is a copy of a specific person, do that person's rights extend to it? There are far more questions than answers.
Microsoft has dealt with chatbots before, including self-learning ones: anyone could talk to its Zo app. In one conversation in 2017, the bot criticized the Koran, calling the book "unnecessarily cruel." Asked about Osama bin Laden, it said his capture was the result of "many years of intelligence gathering under the leadership of several administrations."
In 2016, there were reports about the Luka messenger, to which the developers added a chatbot based on the data of a deceased person. To create it, they used all the available information about Roman on social networks.
A year earlier, it was reported that developers from South Korea had created a digital likeness of a deceased girl.