AI system learns in conversation and talks like a human

LaMDA answers questions by “pretending” to be the dwarf planet Pluto

Google has announced a new AI system that learns during conversation and converses in a way that is hard to distinguish from a person. The new model is named LaMDA (Language Model for Dialogue Applications). Google calls the system “a revolutionary communication technology” that will let users engage in more natural, open-ended dialogue with AI assistants. A conversation can begin with a discussion of a concert and end with criticism of a country’s political system – a turn of the conversation that conventional chatbots are unable to handle.

LaMDA “is designed to converse on any topic,” said Google CEO Sundar Pichai, adding that the model is still in development but is already being used internally.

In its preview, Google showed a video of two conversations with the model, in which LaMDA answered questions while “pretending” to be the dwarf planet Pluto and then a paper airplane. The model was able to cite real events and facts on both topics, such as the New Horizons probe’s flyby of Pluto in 2015. When asked what the dwarf planet would like to tell people about itself, Pluto replied: “I want people to know that I am not just a random ball of ice. In fact, I am a beautiful planet.”


For example, LaMDA knows a little about the planet Pluto. So if a student wanted to learn more about space, they could ask about Pluto, and the model would give reasonable answers, making learning more fun and engaging. If that student then wanted to switch to another topic – say, how to make a good paper airplane – LaMDA could continue the conversation without any retraining.


Google is evaluating the model’s “interestingness” (judging whether responses are insightful, unexpected, or witty) and is working to ensure that LaMDA meets standards for fairness, accuracy, safety, and privacy. Pichai did not give details about the training data, prohibited topics, or the specific precautions the company is taking.

In the future, Google plans to build this technology into products such as Search, Google Assistant and Workspace, as well as make it available to third-party developers and corporate customers.

How LaMDA was created

Google has been working on the underlying technology for many years. Like the recent language models BERT and GPT-3, LaMDA is built on Transformer, a neural-network architecture that Google created and open-sourced in 2017. The architecture produces a model that can read a long sequence of words and sentences (a paragraph or a section), determine how the words relate to one another, and predict which words are likely to come next in the text.
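The step that lets a Transformer determine how words relate to one another is attention: every word’s vector is compared against every other word’s vector, and the resulting weights say how strongly each pair is linked. The sketch below is a minimal NumPy illustration of that single step; the four "word" vectors and their values are invented for the example, and real models learn both the vectors and additional projection matrices during training.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attention weights and the mixed output vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise word-to-word similarity
    # Softmax over each row so every word's attention sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights, weights @ V      # weighted mix of word vectors

# Four "words", each represented by a 3-dimensional vector (invented values).
# The first two vectors are similar, so those words should attend to each
# other more strongly than to the unrelated ones.
X = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

weights, output = scaled_dot_product_attention(X, X, X)
print(weights.round(2))  # row i: how much word i attends to each word
```

In a full Transformer this computation is repeated across many heads and layers, which is what lets the model track relationships over long chains of words.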

Unlike many other models, LaMDA was trained on dialogue, with special emphasis on open-ended conversations. In particular, the model learned to judge how well a response fits the preceding phrase or question. For example:

– I went to a guitar course yesterday.
– Super! My mom has an old Martin guitar and she loves to play it.

This is a sensible response to the preceding remark. But relevance is not the only mark of a good answer. Replies such as “good”, “I don’t know”, or “maybe” also fit a large number of completely different questions and remarks. A good response should additionally be specific – clearly tied to the context of the preceding utterance. In the example above, the response is both sensible and specific.
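LaMDA learns these judgments with a neural network, but the distinction itself can be illustrated with a deliberately crude hand-written heuristic. Everything in the sketch below – the list of generic replies, the word-overlap rule, the score scale – is invented for illustration and is not how LaMDA actually scores responses.

```python
# Generic replies that fit almost any remark and therefore say nothing
# specific (list invented for this example).
GENERIC_REPLIES = {"good", "ok", "i don't know", "maybe", "nice"}

def specificity_score(context: str, response: str) -> float:
    """Rough 0..1 score: how specific a response is to the preceding remark."""
    resp = response.lower().strip(" .!?")
    if resp in GENERIC_REPLIES:
        return 0.0  # fits everything, so it says nothing about the context
    response_words = set(resp.split())
    if not response_words:
        return 0.0
    context_words = set(context.lower().split())
    # Share of response words that echo the context: a rough proxy for
    # being "clearly tied to the context".
    overlap = len(context_words & response_words) / len(response_words)
    return min(1.0, 0.5 + overlap)  # any non-generic reply starts above 0.5

context = "I went to a guitar course yesterday."
print(specificity_score(context, "Maybe."))
print(specificity_score(context, "Super! My mom has an old Martin guitar."))
```

The generic reply scores zero, while the reply that picks up “guitar” from the context scores higher – the same direction of judgment the model is trained to make, just by learned means rather than fixed rules.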

LaMDA builds on earlier Google research showing that models trained on dialogue can learn to sustain almost any conversation. From that foundation, LaMDA can learn to give responses that are both sensible and specific.

Further development of LaMDA technology

Google wouldn’t be Google if it didn’t keep refining the model; simply producing yet another system that handles one aspect of language (such as dialogue) is not the end goal. Researchers continue to work on making responses more “interesting” and on assessing how well answers are grounded in facts. The goal is to teach the model to give well-reasoned, factually correct answers.

The team’s second critical task is to prevent abuse of such models: reinforcing biases, spreading hate speech, and distorting facts. Even though the model is trained on curated dialogue samples, that alone does not rule out misuse. For developers working with language models and AI, Google has published a separate resource containing the tools and datasets it uses for training.
