OpenAI presented GPT-3 in mid-2020, a language model of unprecedented scale. It was covered by major media outlets and made waves well beyond the AI community. (I wrote an overview of GPT-3 for Towards Data Science if you want to check it out.) It can produce fiction, poetry, music, code, and many other remarkable things.
Other big tech companies were not about to lag behind. A few days ago, Google executives presented some of their recent research and technologies at the annual I/O conference. One announcement caught the audience's attention: LaMDA, an AI capable of human-like conversations.
The purpose of this article is to review what we currently know about how the technology works.
Conversational artificial intelligence – LaMDA
In 2017, Google released the transformer, a free, open-source neural network architecture that later models such as BERT and GPT-3 are built on. By weighing the relationships between the words seen so far (the attention mechanism), such a model can predict the next word.
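Google hasn't published LaMDA's internals, but the attention mechanism at the core of every transformer can be sketched in a few lines of NumPy. The dimensions and values below are purely illustrative, not taken from any real model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each position attends to the others,
    weighting the value vectors by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity between positions
    # Numerically stable softmax over each row of scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted sum of value vectors

# Toy example: a sequence of 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one attended vector per token
```

In a real transformer this operation is repeated across many heads and layers, and in autoregressive models like GPT-3 a mask restricts each position to attend only to earlier ones.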
LaMDA is built on this same transformer foundation. There is, however, an important difference between this system and most chatbots: LaMDA was designed for open-ended conversation. In a blog post, VP Eli Collins and Senior Research Director Zoubin Ghahramani describe the characteristically meandering nature of human conversation: within minutes we may be talking about a completely different subject, because conversations often hinge on how we connect topics in unexpected ways.
LaMDA aims to handle exactly these situations, which would be a major step forward for chatbot technology. A chatbot with these abilities could hold a natural conversation with a human, and we could obtain information the way we do from other people rather than through keyword queries. According to Google, LaMDA's responses combine substance, specificity, interesting facts, and logic.
LaMDA was trained as a conversational model in much the same way as its predecessor, Meena, which Google presented in 2020. Meena could converse about virtually any topic. Its training objective was to minimize perplexity, a measure of how uncertain a model is when predicting the next token. Perplexity turned out to correlate well with sensibleness and specificity average (SSA), a human-judged metric that is useful for evaluating chatbot quality.
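Perplexity itself is simple to compute once you have the probabilities a model assigned to the tokens it actually observed. A minimal sketch, with hypothetical probabilities rather than anything from Meena or LaMDA:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability the
    model assigned to each observed token. Lower is better: a model
    with perplexity k is, on average, as uncertain as picking
    uniformly among k tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every observed token is
# exactly as uncertain as a uniform choice among 4 options:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0

# Higher probabilities on the observed tokens mean lower perplexity:
print(perplexity([0.9, 0.8, 0.7]))
```

Minimizing this quantity over a large dialogue corpus is, in essence, what "training to predict the next token" means.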
LaMDA takes this a step further. A particular strength is its focus on sensibleness (whether a sentence makes sense in the context of the conversation) paired with specific responses. As the authors note in their post, a response like "I don't know" may be sensible, but it is not very useful.
Google, however, wasn't content with sensible, detailed responses. They also want LaMDA to be interesting: insightful, unexpected, or witty. In addition, they consider factuality an important aspect of chatbot development, one that even GPT-3, for all its power, lacks. A response that is interesting but incorrect isn't enough.
Furthermore, the developers are tackling the ethical challenge of eliminating bias and potential harm in AI systems. Google wants to minimize gender and racial bias, hate speech, and inaccurate information in systems like LaMDA.
Conversations are unique to humans
People converse in many different ways, and conversations between humans can be challenging. A simple remark can steer a conversation in a direction very different from the original one, and the other person can easily go along with it. It's even fine to redirect the conversation by saying: 'I'm not sure how this is related, but…' We can use flowery or plain language, say something interesting or irrelevant, whatever pleases our sensibilities. We can stay at the surface or go deep.
The way conversations unfold is hard to comprehend even in retrospect. As an exercise, try to disentangle a good conversation you once had with a parent or friend. How did it come about? Could you repeat it? How did it end? Conversations and language are unique to humans. Each sentence can be interpreted in a thousand ways, and a whole world can be created just by choosing among them. It seems that LaMDA can do something similar.
A conversational AI such as LaMDA has huge potential. To determine whether it is truly human-like, we will need to test it ourselves, but the early signs are promising.
For conversational AIs to truly master conversation, they would need to be able to pose unexpected questions and switch topics. Could LaMDA initiate a new, smart, specific, and interesting direction for the conversation on its own?
Artificial intelligence cannot yet achieve a human-like level of intelligence. In our ever-changing world, we often react to the greater context in which we live. Sitting on a park bench with a friend, I might see clouds forming in the sky and ask, "Should we go inside?" Changing the subject makes sense not because of the conversation itself, but because of what's going on in the world around us.
It would be even more impressive if LaMDA or GPT-3 could incorporate this kind of pragmatic knowledge into their conversational toolkit. For that, an AI would arguably need a body in the world. In the meantime, wouldn't it be interesting to see LaMDA and GPT-3 have a conversation? What are your thoughts?