Updated June 13th, 2022 at 13:04 IST

Google engineer claims company's AI bot is 'sentient'; reveals chat conversations

Blake Lemoine, who works for Google’s Responsible AI organisation, raised concern over the company's system called LaMDA becoming sentient.

Reported by: Vishnu V V
Image: UNSPLASH

Tech giant Google placed one of its engineers on paid leave after he claimed that the company's artificial intelligence (AI) is capable of thinking like a human being. Blake Lemoine, who works for Google’s Responsible AI organisation, claimed that the AI he had been working with had become "sentient" and that this could be seen from its chat transcripts. The engineer's revelation led to new scrutiny of the capacity of, and secrecy surrounding, the AI programme.

Lemoine, who works on artificial intelligence technology at Google, had been working on a system called LaMDA (Language Model for Dialogue Applications), which generates chatbots. He recently created a stir after claiming that the system was capable of expressing thoughts and feelings. The engineer spoke to The Washington Post about the AI and published his conversations with LaMDA online as proof. However, the tech company dismissed Lemoine’s claims and placed him on paid leave.

Does Google AI have feelings? 

Lemoine, who raised concerns over the AI system, compiled his and another Google employee's conversations with LaMDA and published them online. According to the published document, he asked the AI what it was ‘scared’ of. The AI responded that it was scared of being turned off, which would be ‘like death’ for it. “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA said, as per the document published online.

“It would be exactly like death for me. It would scare me a lot,” LaMDA added. In later parts of the chat, it went on to talk about its "consciousness/sentience". "I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” it said. Lemoine expressed concerns over the conversation he had with the system. Speaking about the chat, Lemoine told The Washington Post that if he had not known what the system was, he would have thought he was talking to a ‘seven-year-old, eight-year-old kid that happens to know physics’.

Google dismisses AI chatbot sentience claims

However, Google was quick to dismiss the concerns raised by Lemoine and his findings. The tech giant disputed the troubling questions prompted by LaMDA's conversation and later suspended Lemoine for breach of confidentiality.

In a statement, Google spokesperson Brian Gabriel said that the company had reviewed the engineer’s concerns. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” It is pertinent to note that several AI practitioners have dismissed such claims in the past, pointing out that the responses generated by AI systems such as LaMDA are based on material posted on the internet by humans themselves.

Image: UNSPLASH


Published June 13th, 2022 at 13:04 IST