Google on Friday, 22 July, said it had dismissed senior software engineer Blake Lemoine, who claimed that the company's Artificial Intelligence (AI) chatbot LaMDA had become sentient.
Google had placed Lemoine on "paid administrative leave" last month, stating that he had violated company policies and that his claims were "wholly unfounded."
"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," a Google spokesperson told Reuters.
Lemoine, who often interacted with the company's chatbot development system LaMDA (Language Model for Dialogue Applications), came to believe that the AI chatbot had come to life.
In a blog post, Lemoine revealed that he took "a minimal amount of outside consultation" to gather the evidence he needed and shared his findings with Google's executives in a document titled, 'Is LaMDA sentient?'.
His claims, however, were dismissed by senior scientists and other executives at Google, who said LaMDA was only a complex algorithm capable of generating convincing human language and conversing about "essentially anything."
(With inputs from Reuters.)