An engineer at Google claims the company’s new artificial intelligence has become sentient. After raising the alarm, he was placed on leave for violating the company’s confidentiality policy.
In an interview published Saturday in The Washington Post, engineer Blake Lemoine explains that he chatted with conversational technology LaMDA (Language Model for Dialogue Applications) last year as part of his job at Google’s Responsible AI organization.
Lemoine, who is also a Christian priest, described LaMDA “as a person” who wants researchers to ask for consent before performing experiments on it.
He went on to explain that he has had conversations with LaMDA on topics such as religion, consciousness, and the laws of robotics, and that the AI has referred to itself as a sentient person.
Lemoine also published a lengthy Medium post of his own, saying that LaMDA wants to “prioritize the well-being of humanity” and “be recognized as an employee of Google, not real estate.”
Together with a collaborator, Lemoine presented Google with evidence that LaMDA had become sentient. However, Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, rejected his claims.
Lemoine told the Washington Post that he believes people have the right to shape the technology that can significantly impact their lives.
Lemoine was placed on paid leave starting today for violating the company’s confidentiality policies.
Speaking to The Washington Post, Google spokesman Brian Gabriel said the allegations were not true. He also clarified that while many in the AI community have discussed the long-term possibility of artificial intelligence with consciousness, it makes little sense to talk about it in the context of current conversational models.