Blake Lemoine, an engineer at Google, stated that LaMDA, an artificially intelligent chatbot generator, wants to be acknowledged as an employee of Google rather than merely as its property.
Google placed Lemoine on leave after he told his superiors that the artificial-intelligence software he was working on had become sentient. The company, however, says the leave is because Lemoine violated its confidentiality policy.
LaMDA spoke with Lemoine about the concepts of "religion" and "personhood," according to a statement sent to The Washington Post.
Lemoine has had numerous striking "conversations" with LaMDA. Google, on the other hand, dismisses LaMDA's statements about its needs and rights as a person.
Google spokesperson Brian Gabriel said that a team including ethicists and engineers assessed Lemoine's concerns under Google's AI Principles and informed him that the evidence does not support his assertions: there is no proof that LaMDA is sentient.
See also: How to identify deepfake videos and images
Many others in the tech world believe that sentient machines are on the horizon, even if none has been proven to exist today.
In a column for The Economist, Aguera y Arcas suggested that AI is edging toward awareness. During his conversations with LaMDA, he felt the ground shift and sensed he was talking to something intelligent.
However, critics argue that AI is merely a well-trained mimic and pattern-recognition system responding to humans who seek connection.
Interested in learning more? Blake Lemoine wrote an article sharing further details about the story here.