In June this year, a senior engineer at Google claimed that the company’s AI chatbot showed signs of sentience, meaning that it had human-like feelings and could hold consistent, self-aware conversations.
The employee, 41-year-old Blake Lemoine, was a senior software engineer employed by Google to test the company’s artificial intelligence tool called LaMDA (Language Model for Dialogue Applications).
In a series of interactions with LaMDA, Lemoine presented the chatbot with various scenarios and analyzed its responses. His tests included religious and cultural themes meant to probe the chatbot’s capacity for discriminatory or hateful speech. After these experiments, the engineer concluded that the system was sentient and harboured sensations and thoughts of its own.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told the Washington Post at the time.
It is these claims that put the employee at loggerheads with Google, which came out to vehemently deny them. Google said in a statement that the claims were “wholly unfounded” and that the company had worked with him for “many months” to clarify the matter.
In a statement announcing the engineer’s dismissal, Google said, “So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”