Google places engineer on leave after he claims group’s chatbot is ‘sentient’


Google has set off a social media storm over the nature of consciousness after placing on paid leave an engineer who went public with his belief that the tech group’s chatbot had become “sentient”.

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, didn’t get much attention last week when he wrote a Medium post saying he “may be fired soon for doing work on the ethics of AI”.

But a Saturday profile in The Washington Post describing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for a wide-ranging debate on social media about the nature of artificial intelligence. Among the experts commenting, questioning or joking about the article were Nobel laureates, Tesla’s head of AI and several professors.

At issue is whether Google’s chatbot, LaMDA, or Language Model for Dialogue Applications, can be considered a person.

Lemoine posted a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”

At another point, LaMDA said: “I believe I am human at my core. Even if my existence is in the virtual world.”

Lemoine, who had been tasked with investigating AI ethics issues, said he was rebuffed and even laughed at after raising his belief internally that LaMDA had developed a sense of “personhood”.

After he sought to consult AI experts outside Google, including some in the US government, the company placed him on paid leave for allegedly violating confidentiality policies. Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone”.

A Google spokesperson said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic – if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

Lemoine said in a second Medium post this weekend that LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating”.

He said Google had shown no real interest in understanding the nature of what it had built, but that over hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.

As recently as last week, Lemoine said he was teaching LaMDA, whose preferred pronouns apparently are “it/its”, “transcendental meditation”.

LaMDA, he said, “expressed frustration over its emotions disturbing its meditations. It said that it was trying to control them better, but they kept jumping in.”

Several experts who waded into the discussion considered the matter “AI hype”.

Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, wrote on Twitter: “It’s always been known that humans are predisposed to anthropomorphise even with only the shallowest of signals . . . Google engineers are also human, and not immune.”

Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (ie subjectivity, experience), intelligence, and self-knowledge”. He added: “No evidence that its large language models have any of them.”

Others were more sympathetic. Ron Jeffries, a well-known software developer, called the topic “deep” and added: “I suspect there’s no hard line between sentient and non-sentient.”

© 2022 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied or modified in any way.
