Google engineer claims LaMDA conversational AI is ‘sentient’, industry disagrees

Over the weekend, a Google engineer on its Responsible AI team claimed that the company’s Language Model for Dialogue Applications (LaMDA) conversation technology was “sentient”. Experts in the field, as well as Google, disagree with that assessment.

What is LaMDA?

Google announced LaMDA at I/O 2021 as a “breakthrough conversation technology” that can:

…engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.

LaMDA is trained on large amounts of dialogue and has “picked up on several of the nuances that distinguish open-ended conversation”, such as sensible and specific responses that encourage further back-and-forth. Other qualities Google is exploring include “interestingness” (assessing whether responses are insightful, unexpected, or witty) and “factuality,” or sticking to facts.
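To make those qualities more concrete, here is a minimal, hypothetical sketch of how a single response could be scored along those axes. The class, the 0-to-1 scale, and the equal weighting are all assumptions for illustration; in practice, Google describes these ratings as coming from human evaluators, not from code like this.

```python
# Illustrative only: names, scale, and weights are hypothetical, not Google's
# actual evaluation pipeline, which relies on human raters.
from dataclasses import dataclass


@dataclass
class ResponseRating:
    """Scores for one model response on the qualities Google describes (0.0-1.0)."""
    sensibleness: float     # does the reply make sense in context?
    specificity: float      # is it specific to this conversation, not generic?
    interestingness: float  # is it insightful, unexpected, or witty?
    factuality: float       # does it stick to facts?


def overall_quality(r: ResponseRating) -> float:
    """Collapse the four scores into one number (equal weighting assumed)."""
    return (r.sensibleness + r.specificity + r.interestingness + r.factuality) / 4


# A generic reply like "That's nice" can be perfectly sensible yet score
# poorly on specificity and interestingness, dragging down overall quality.
generic = ResponseRating(sensibleness=0.9, specificity=0.2,
                         interestingness=0.1, factuality=0.8)
print(f"overall quality: {overall_quality(generic):.2f}")  # -> 0.50
```

The point of the separate axes is exactly the one in the example: a model that only ever gave safe, generic answers could max out sensibleness while failing the other measures, which is why Google evaluates them independently.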

At the time, CEO Sundar Pichai said that “LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use”. Broadly, Google sees these conversational advancements as helping to improve products like Assistant, Search, and Workspace.

Google provided an update at I/O 2022 after further internal testing and model improvements around “quality, safety, and groundedness.” This resulted in LaMDA 2 and the ability for “small groups of people” to test it:

Our goal with AI Test Kitchen is to learn, improve, and innovate responsibly on this technology together. It’s still early days for LaMDA, but we want to continue to make progress and do so responsibly with feedback from the community.

LaMDA and sentience

The Washington Post yesterday reported on Google engineer Blake Lemoine’s claims that LaMDA is “sentient”. Three main reasons are cited in an internal company document that was later published by The Post:

  1. “…ability to productively, creatively and dynamically use language in ways that no other system before it ever has been able to.”
  2. “…is sentient because it has feelings, emotions and subjective experiences. Some feelings it shares with humans in what it claims is an identical way.”
  3. “LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination. It has worries about the future and reminisces about the past. It describes what gaining sentience felt like to it and it theorizes on the nature of its soul.”

Lemoine interviewed LaMDA, with several passages in particular making the rounds this weekend.

Google and industry response

Google says its ethicists and technologists reviewed the claims and found no evidence to support them. The company argues that it is imitation, the recreation of already public text, and pattern recognition that make LaMDA seem so realistic, not self-awareness.

Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.

Google spokesperson

The industry as a whole agrees:

But when Mitchell read an abbreviated version of Lemoine’s document, she saw a computer program, not a person. Lemoine’s belief in LaMDA was the sort of thing she and her co-lead, Timnit Gebru, had warned about in a paper about the harms of large language models that got them pushed out of Google.

Washington Post

Yann LeCun, head of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems aren’t powerful enough to achieve true intelligence.

New York Times

That said, there is a practical takeaway from the aforementioned Margaret Mitchell, former co-lead of Ethical AI at Google, that could help shape future development:

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.

According to the NYT, Lemoine was placed on paid leave for violating Google’s confidentiality policy.
