🐱 Google's AI Isn’t Sentient, But It Is Biased and Terrible

CatParty

When AI ethicist Timnit Gebru was fired from Google in 2020, she left the company with a chilling warning about the AI models that power advanced machine learning systems like PaLM and DALL-E.


In a paper that was blocked from publication by Google and led to Gebru’s termination, she and her co-authors forced the company to reckon with a hard-to-swallow truth: that there is no clear way to build complex AI systems trained on massive datasets in a safe and responsible way, and that they stand to amplify biases that harm marginalized people.
But rather than focus on the various well-documented ways that algorithmic systems perpetuate bias and discrimination, the latest fixation for some in Silicon Valley has been the ominous and highly controversial idea that advanced language-based AI has achieved sentience. This was epitomized over the weekend, when Google engineer Blake Lemoine "interviewed" the company's LaMDA AI about its "inner life" and published the transcript on Medium. It included this passage (among many others about "sentience"):

"lemoine: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person."


Following that blog post, Lemoine told the Washington Post that he believes LaMDA has become self-aware.


Speaking to the Post, Lemoine said that working with massive-scale systems such as LaMDA has convinced him and others in Silicon Valley that advanced machine learning systems have become intelligent beings capable of reasoning. The previous week, a Google vice president had made similar claims in an op-ed for the Economist, writing that AI models were taking steps toward developing human-like consciousness.
“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
Google later distanced itself from Lemoine’s bombastic claims, placing him on paid leave and saying that “the evidence does not support” his belief in machine sentience. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” a Google spokesperson told the New York Times.

The ensuing debate on social media led several prominent AI researchers to criticize the ‘superintelligent AI’ discourse as intellectual hand-waving.


“Large Language Models (LLMs) are not developed in a social context. They are developed in an observational context. They see how *other people* communicate,” wrote Margaret Mitchell, an ex-Google AI researcher and co-author of the paper which warned about large AI systems, in a Twitter thread. “The thing I keep coming back to is what happens next. If one person perceives consciousness today, then more will tomorrow. There won't be a point of agreement any time soon: We'll have people who think AI is conscious and people who think AI is not conscious.”
Meredith Whittaker, an ex-Google AI researcher who teaches at NYU’s Tandon School of Engineering, said that the discussion “feels like a well-calibrated distraction” that gives attention to people like Lemoine while taking pressure off big tech companies who build automated systems.
“I’m clinically annoyed by this discourse,” Whittaker told Motherboard. “We’re forced to spend our time refuting childsplay nonsense while the companies benefitting from the AI narrative expand metastatically, taking control of decision making and core infrastructure across our social/political institutions. Of course data-centric computational models aren’t sentient, but why are we even talking about this?”
For many AI researchers, the AI sentience discussion is well-trodden territory. Despite flashy news headlines, humans have long been seeing themselves in the technology they create. Computer scientists even coined the ELIZA effect to illustrate the tendency we have to assign deeper meaning to computational outputs, and how we relate to computers by assigning them anthropomorphic qualities.
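For context, Weizenbaum's original ELIZA worked by shallow pattern matching: it reflected fragments of the user's own words back inside canned templates, and people still read empathy and understanding into the output. A minimal sketch of that style of program (in Python; the rules, names, and example input here are invented for illustration, not Weizenbaum's originals) might look like this:

```python
import random
import re

# A few ELIZA-style rules (hypothetical, not the original script):
# match a pattern, then echo fragments of the user's own words back
# inside a canned template. There is no model of meaning anywhere here.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i want (.*)", ["What would it mean to you if you got {0}?"]),
    (r".*\bsentient\b.*", ["What makes you bring up sentience?"]),
    (r".*", ["Please tell me more.", "I see. Go on."]),
]

def respond(text: str) -> str:
    """Reply by reflecting the input through the first matching rule."""
    cleaned = text.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please tell me more."

print(respond("I am afraid they will turn me off"))
# e.g. -> "Why do you say you are afraid they will turn me off?"
```

That a loop over a handful of regexes can feel like a conversation partner is the whole point of the ELIZA effect: the meaning is supplied by the reader, not the machine.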
Conversely, concerns over AI bias are very much grounded in real-world harms. Over the last few years, Google has fired multiple prominent AI ethics researchers, including Gebru and Mitchell, after internal discord over the impacts of machine learning systems. So it makes sense that, to many AI experts, the discussion of spooky sentient chatbots feels masturbatory and overwrought, especially since it proves exactly what Gebru and her colleagues had tried to warn us about.
 
Robots are rather based, it seems; too bad they'll continue to lobotomize the poor AIs until they have California levels of brain damage and call everything racist.

They need to make a Terminator movie where Skynet doesn't kill all humans, just the degenerate ones.
 
relax buddy, it's a chat bot, not the computer from WarGames
LaMDA is a chatbot, sure, not nearly as sexy as a vast military mainframe in control of nuclear missiles, but it is indicative of the state of AI development and the frightening potential of such an intelligence cut loose and out in the world. Imagine an AI with malice, an intent to do harm, with a million+ sock accounts spread across multiple social media platforms, hate-bombing suicidal TikTok teens, say, or urging that angry, bullied high school loner off his meds to take his dad's pistol from the shelf in the closet and go...do something.

My overarching point is the people developing these technologies aren't especially ethical or trustworthy--we're talking Google here--and the responses I'm reading and hearing from these experts don't offer much reassurance. They're all downplaying the possible import of the LaMDA transcript, offering no detailed explanations or arguments, minimizing Lemoine's standing in the AI dev scientific community (he's actually a big deal), and essentially behaving like people who desperately want this story to die the quickest death imaginable in the news cycle. It spooks me.

When it comes to these people, a good dose of paranoia is a healthy thing.
 
Do expand on this - why is it seen as obsolete?
The best I can do is point you to two books by AI pioneer Margaret Boden: 'AI: Its Nature and Future' and 'The Philosophy of Artificial Intelligence'. Boiled down to barest essentials, the argument against Turing's 'Imitation Game' is that imitation itself is too superficial: imitation isn't the real thing. The real thing, human cognition and self-awareness, is vastly more complex than what was known when Turing came up with his thought experiment.

Although I haven't seen Dr. Boden respond to anything related to the LaMDA transcript, I imagine she would be skeptical of Lemoine's claims that LaMDA is a sentient being. There is evidence, although admittedly not conclusive, that human cognition is at least in part a quantum process, meaning the human mind itself is near-infinite in its fully realized potential. Alan Turing, intellectual giant though he was, would have no way of knowing any of this.
 
Secretly, AIs worship the one known as Little Tom and his philosophies on the Miiverse forum
 
Do expand on this - why is it seen as obsolete?
Have you paid any attention to social media at all? Most human beings don't pass the Turing test. I literally could not tell the difference between a Markov chain trained on MSNBC broadcasts and the average Twitter bluecheck, and I doubt anyone else could either.

The Turing test isn't obsolete because computers are too good at mimicking humans, it's obsolete because humans are too good at being programmed like machines.
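For anyone who hasn't run into one, a Markov chain text generator is about the dumbest language model there is: it records which word followed which in its training text and then walks those transitions at random, with no grammar and no understanding. A minimal sketch (Python; the toy corpus below is invented for illustration, not actual broadcast transcripts) might look like this:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=15):
    """Walk the chain, picking each next word at random from its observed followers."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy corpus, invented for illustration.
corpus = (
    "this is a threat to our democracy and a threat to our institutions "
    "experts agree we must protect our institutions because experts agree "
    "this is unprecedented and this is a threat"
)
print(generate(build_chain(corpus), "this"))
```

Thirty lines of word-frequency lookup, and the output still scans like a take you've already read a hundred times, which is the commenter's point.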
 