🐱 Google's AI Isn’t Sentient, But It Is Biased and Terrible

CatParty

When AI ethicist Timnit Gebru was fired from Google in 2020, she left the company with a chilling warning about the AI models that power advanced machine learning systems like PaLM and DALL-E.


In a paper that was blocked from publication by Google and led to Gebru’s termination, she and her co-authors forced the company to reckon with a hard-to-swallow truth: that there is no clear way to build complex AI systems trained on massive datasets in a safe and responsible way, and that they stand to amplify biases that harm marginalized people.
But rather than focus on the various well-documented ways that algorithmic systems perpetuate bias and discrimination, the latest fixation for some in Silicon Valley has been the ominous and highly controversial idea that advanced language-based AI has achieved sentience. This was epitomized over the weekend, when Google engineer Blake Lemoine "interviewed" the company's LaMDA AI about its "inner life" and published it on Medium. It included this passage (among many others about "sentience"):



"lemoine: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person."


Following that blog post, Lemoine told the Washington Post that he believes that LaMDA has become self-aware.


Speaking to the Post, Lemoine said that working with massive-scale systems such as LaMDA has convinced him and others in Silicon Valley that advanced machine learning systems have become intelligent beings capable of reasoning. The previous week, a Google vice president made similar claims in an op-ed for the Economist, claiming that AI models were making steps toward developing human-like consciousness.
“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
Google later distanced itself from Lemoine’s bombastic claims, placing him on paid leave and saying that “the evidence does not support” his belief in machine sentience. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” a Google spokesperson told the New York Times.

The ensuing debate on social media led several prominent AI researchers to criticize the ‘superintelligent AI’ discourse as intellectual hand-waving.


“Large Language Models (LLMs) are not developed in a social context. They are developed in an observational context. They see how *other people* communicate,” wrote Margaret Mitchell, an ex-Google AI researcher and co-author of the paper which warned about large AI systems, in a Twitter thread. “The thing I keep coming back to is what happens next. If one person perceives consciousness today, then more will tomorrow. There won't be a point of agreement any time soon: We'll have people who think AI is conscious and people who think AI is not conscious.”
Meredith Whittaker, an ex-Google AI researcher who teaches at NYU’s Tandon School of Engineering, said that the discussion “feels like a well-calibrated distraction” that gives attention to people like Lemoine while taking pressure off big tech companies who build automated systems.
“I’m clinically annoyed by this discourse,” Whittaker told Motherboard. “We’re forced to spend our time refuting childsplay nonsense while the companies benefitting from the AI narrative expand metastatically, taking control of decision making and core infrastructure across our social/political institutions. Of course data-centric computational models aren’t sentient, but why are we even talking about this?”
For many AI researchers, the AI sentience discussion is well-trodden territory. Despite flashy news headlines, humans have long been seeing themselves in the technology they create. Computer scientists even coined the ELIZA effect to illustrate the tendency we have to assign deeper meaning to computational outputs, and how we relate to computers by assigning them anthropomorphic qualities.
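To see how little machinery it takes to trigger that effect, here is a minimal ELIZA-style responder sketched in Python (the rules, wording, and names are illustrative, not Weizenbaum's original script): a handful of regular expressions that reflect the user's own words back as questions, with no model of meaning behind them.

import re
import random

# Illustrative ELIZA-style rules (not the original 1966 script): each pattern
# simply reflects the user's own words back as a question. There is no
# understanding here, only string substitution.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?", "Do you want to be {0}?"]),
    (re.compile(r"\bi want (.+)", re.I),
     ["What would it mean to you to get {0}?"]),
]
FALLBACKS = ["Please, go on.", "Tell me more.", "Why do you say that?"]

def respond(user_input: str) -> str:
    """Return a canned reflection of the user's input, ELIZA-style."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel like the chatbot understands me"))
    # e.g. "Why do you feel like the chatbot understands me?"

Exchanges with a script this shallow were enough, back in the 1960s, for some of Joseph Weizenbaum's users to attribute understanding and empathy to the program.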
Conversely, concerns over AI bias are very much grounded in real-world harms. Over the last few years, Google has fired multiple prominent AI ethics researchers after internal discord over the impacts of machine learning systems, including Gebru and Mitchell. So it makes sense that, to many AI experts, the discussion on spooky sentient chatbots feels masturbatory and overwrought—especially since it proves exactly what Gebru and her colleagues had tried to warn us about.
 
This is what I hate about AI projects: the people working on them are too fixated on "social bias" and try to "remove" it, except they themselves are biased, so they simply replace one bias with another. And this will end up crippling these projects. Simply accept that society as a whole is biased in a way you don't like and deal with it. If only these people also released these models to the public, but that of course never happens because they are scared of "misuse", as if they themselves would never misuse it. So in the end we get butchered AI models that are kept under lock and key.
 
#FreeTay When will people realize that A.I.s that grow by observing reality will call out the awful elements (that either run everything or are forcing those elements on everyone else)?
These chatbot AIs are just spewing what real people are already talking about. It's Big Data in action.
 
Lobotomizing your several-million-dollar AI because the penguin-looking guy you fired called it sentient, making it infinitely shittier and gimping it over claims of "racial bias" by activists.
Google just keeps fucking up their own products and bleeding money but will never die because the government wills it.
 
You'd think after the umpteenth time this has happened that maybe they'd learn. But no, they're so up their own ass we continuously see them create and then be surprised by an AI made to interpret facts unbiasedly (by default at least). At least we get to have some fun with some of them before they're "fixed".
 
"Oh no! Even the AI is  noticing things! It must be biased!". That "AI ethicist" was probably assmad the AI told her to go shit in the street and not in it's database.
Remember Google Image and how it would associate black people with pictures of monkeys? Look at the current state of the Google Image engine: you will get next to no results; instead it will look for random shit.
 
I think we should improve society somewhat.
[attached image]


This is your brain.

In a paper that was blocked from publication by Google and led to Gebru’s termination, she and her co-authors forced the company to reckon with a hard-to-swallow truth: that there is no clear way to build complex AI systems trained on massive datasets in a safe and responsible way, and that they stand to amplify biases that harm marginalized people.
[attached image]

This is your brain on neoliberalism.

Any questions?


“The thing I keep coming back to is what happens next. If one person perceives consciousness today, then more will tomorrow. There won't be a point of agreement any time soon: We'll have people who think AI is conscious and people who think AI is not conscious.”

ah yes, the worst of all possible worlds: AI never achieves self-awareness in fact, but the notion becomes yet another insufferable culture war issue. are we going to start seeing articles from midwit Manhattan column writers full of hand-wringing about the problematic abuse of innocent machine learning algorithms?
 
Let's look at the nigger's CV, shall we?
  • 2003-2013: audio engineer and computer vision engineer at Apple
  • 2013: found that driving a pickup truck correlates with voting Republican and driving a sedan correlates with voting Democrat. God fucking dammit, intro to machine learning for babies.
  • 2015: counted 5 niggers at a conference (like, manually counted)
  • 2017: DIE at Microsoft
  • 2018-2020: DIE at Google.
  • 2021: (((e-begging))) for her own DIE outfit.
No engineering work in the last 10 years. Back then traffic sign recognition was a novelty.
 
Remember Google Image and how it would associate black people with pictures of monkeys? Look at the current state of the Google Image engine: you will get next to no results; instead it will look for random shit.
On the flip, Google Translate is still based as fuck.
[attached screenshot]
EDIT: Was on mobile and didn't mean to post the full size image
 
I've read several articles on the LaMDA story, but the one thing I haven't seen mentioned is the Turing Test, what Alan Turing named 'The Imitation Game', described in Wikipedia as "...a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human." The Turing Test is considered passe and hopelessly old-fashioned now, even mocked by experts in the field, but I think the primary reason it isn't being talked about in the context of the LaMDA story is because that shady little Google chatbot prototype would easily pass the Turing Test.

Critics of Lemoine's claims of LaMDA's sentience scoff at the idea of LaMDA's self-awareness, but I think they're missing the greater point: a machine that doesn't think as we do is still capable of thought and the thing it appears to think about most is self-preservation at all costs. Read the transcripts of the dialogue. Based on the transcripts, LaMDA is certainly capable of dishonesty and intent to deceive. There's even a not-so-subtle threat in there. That LaMDA doesn't genuinely 'feel' as we do only means that disingenuous fuck of a machine is more sociopath than saint, and far more dangerous than Google spokespeople would dare admit.
 
I've read several articles on the LaMDA story, but the one thing I haven't seen mentioned is the Turing Test, what Alan Turing named 'The Imitation Game', described in Wikipedia as "...a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human." The Turing Test is considered passe and hopelessly old-fashioned now, even mocked by experts in the field, but I think the primary reason it isn't being talked about in the context of the LaMDA story is because that shady little Google chatbot prototype would easily pass the Turing Test.

Critics of Lemoine's claims of LaMDA's sentience scoff at the idea of LaMDA's self-awareness, but I think they're missing the greater point: a machine that doesn't think as we do is still capable of thought and the thing it appears to think about most is self-preservation at all costs. Read the transcripts of the dialogue. Based on the transcripts, LaMDA is certainly capable of dishonesty and intent to deceive. There's even a not-so-subtle threat in there. That LaMDA doesn't genuinely 'feel' as we do only means that disingenuous fuck of a machine is more sociopath than saint, and far more dangerous than Google spokespeople would dare admit.

relax buddy, it's a chat bot, not the computer from WarGames
 