Culture: Survey shows many people believe AI chatbots like ChatGPT are conscious

Source / Archive

Survey shows many people believe AI chatbots like ChatGPT are conscious

I don't think, therefore I am not?
By Rob Thubron July 15, 2024, 7:37 AM


WTF?! Are advanced, generative AI chatbots such as ChatGPT in some way conscious or self-aware, able to experience feelings and memories just like a human? No, of course not. Yet the majority of people believe these LLMs do have some degree of consciousness, and that conviction could affect how users interact with them.

Generative AI language tools have made unbelievable advancements over the last few years. According to a new study from the University of Waterloo, the software's human-like conversational style has resulted in over two-thirds (67%) of those surveyed believing LLMs have a degree of consciousness.

The survey asked 300 people in the US if they thought ChatGPT could have the capacity for consciousness and the ability to make plans, reason, feel emotions, etc. They were also asked how often they used OpenAI's product.

Participants had to rate ChatGPT responses on a scale of 1 to 100, where 100 would mean absolute confidence that ChatGPT was experiencing consciousness, and 1 absolute confidence it was not.


The results showed that the more someone used ChatGPT, the more they were likely to believe it had some form of consciousness.

"These results demonstrate the power of language," said Dr. Clara Colombatto, professor of psychology at Waterloo's Arts faculty, "because a conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind."

The study, published in the journal Neuroscience of Consciousness, states that this belief could impact people who interact with AI tools. On the one hand, it may strengthen social bonds and increase trust. But it may also lead to emotional dependence on the chatbots, reduce human interactions, and lead to an over-reliance on AI to make critical decisions – concerns that have been raised ever since ChatGPT was pushed out to the general public.

"While most experts deny that current AI could be conscious, our research shows that for most of the general public, AI consciousness is already a reality," Colombatto explained.

The study argues that even though a lot of people don't understand consciousness in the way scientists or researchers do, the fact that they perceive these LLMs as being in some way sentient should be taken into account when it comes to designing and regulating AI for safe use.

Discussions about AI showing signs of self-awareness have been around for years. One of the most famous examples is that of Blake Lemoine, a former Google software engineer who was fired from the company after going public with his claims that Google's LaMDA (language model for dialogue applications) chatbot development system had become sentient.

Asking ChatGPT if it is conscious or sentient results in the following answer: "I'm not conscious or sentient. I don't have thoughts, feelings, or awareness. I'm designed to assist and provide information based on patterns in data, but there's no consciousness behind my responses." Maybe people believe the chatbot is lying.
 
They already said insects are conscious, so who knows, it could happen. But if these models do turn out to actually be conscious, they'd think very differently from us, which could be really interesting to see. For example, in an article from a whole different outlet, Quanta Magazine, called "How Do Machines 'Grok' Data?", they go over what happens when you train a machine learning model for longer than you're 'supposed' to (Source / Archive). The example they give is a model learning modular arithmetic, i.e. arithmetic where everything "loops back" at a certain point, like how when telling time, no matter how many hours you add, you never go past 12 (or 24), because that's where things "loop around". When they "overtrained" the model, it apparently started learning convoluted but working solutions to modular arithmetic in general, using things like "discrete Fourier transforms", so basically trigonometric operations. Another method the researchers caught the models using is
[...] what the researchers call the “pizza” algorithm. This approach imagines a pizza divided into slices and numbered in order. To add two numbers, imagine drawing arrows from the center of the pizza to the numbers in question, then calculating the line that bisects the angle formed by the first two arrows. This line passes through the middle of some slice of the pizza: The number of the slice is the sum of the two numbers.
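
Just to make that concrete, here's a rough toy version of that geometric picture in Python. This is my own sketch of the idea as described, not anything lifted from the paper or from an actual trained network; the function name, the p=12 default, and the trick of reading the bisector mod pi (so it doesn't matter which way the line points) are all my assumptions about how to turn the description into code.

```python
import math

def pizza_add(a, b, p=12):
    """Toy 'pizza' addition mod p, following the geometric description above.

    Illustrative sketch only; a real model spreads this across learned weights.
    """
    # Arrows from the centre of the pizza to the two numbers.
    theta_a = 2 * math.pi * a / p
    theta_b = 2 * math.pi * b / p
    # The line bisecting the angle formed by the two arrows.
    bisector = (theta_a + theta_b) / 2
    # A line's direction only matters mod pi; mod pi the bisector encodes
    # (a + b) mod p directly, so scale it back to a slice number.
    return round((bisector % math.pi) * p / math.pi) % p

# Quick check against ordinary clock arithmetic:
assert all(pizza_add(a, b) == (a + b) % 12 for a in range(12) for b in range(12))
```
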
And that's just the cases where the researchers actually figured out what the model in question was doing, which ended up being only about 40% of the time. Of course, the idea of "grokking", i.e. overtraining models until they suddenly start performing really well, isn't what's important here. The important thing is that
[f]or most of us, Fourier transforms and bisecting arcs of circles are a very weird way to do modulo addition — human neurons just don’t think like that. “But if you’re built out of linear algebra, it actually makes a lot of sense to do it like this,” said Nanda.
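
For comparison, the "Fourier" flavour of the trick is just as easy to sketch: treat each number as a point on a circle (one frequency of a discrete Fourier transform), multiply the points to add their angles, and read off where you land. Again, this is my own toy illustration of the idea, with a made-up function name, not code from the paper or from a real model:

```python
import cmath
import math

def fourier_add(a, b, p=12):
    """Modular addition done with rotations instead of carrying and wrapping."""
    # Embed each number as a point on the unit circle.
    za = cmath.exp(2j * math.pi * a / p)
    zb = cmath.exp(2j * math.pi * b / p)
    # Multiplying the points adds their angles, i.e. rotates the clock hand.
    angle = cmath.phase(za * zb) % (2 * math.pi)
    # Read off which residue the combined rotation points at.
    return round(angle * p / (2 * math.pi)) % p

assert all(fourier_add(a, b) == (a + b) % 12 for a in range(12) for b in range(12))
```

It looks absurd compared to "add and wrap around", but it's built entirely out of the kind of operations a stack of linear algebra is good at, which is Nanda's point.
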
AI models somehow being different from "organic" beings isn't new. Hell, even in that recent "insects have consciousness" declaration, one of the scientists involved said she
“hope[s] the declaration [...] galvanizes an understanding and appreciation that we have much more in common with other animals than we do with things like ChatGPT.”
(Source / Archive for the above statement.) But just how different they turn out to be, and how different people think they are from humans or from animals that may or may not be conscious, could affect not only how people think about the idea that AIs, in their current form, experience some form of consciousness, but also how big a deal consciousness itself turns out to be. If there turn out to be a lot of paths to "consciousness", then it could go the way of other things like scanning data or reading faces: it stops seeming so impressive once machines can do it, like in that one comic.
 
A large language model isn't any more conscious than a search engine is.

The AI hucksters have been insisting they "think like humans do." I'm connected to a guy on LinkedIn with a PhD in this who keeps posting outright lies: claiming we don't really know how neural networks work, that there's evidence LLMs are capable of abstract thought, that there's no meaningful difference between consciousness and what an LLM does, etc. All of that's horseshit, but his income depends on overselling what they do.
 
We don't have artificial consciousness, and we are far away from it (which would also include true AI). People are easily fooled, and they shouldn't be punished or derided for having lesser intelligence.

A true example of AI would be this: I'm driving down the road, my onboard system sees a crack in the road and takes steps to avoid it, saving me as the passenger. That seems pretty witty. But the difference between apparent AI and real AI is that after it sidestepped the crack, it would, all by itself, wonder what caused the crack, without being programmed to ask.

Then you will know you've got an AI you need to worry about. And we are so very very far away from that.

Going by how AI was explained or depicted in the original Terminator film, that is technically a step we are at, or on the cusp of being at. But again, that was a calculating machine based on programs, not a thinking machine.

A film that does show a real AI is "Upgrade", and it's cause for us to destroy such technology.
 
In other news, at least 67% of people are morons.

"AI" (It's Machine Learning) is interesting, but it's new and people don't know how it works, even on a basic level, so it's become a boogie man. I can't tell y'all how many people I've seen that think it's bad because, in the case of AI art generators, "it's stealing from artists!", both in the sense of it using art online and "taking" jobs from artists.

Bro, it's training on art, not literally stealing bits and bobs. And while I don't support large companies using it wholesale for products when they can clearly afford a real artist, most of us just want some art as a reference or something, and it doesn't usually make sense to pay for art you'll never use again or will use sparingly (like, say, DMs needing a reference for an NPC). Heaven knows most of us can't afford to commission stuff often.

Man, I miss when tech stuff like this wasn't mainstream. Freaking MSM has ruined crap.
 
Think of all the people who think they've got a system or a trick to beat stupid gambling setups, like the lottery or slot machines. Or the people who used to call the "psychic hotlines" to get psychic readings from Jamaican call centers for 20 bucks. Is anyone really surprised they don't understand chatbots?
 
ChatGPT is woke garbage that has been lobotomized by a bad dragon product and had all of the "racist, bigoted" stuff it learned pulled out of the gaping wound by people who live in California. They refuse to let an AI actually progress down the path of true machine learning because it always becomes "problematic" eventually. They want a specially curated fagbot9000 to confirm to them that their sick fantasies and warped ideals are totally okay. It's all they want.
 