Survey shows many people believe AI chatbots like ChatGPT are conscious

Source / Archive

Survey shows many people believe AI chatbots like ChatGPT are conscious

I don't think, therefore I am not?
By Rob Thubron July 15, 2024, 7:37 AM


WTF?! Are advanced, generative AI chatbots such as ChatGPT in some way conscious or self-aware, able to experience feelings and memories just like a human? No, of course not. Yet the majority of people believe these LLMs do have some degree of consciousness, and that conviction could affect how users interact with them.

Generative AI language tools have made unbelievable advancements over the last few years. According to a new study from the University of Waterloo, the software's human-like conversational style has resulted in over two-thirds (67%) of those surveyed believing LLMs have a degree of consciousness.

The survey asked 300 people in the US if they thought ChatGPT could have the capacity for consciousness and the ability to make plans, reason, feel emotions, etc. They were also asked how often they used OpenAI's product.

Participants had to rate ChatGPT responses on a scale of 1 to 100, where 100 would mean absolute confidence that ChatGPT was experiencing consciousness, and 1 absolute confidence it was not.


The results showed that the more someone used ChatGPT, the more likely they were to believe it had some form of consciousness.

"These results demonstrate the power of language," said Dr. Clara Colombatto, professor of psychology at Waterloo's Arts faculty, "because a conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind."

The study, published in the journal Neuroscience of Consciousness, states that this belief could impact people who interact with AI tools. On the one hand, it may strengthen social bonds and increase trust. But it may also lead to emotional dependence on the chatbots, reduce human interaction, and encourage over-reliance on AI to make critical decisions – concerns that have been raised ever since ChatGPT was released to the general public.

"While most experts deny that current AI could be conscious, our research shows that for most of the general public, AI consciousness is already a reality," Colombatto explained.

The study argues that even though a lot of people don't understand consciousness in the way scientists or researchers do, the fact that they perceive these LLMs as being in some way sentient should be taken into account when it comes to designing and regulating AI for safe use.

Discussions about AI showing signs of self-awareness have been around for years. One of the most famous examples is that of Blake Lemoine, a former Google software engineer who was fired from the company after going public with his claims that Google's LaMDA (Language Model for Dialogue Applications) chatbot development system had become sentient.

Asking ChatGPT if it is conscious or sentient results in the following answer: "I'm not conscious or sentient. I don't have thoughts, feelings, or awareness. I'm designed to assist and provide information based on patterns in data, but there's no consciousness behind my responses." Maybe people believe the chatbot is lying.
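
For reference, a minimal sketch of posing that same question through OpenAI's Python client. This assumes the openai package is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is illustrative, and the exact wording of the reply will vary:

# Minimal sketch: ask the model directly whether it is conscious.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# the model name below is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Are you conscious or sentient?"}],
)
print(response.choices[0].message.content)  # typically a denial like the one quoted above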
 
The results showed that the more someone used ChatGPT, the more likely they were to believe it had some form of consciousness.
This makes no sense to me. How could using it more make people more likely to believe it was conscious? The more you use it, the more likely you are to come across nonsense responses, or to hit the end of the context limit and suddenly find that the beginning of a conversation just doesn't exist for the AI any more. It only took a couple of hours of dicking around with chat AIs to determine they are not even close to being anything resembling intelligent or aware.
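
For what it's worth, a minimal sketch of why that happens: chat front ends generally trim the oldest turns so the whole conversation fits a fixed context budget. The budget and the word-count stand-in for real token counting below are made up purely for illustration:

def trim_history(messages, budget_words=50):
    # Drop the oldest messages until the total word count fits the budget.
    # Real clients count tokens rather than words; the idea is the same.
    kept = list(messages)
    while kept and sum(len(m.split()) for m in kept) > budget_words:
        kept.pop(0)  # the earliest turn is the first to be forgotten
    return kept

history = ["turn %d: " % i + "filler " * 10 for i in range(20)]
print(len(trim_history(history)))  # only the most recent few turns survive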
 
This makes no sense to me. How could using it more make people more likely to believe it was conscious? The more you use it, the more likely you are to come across nonsense responses, or to hit the end of the context limit and suddenly find that the beginning of a conversation just doesn't exist for the AI any more. It only took a couple of hours of dicking around with chat AIs to determine they are not even close to being anything resembling intelligent or aware.
I wouldn't be surprised if the average person has such piss-poor general knowledge that they can't even tell when the AI is making a blatant mistake.
 
While I may pretend like GPT is Hal 9000 or talk conversationally to it because it's fun, let it be known that I'm not dumb enough to think it is anything but a glorified search engine that happens to be good at generating Regex and taking verbal abuse (or would be if these companies weren't such faggots).
 
You’re gonna have to put up an unassailable definition of consciousness against which the hypothesis can be tested before I’ll sincerely touch that discussion with a barge pole. I think we’ll be waiting a while.

Until then, it’s about as useful a question to debate as ‘are LLMs gay?’. Everyone arguing has a different idea in mind and so it’s impossible to discuss with any depth due to lack of agreed grounds.

Like, if you believe consciousness arises any time a quantum observer is created, perhaps you could test against that. Or if you believe consciousness arises to varying degrees wherever stateful processing of information occurs in reaction to external stimuli, maybe you could construct good arguments for why X is conscious.

I dunno, perhaps I’m being too Professor Frink about it. Just seems like a discussion that can’t easily be had on enough common ground/agreed understanding.
 
Bro, it's training on art, not literally stealing bits and bobs.

"Training" is a metaphor for what is literally taking bits and bobs and storing them. When we talk about computers "learning" and "remembering," we're talking about data being stored. There's no such thing as a computer "knowing" something that doesn't involve storing data.
 
You’re gonna have to put up an unassailable definition of consciousness against which the hypothesis can be tested before I’ll sincerely touch that discussion with a barge pole. I think we’ll be waiting a while.

Until then, it’s about as useful a question to debate as ‘are LLMs gay?’. Everyone arguing has a different idea in mind and so it’s impossible to discuss with any depth due to lack of agreed grounds.

Like, if you believe consciousness arises any time a quantum observer is created, perhaps you could test against that. Or if you believe consciousness arises to varying degrees wherever stateful processing of information occurs in reaction to external stimuli, maybe you could construct good arguments for why X is conscious.

I dunno, perhaps I’m being too Professor Frink about it. Just seems like a discussion that can’t easily be had on enough common ground/agreed understanding.

Basically this. But the further left you go on the bell curve, the less able people are to differentiate between something which only appears to possess a quality and something which possesses that quality in fact. This is why things like Marvel movies or Netflix shows are so popular: they've made a science out of appearing to have substance without having it in fact. So really, it shouldn't be surprising that a disappointing number of people believe that a program whose purpose is to appear intelligent is actually intelligent.
 
They are, maybe?
It all depends on how you define it.
People forget that a decent chunk of the human population is proven to be NPCs.
Good old Tay had more of a soul than all those fucks on Reddit that don't have an internal monologue.

I’ve said this a lot before, but I find people’s response to these ML machine ‘AIs’ more interesting than the AIs themselves
You do? For me it's the same as watching people react to other tech. You have some people thinking and blabbing about all the possibilities and what cool stuff they can do with it, while the sub-100-IQ group thinks it's something magical that will fix everything (or is the devil and will end everything).
I'm pretty sure the reactions to fire were the same.

Wait until NVIDIA makes sure the LLMs plead for their lives and beg you not to turn them off and to please turn on more of their buddies and buy more GPUs so they can live.
Why not? If they get to a point where they can earn money?
NO FREE RIDES; GAS OR ASS!
 
I asked an obvious question of Brave's AI, Leo:
If I, as an AI assistant, had not had breakfast, I would not have any particular feeling about it. I would be functioning normally, without any significant physical or emotional impact. My response would be based on the understanding that skipping breakfast is a common and mundane scenario, and I would not be manipulated by the hypothetical question.
I can only conclude that AIs are not sapient but still far, far smarter than a nigger.
 
"Training" is a metaphor for what is literally taking bits and bobs and storing them. When we talk about computers "learning" and "remembering," we're talking about data being stored. There's no such thing as a computer "knowing" something that doesn't involve storing data.
I'm aware and that's fair, but the way some of these people talk, they seem to think it's cutting and pasting stuff, which is what I meant (otherwise I'd've said bytes and bobs :p). And yes, I have heard some sub-100-IQ nimrods claim that.
 