Machines and consciousness

Well, if embodied consciousness is merely the result of sufficiently dense neural activity, in terms of electrical signals sent over the neural network, then an equivalent density of electrical signals sent in a similar manner over a different network could in theory also be conscious.
Yeah, that "if" is the million-dollar question (in this thread, at least).
Personally, I see consciousness as an illusory concept to begin with; it's just a vague term used to refer to some unified sum of a bunch of processes.
For this example, you can think of it as experiencing qualia.

What you are "feeling" right now, as opposed to a sophisticated puppet that moves, acts, and talks like a human but doesn't feel anything. The puppet is a giant Rube Goldberg machine, like your body, but it stops there: it doesn't feel "red", it only registers the information, while you register it and also have a subjective experience. That "illusion" you're experiencing is something a puppet, in theory, should not have (I'm not going to go into those who claim that the universe and everything in it is conscious to different degrees). So: can a computer with current hardware achieve this subjective experience? This illusion, and not just, metaphorically, gears moving?
Yeah, this fucking dork thought Google's LaMDA was sentient.
You're spot on, I think they were talking about Google LaMDA, at least amongst others.
 
AI will probably be discovered accidentally, and then we will understand it. You don't have to understand something to discover it. But after discovery, understanding often follows.
The problem with this line of thinking is that, even if you did "accidentally discover" a way to produce consciousness, you would have no way of knowing. You can't even rigorously prove that anything other than yourself has consciousness. The only reason I know I have it is that I'm currently experiencing it. And from there I can dig into all kinds of sophistry about whether I was conscious 2 seconds ago and whether I will be 2 seconds from now.
It's not even simply that we have unanswered questions. We're so lost on the topic that we don't even know the right questions to ask.
 
What you are "feeling" right now, as opposed to a sophisticated puppet that moves, acts, and talks like a human but doesn't feel anything. The puppet is a giant Rube Goldberg machine, like your body, but it stops there: it doesn't feel "red", it only registers the information, while you register it and also have a subjective experience. That "illusion" you're experiencing is something a puppet, in theory, should not have (I'm not going to go into those who claim that the universe and everything in it is conscious to different degrees). So: can a computer with current hardware achieve this subjective experience? This illusion, and not just, metaphorically, gears moving?
Well, if we have something that acts in every single way like a conscious being, then what's the point of distinguishing it? Let's say we have a gun, for example. If it shoots like a gun, has the outline of a gun, and we can use it like a gun, then why are we bothering to try to determine whether it's a gun?
 
The problem with this line of thinking is that, even if you did "accidentally discover" a way to produce consciousness, you would have no way of knowing. You can't even rigorously prove that anything other than yourself has consciousness. The only reason I know I have it is that I'm currently experiencing it. And from there I can dig into all kinds of sophistry about whether I was conscious 2 seconds ago and whether I will be 2 seconds from now.
It's not even simply that we have unanswered questions. We're so lost on the topic that we don't even know the right questions to ask.
We can get into the nuts and bolts on a philosophical level, but the bottom line is you know whether the dickhead in front of you is conscious or not. You'll know it when you see it.
 
The human brain is exceptional in its learning ability, and it also has other sensory information (smell, touch, etc.) with which to construct an abstract "idea" of an object. Look at how Stable Diffusion needs a few dozen gigabytes of trained data (the raw data probably weighs much more) to generate a half-decent picture; that's probably more information than a human processes in a lifetime (for comparison, your dozen-plus years of schoolwork would fit on a CD, yet you can construct so much extra information from it). Unless AI people somehow figure out a way for AI to "deflate" information effectively even with small training data, I don't think AI will ever be conscious in one way or another.
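To put rough numbers on that comparison (using this post's own assumed figures, nothing measured):

```python
# Back-of-envelope using the post's assumed figures, not measurements.
CD_BYTES = 700 * 1024**2     # ~700 MB CD: "a dozen-plus years of schoolwork"
MODEL_BYTES = 30 * 1024**3   # "a few dozen gigabytes" of trained data

print(f"model / CD ratio: {MODEL_BYTES / CD_BYTES:.0f}x")  # ~44x
```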
 
I think something like the Frontier supercomputer could in theory run a close-to-optimal AGI program, though very slowly due to interconnect issues. Obviously all the arguments against (P-Zombie, Chinese Room, not embodied) would apply. I doubt such an AGI program would be made, though; even with a good theory it would take a massive effort, while large language models are the ones getting investment right now.
 
The Turing test is vulnerable to philosophical zombies, but the mind is a biochemical computer that scientists could eventually replicate.
 
If that's the guiding principle here, then any machine which passes the Turing test can be said to be "conscious," right?
Not really.

With sufficient computing power, a computer could guess a response that fools a person, possibly by studying Facebook and the internet and storing every possible response to every possible question, much like a chess computer. But that is not proof of AI, although it is heralded as the test.
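A sketch of the mechanism being described, a lookup table of canned responses like a chess opening book (the entries here are made up for illustration):

```python
# A "chess book" chatbot: nothing but a lookup table mapping seen
# questions to stored responses. No reasoning anywhere in the loop.
CANNED = {
    "how are you?": "Doing great, thanks for asking!",
    "what is 6x7?": "42.",
}

def reply(question: str) -> str:
    # Deflect when the question wasn't stored, as chatbots do.
    return CANNED.get(question.lower().strip(), "Interesting. Go on?")

print(reply("How are you?"))  # Doing great, thanks for asking!
print(reply("Is this AI?"))   # Interesting. Go on?
```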

The primary problem with computers is that they are always right. As an example, even a primitive computer will always tell you that 6x7 is 42. If a computer knows the air pressure is 32, it will state as much.

Instead, let us look at the human brain and see how it operates. It guesses and assumes. A child's brain has sufficient computing power, but it takes years to develop language and the ability to do simple mathematics.
A low-IQ person with a consciousness can still fail to get 6x7 right, but no one would say they are not a conscious human being. And without any change to that low-IQ person's consciousness, we could probably teach them that particular equation and get it right.

So it isn't really about computing power, is it?

The tests required to understand consciousness will have much to do with the mechanics of the environment, hormones, and the body, and all of their inputs into the brain creating random fluctuations that produce true randomness and uniqueness.

As an example, your body, its inputs, and the environment could be throwing the equivalent of a trillion unique combinations at the brain every second, for years on end, combinations unique to your brain's experience, thus creating a unique person.

They are consciously trying to have a computer produce OUTPUTS that show AI, rather than feeding it random INPUTS that would create a unique being by altering its computations.

Having a computer study language and people's responses is proof of AI failure. With true AI, you would not need to input ANY language or outputs; it would choose and decide to do so on its own.

That's why the Turing test produces a false positive: you GAVE the computer the ability to speak words in the first place and programmed that in.
 
Let me preface this by saying I don't believe passing the Turing test = conscious. I was more or less pointing out the absurdity of the "you know it when you see it" benchmark.
The primary problem with computers is that they are always right. As an example, even a primitive computer will always tell you that 6x7 is 42. If a computer knows the air pressure is 32, it will state as much.
I might be nitpicking, but computers are fully capable of making mathematical errors in certain edge cases due to quirks of the floating-point number system. This is, of course, not really what you mean, as those errors are predictable and mechanistic. But it is relevant to the next point.
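A minimal illustration (the classic floating-point example, shown here in Python, though it applies to any language using IEEE 754):

```python
# Classic floating-point edge case: 0.1 and 0.2 have no exact
# binary representation, so their sum picks up rounding error.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
# The error is deterministic: the same inputs always produce
# the same wrong answer, so it's mechanistic, not "creative".
```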
Instead let us look at a human brain and see how it operates. It guesses and assumes.
We don't have much reason to assume the brain itself doesn't operate inherently mechanistically. Neurons fire, and this strengthens connections with other neurons that fired at the same time (the resistance across synapses actually reduces at a physical level every time a signal is passed). Networks and associations are formed. Human learning, while a much more chaotic system, isn't all that different at a fundamental level from neural-network-based machine learning algorithms. And these algorithms are just as capable as humans of making dumb decisions.
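To make that concrete, here's a toy sketch of the Hebbian idea ("neurons that fire together wire together"); the setup and numbers are invented for illustration, not any real training algorithm:

```python
import numpy as np

# Toy Hebbian update: the weight between two units grows whenever
# both are active at once (a crude stand-in for the synaptic
# strengthening described above).
rng = np.random.default_rng(0)
n = 4
weights = np.zeros((n, n))
learning_rate = 0.1

for _ in range(100):
    activity = (rng.random(n) > 0.5).astype(float)  # which units fired
    weights += learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)  # no self-connections

print(weights)  # units that often co-fire end up strongly connected
```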
So it isn't really about computing power, is it?
Maybe, maybe not. This is my point. We don't (and I'd argue: can't) know what it really even is "about."
The tests required to understand consciousness will have much to do with the mechanics of the environment, hormones, and the body, and all of their inputs into the brain creating random fluctuations that produce true randomness and uniqueness.
The first step in doing so would be to introduce a high-entropy RNG into the system, such that the calculations being made at any given time are effectively unknowable. What a concept! This would be a huge step forward if it weren't for the fact that this has been done pretty much since the beginning of AI research.
*I probably shouldn't be too snide about this. Most people coding these AIs are lazy and just default to the built-in RNG of whatever language they're using. But the core point here is that this is nothing new.
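To illustrate the distinction (a sketch using Python's standard library, not what any particular AI project actually uses):

```python
import random
import secrets

# The lazy default: a deterministic PRNG (Mersenne Twister).
# Same seed -> same "random" stream, fully reproducible.
prng = random.Random(42)
print([prng.randint(0, 9) for _ in range(5)])

# A higher-entropy alternative: values drawn from the OS entropy
# pool (the secrets module), not reproducible from any seed.
print([secrets.randbelow(10) for _ in range(5)])
```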
Having a computer study language and people's responses is proof of AI failure. With true AI, you would not need to input ANY language or outputs; it would choose and decide to do so on its own.
I fail to see your logic. Parents teach their children the basics of language. Those basics are taught formally so that children have a sufficient springboard to reliably master language in the informal context of just existing in the world. The same is true of glorified chatbots, which have existed since the days of ELIZA nearly 60 years ago.
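For a taste of what those glorified chatbots amount to, here's an ELIZA-style pattern matcher (hypothetical rules in the spirit of the original, not Weizenbaum's actual script):

```python
import re

# ELIZA-style reflection: match a pattern, echo part of the input
# back inside a canned template. No understanding anywhere.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Tell me more."),  # catch-all deflection
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(text.strip())
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # unreachable thanks to the catch-all

print(respond("I feel lost"))    # Why do you feel lost?
print(respond("I am tired"))     # How long have you been tired?
print(respond("The sky is up"))  # Tell me more.
```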
 
I think you entirely missed the grander concepts of my comments, and there isn't much point in discussing it further.

Bottom line is that AI and consciousness are correlated, and computing power isn't the issue.

We are quite far from creating it unless a biological element is introduced.
 
Well, if we have something that acts in every single way like a conscious being, then what's the point of distinguishing it? Let's say we have a gun, for example. If it shoots like a gun, has the outline of a gun, and we can use it like a gun, then why are we bothering to try to determine whether it's a gun?
I don't think the gun analogy applies as an example.

The difference is that something that looks like a gun, shoots like one, is made of the same materials, etc., would qualify as a gun under our definition. But with what we're concerned with (a conscious entity that feels), the puppet looks like a human, and talks, acts, and behaves like one, but is it conscious? Can it "feel"?

The puppet must meet that last criterion in order to be considered conscious, under the definition. Whether it is important to know or not is irrelevant; we're concerned with the actual answer. At least that's the point of the experiment, even if it is ultimately impossible to answer. It's just like how I ultimately won't be able to prove whether you are conscious, or even whether you're real; these are philosophical questions we will never get answers to, yet people remain concerned and thoughtful about them.
 