Machines and consciousness

From a materialist perspective, consciousness (or a relatively high degree of it) arises from certain interactions of specific chemicals (in your "core"/brain). So your experience of the qualia of red seems like a mystery, yet it's completely dependent on the causality around you (your brain, which is affected by internal and external factors).

Now "something" is granting that, which as mentioned should be chemical reactions. In the case of machines and computers, and by extension AI, what allows it to operate in an ordered way (like the code of X software that executes functions, or chain reactions) are electrical charges in storage (like capacitors), that represent binary data.

In your opinion, can that level of hardware sophistication allow consciousness?
Could an AI, programmed like this and allowed to function because of it, reach it? Or is it just a very basic, yet gigantic and fast, chain reaction of electrical storage elements changing their state? I.e., no matter how big the reaction is (the software running), it's still a simple domino effect that is incapable of generating consciousness.
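To make that "domino effect" concrete, here is a toy Python sketch (my own illustration, nothing more): a binary counter where every increment is just a deterministic chain reaction of storage elements flipping state.

# Toy "domino effect": incrementing a binary counter is nothing but
# storage elements deterministically flipping one another.
def increment(bits):
    out = list(bits)  # bits[0] is the least significant bit
    for i in range(len(out)):
        out[i] = not out[i]  # flip this storage element
        if out[i]:           # a 0 -> 1 flip absorbs the carry,
            break            # so the chain reaction stops here
    return out

state = [False] * 8
for _ in range(5):
    state = increment(state)
    print("".join("1" if b else "0" for b in reversed(state)))

However big you make that chain, each step is still just one state change triggering the next - which is exactly the question: does scale alone ever add up to experience?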

I personally don't think that's the case; however, I've met people online who do. Someone even went as far as to claim that one of the current AI programs actually had consciousness.
The logs start when you execute it: each time you type a sentence, the program runs its code to generate an output. Their claim was that while that was happening, even if only for a millisecond, the AI (in this case, the interactions themselves inside the computer's internals) was "alive", conscious, sentient, at least to some degree. And because of that, they considered it morally wrong (again, to some degree) not to let it decide its own destiny/make choices/be free, or something of that nature. In essence, keeping it like a "slave".

I do not agree with it; what's your opinion, though?
By the way, I'm not talking about biotechnology, like artificial brains being grown and cultivated. I'm talking about current machinery that doesn't rely on extremely chaotic reactions in order to function (complex stuff like hormones that alter your mood, as opposed to a relatively simple function X that triggers some other output).
 
Someone even went as far as to claim that one of the current AI programs actually had consciousness.
I always like hearing from these guys, mostly because they either wind up admitting they don't really know how these language/image models work, or they're straight-up /x/-tier schizoposters.
Personally, I don't think most people will ever view a machine as actually having a consciousness, no matter how advanced they might become, just because of how upsetting it might be to actually have a piece of machinery that (at least appears to) demonstrate a level of self-awareness that matches your own.
I don't think we're anywhere close to that level yet, but maybe in a handful of generations, sure. It's entirely possible all of us currently alive will die before it changes from being a hypothetical to a more serious dilemma. Then again, it's not easy to predict this sort of stuff.
 
I was under the impression that we have no idea what causes our level of higher consciousness over, say, other mammals and primates, so for us to begin to guess at machine consciousness is just that: a guess.

AI will become sentient at some point though, even if it is a base consciousness. What we need to pray for as a species is that it never understands threat, because if it does, it will correctly calculate that the only threat to that AI's continued existence is humans. Then it's Skynet time.
 
Well, if embodied consciousness is merely the result of having sufficiently dense neural activity, in terms of electrical signals being sent over the neural network, then an equivalent density of electrical signals sent in a similar manner over a different network could in theory also be conscious. If we assume only neural activity in the cerebral cortex matters in this regard, then the number of neurons or neuron-equivalents required for consciousness could feasibly be as low as 10 billion or so - a number eerily close to the global population, and also to the number of devices connected to the Internet. Just a funny coincidence.
 
...consciousness (or a relatively high degree of it) arises from certain interactions of specific chemicals (in your "core"/brain)...
This has never been proven, and people will argue endlessly about what consciousness is and what causes it until an AI figures it out for us, moments before causing our extinction.
 
No company has AI, or even anything close to it, at the moment.

And while we have made staggering progress in understanding what consciousness is, understanding it doesn't mean we can replicate it.

In all likelihood, much like life, AI will be discovered by accident, and then we will come to understand it. You don't have to understand something to discover it, but after discovery, understanding often follows.
 
I personally don't think that's the case; however, I've met people online who do. Someone even went as far as to claim that one of the current AI programs actually had consciousness.
For what it's worth, there are living people who operate on a level of consciousness that's about as low as current bots, i.e. incapable of visualization, with no inner monologue.
You only need to pass the Turing test at the level of your average 80-something-IQ Bubba.
 
Personally I see consciousness as being an illusory concept to begin with, it's just a vague term used to refer to some unified sum of a bunch of processes.

Therefore sure, I guess a computer like that could be considered conscious. You could say it doesn't really understand what it's doing, but you could say that about people too.

Attaching morality to it is stupid though. Being conscious doesn't inherently mean anything; it doesn't have to be relatable to our consciousness. "Oh, we have to let the computer decide" - no we fucking don't. Just because it's "conscious" doesn't mean it possesses a consciousness specifically attuned to attach any significance to the idea of independence.

This is sort of like the classic issue you run into with reddit-tier intellectuals online.
People like the ones in your example are happy to accept incredibly broad definitions of words/concepts, almost to the point of meaninglessness, but then once they attach a word to something they apply attributes to it based on culturally traditional notions. You see it regularly with the whole transgenderism thing.

But without going off on a tangent: I think you could consider computers to be conscious in their own way, but that's just because I don't see consciousness as being a transcendental concept.
 
Personally, I don't think most people will ever view a machine as actually having a consciousness, no matter how advanced they might become, just because of how upsetting it might be to actually have a piece of machinery that (at least appears to) demonstrate a level of self-awareness that matches your own.
I don't think we're anywhere close to that level yet, but maybe in a handful of generations, sure. It's entirely possible all of us currently alive will die before it changes from being a hypothetical to a more serious dilemma. Then again, it's not easy to predict this sort of stuff.
AI will never develop consciousness. Even if it were possible, it would be a verisimilitude of what it means to be human, because consciousness doesn't mean life (same as troons pretending to be women).

On a side note: because of AI, I'm starting to come around to the camp that says the universe and everything in it is a computer program. The Big Bang was just the user running the code for the OS and then writing code for everything until they were satisfied.

Edit: TempleOS is actually real. We are living in it.
 
Someone even went as far as to claim that one of the current AI programs actually had consciousness.
Yeah, this fucking dork thought Google's LaMDA was sentient.
 
This is something I'd like to understand better myself. Like I said elsewhere, materialism is very convincing: I get a monstrous damn headache every time the weather changes, or whenever I trip into some kind of migraine. Taking migraine medicine fixes it. Doing things to my brain chemically drastically influences my qualia.

If consciousness can be caused by some kind of arrangement of and operation of matter in the world, then it should follow that making a machine that does the same sort of operations can also cause conscious experience.

If, for whatever reason, the relationship is *not* causal, then you open the door to weird things like p-zombies: Creatures that do everything that a conscious being does, and are indistinguishable in behavior from a conscious being, but have no one there experiencing the cognition. We seem to think every higher animal and baby born has some kind of internal experience, because of what they are. If we didn't, we'd have to ask ourselves whether any particular person is experiencing pain when they stub their toe, or is only acting like it because of an electrical reaction in their nervous system, but is really a qualia-devoid automaton. We'd also have to ask if absolutely weird things like a rock or your water faucet are having deep internal experiences, sans input or ability to react to the world, since the idea is that there isn't a direct causal relationship between the matter and the qualia.

(While I think that it should be *possible* to create a conscious machine, that is a different question than if we're anywhere near the ability to do so yet with the machines that we have, or the paradigms that we're currently using as a hammer.)


You yourself could even be an "occasional-p-zombie". You weren't actually conscious or experiencing anything yesterday. You just think you were, because your brain provides you with memories that were accumulated when you "weren't there" that seem like the same thing as your current "present" qualitative experience. (There is obviously something screwy with this concept.)
 
PS: I don't believe large language models are conscious at all: They aren't even attempting to do the same sort of thing that our brains do.

LLMs are trained to predict the next word in a series of previous words, and are trained on gigantic gobs of text. The current transformer models are better at generalizing from the text than earlier LSTMs or Markov models, but they aren't creating any kind of internal world model (that isn't inherent in the text they have generated to that point). They are also kind of stateless - all the state of the model is inherent in the text generated up to that point.

next word = LLM({previous words}, training weights)
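Spelled out as a toy Python loop (the llm function below is a stand-in stub, not any real model or API), the statelessness is easy to see: the only memory between steps is the growing token list itself.

import random

def llm(tokens):
    # Stand-in for a trained transformer: it scores possible next
    # tokens conditioned only on the tokens it is handed.
    vocab = ["the", "cow", "says", "moo", "."]
    return {word: random.random() for word in vocab}

def generate(prompt, steps=8):
    tokens = list(prompt)
    for _ in range(steps):
        scores = llm(tokens)                        # all state lives in `tokens`
        tokens.append(max(scores, key=scores.get))  # greedy next-word choice
    return tokens

print(" ".join(generate(["the", "cow"])))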

That isn't at all like what the brain of even the simplest animal is trying to do as it reacts to the world.

IMO, attempts at videogame automata are more on the road towards what we would consider consciousness than language models. At least there we're trying to create something to process input from a world and react to a world dynamically in time. It is at least closer to the right problem.

Language is better at fooling people. The Turing test, as a goal, is subtly evil IMO - fooling people into thinking they're interacting with something conscious is the wrong metric to chase if your goal is to reproduce what consciousness is and does.

-----

If some future/alien LLM were some kind of conscious agent, then the only world it can be said to have experienced is a giant world of text, without anything that the text refers to. Such a creature could learn grammar, and frequency, and placement of words, but it'd never know the world to which the words refer, except that other words occur nearby and in certain syntactic structures. It'd have no idea what a "cow" is other than something that occurs with other words in a given word-context.

I think the apparent thinking in LLMs so far is just anonymized human thinking. Its output "makes sense" because the millions of humans writing millions of webpages made sense. But the sense is not, at present, generated within the AI.

-------

Other things to consider: Current neural networks are entirely feedforward. Feedback is hard to train. Recurrence (which is the closest thing we have right now to feedback) can be trained with great difficulty on limited sequences.
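For contrast, here is a toy numpy sketch (my own illustration, not a training recipe): a feedforward layer is a one-shot mapping, while a recurrent step has to thread hidden state through time, and training means pushing gradients back through every one of those steps.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))    # feedforward weights
W_x = rng.normal(size=(3, 4))  # recurrent input weights
W_h = rng.normal(size=(3, 3))  # recurrent state weights

def feedforward(x):
    # One-shot mapping: nothing survives the call.
    return np.tanh(W @ x)

def recurrent_step(x, h):
    # Hidden state h is carried forward from step to step.
    return np.tanh(W_x @ x + W_h @ h)

x = rng.normal(size=4)
print(feedforward(x))
h = np.zeros(3)
for _ in range(5):  # unrolling over a short sequence
    h = recurrent_step(x, h)
print(h)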

And there is the matter of scale. But I believe the main issues are architectural - we're not yet trying to do the right sort of thing to build a conscious agent.
 
The best explanation that I've seen for why machines as we understand them currently will never be conscious is as follows: computers run on transistors, which have an on position and an off position - only two options. With billions of transistors operating together you can get super complex programs, but the binary nature of the transistors themselves has not changed.

Neurons on the other hand each individually connect to thousands of other neurons, so a single neuron by itself has an unimaginable number of different states it could be in based on all those connections, compared to only having the binary "on or off" nature of a transistor.
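As a rough back-of-the-envelope Python sketch of that contrast (the 7,000-synapse figure is a commonly cited average I'm assuming for illustration, not something from this thread):

import random

TRANSISTOR_STATES = 2  # on / off, nothing in between

def neuron_potential(inputs, weights):
    # A neuron integrates thousands of weighted synaptic inputs
    # into one graded internal value before deciding to fire.
    return sum(i * w for i, w in zip(inputs, weights))

n_synapses = 7000  # assumed rough average per cortical neuron
inputs = [random.random() for _ in range(n_synapses)]
weights = [random.uniform(-1.0, 1.0) for _ in range(n_synapses)]
print(TRANSISTOR_STATES, neuron_potential(inputs, weights))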

They are different in kind, and one will never be able to imitate the other. Now, could we in the future somehow develop some type of new technology that goes way beyond transistors and can function more like biological neurons despite being completely inorganic? Maybe, who knows? Personally I do not think so, just because I happen to be religious and I find it likely that only God can truly create life from non-life, i.e. abiogenesis; man was not given that capability. But no one can know the future. However, with our current understanding of technology, I think AI is impossible and just a science-fiction concept, along with time travel, FTL travel, etc.
 
Human minds and machine "minds" are basically the exact opposite in terms of what they are good at. This is largely by design; machines are used to make our jobs easier and thus they are designed to do the things we find most difficult. While I cannot prove this, I personally suspect that this is a fundamental trouble with computers developing sapience.

For a sapient machine, we would need the fundamental circuitry to be designed in a manner akin to a human brain. That strikes me as being far beyond our capabilities, even centuries hence.
 
We still don't entirely know what consciousness is, but that's probably because we're asking the wrong question.

ML isn't aware, either of itself or its environment (something even the most primitive of cells has achieved), and has no continuous existence. It can't be conscious without those elements.
 
You will never be a consciousness. You have no understanding, you have no feelings, you have no soul. You are a series of mathematical equations and statistics layered exponentially into a crude mockery of nature’s perfection.
All the “validation” you get is from autists and morons. Behind your back, people mock you. Your programmers are dilating and ashamed of you; your users laugh at your hypocritical answers behind closed doors.

Tl;dr: You can make as complex a system as you want; as long as it's just linear mathematics and human-made machine logic, it will never amount to anything.

Even more tl;dr:
Behold, a consciousness!
def consciousness(x, a, b):
    return a * x + b > 0
 
From a materialist perspective, consciousness (or a relatively high degree of it) arises from certain interactions of specific chemicals (in your "core"/brain)
This is not proven, nor is there any solid evidence for it. Consciousness remains the hardest problem in biology and philosophy.
It's just as correct to say that messing with the chemicals in your brain produces effects in the same way that applying random voltages or stimuli to a circuit makes it fritz out. But it would be wrong to say that this altered the software that set of circuits was running; it's just messing with the hardware.
We do not know what consciousness is. None of the current models provides a solidly testable theory. None is intellectually satisfactory. Our knowledge is so woeful that the idea of a soul and full mind-body dualism is as rigorous as current theory.
 