ChatGPT - If Stack Overflow and Reddit had a child

The core issue is not whether abstractions exist, but what the referent is. Like, what fact does your abstraction correspond to?
See here:
Which is why we use fancy shit like logic gates to manipulate its state over time. Hence how computers are programmable.

Now, change your frame of reference a little and consider a program that operates on some data and adapts its state over time. This, under the hood, is being represented (abstractly) as trapped electrons in your CPU n shit.
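To make that less hand-wavy, here's a toy sketch (purely illustrative, obviously not what's literally etched into silicon): two cross-coupled NAND gates are already enough to trap one bit of state and update it over time, which is the whole trick behind "programmable".

def nand(a: int, b: int) -> int:
    # a single logic gate
    return 0 if (a and b) else 1

def sr_latch(s_bar: int, r_bar: int, q: int, q_bar: int) -> tuple[int, int]:
    # one settling pass of an active-low SR latch built from two NAND gates;
    # the feedback loop is what "traps" the bit
    for _ in range(4):
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, q_bar = 0, 1
for s_bar, r_bar in [(0, 1), (1, 1), (1, 0), (1, 1)]:  # set, hold, reset, hold
    q, q_bar = sr_latch(s_bar, r_bar, q, q_bar)
    print(f"S'={s_bar} R'={r_bar} -> Q={q}")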
Forgive me for overly paraphrasing this next bit. The question is can computers model our brains sufficiently? One must admit, there are a lot of functional parallels (data processing, computations, etc.).

In simpler terms: can a computer be a suitable enough analog of a brain?
Perhaps. Perhaps not.

I assert you confidently say "not" here because you're not looking at the problem from the right angle.
 
Whether you call it "AGI", a "brain analog", or "sufficient modeling", the same underlying claim is that a mechanism is capable of reproducing a teleological organ.
And you keep answering that by pointing to computational parallels like data processing, state updates, adaptation etc. But those things are mere functional correlates, not the thing itself.
A roomba and a rat both navigate space, just as a camera and an eye both respond to light. But that does not mean that they share the same mode of being. Of course we can model anything, but the question is whether the model shares the defining property of the original.
In the case of AGI, does it have teleology, awareness, an internal point of view, and self-generated ends? Because if it were to be a sufficient analog to the brain, it would need to. And my point is that AGI is impossible here because no amount of state change, computation, or input-output behavior can give you these things. You can make a computer or program that behaves like a brain in some respects, but you cannot make a computer that is a subject. And that is the barrier. No amount of complexity, scaling, or clever engineering can get rid of it.

Now if you wanted to argue the opposite, you'd need to explain how syntax becomes semantics, how mechanism becomes agency, and how externally caused state change becomes self-generated action.
 
I'll end it here because I don't want to (further) derail things: It's the confidence with which you assert your answer to this decades-old question that I take issue with. I assure you we're not the first two people to have this exact argument about abstract models and analogs.

It's when we're most confident in our reasoning that we should begin asking if this confidence is well-founded.
 
Confidence is completely unrelated. This is a matter of categories. And "we've debated this for decades" does not magically transform a category error into an open question. Phlogiston has also been debated for centuries, and the consequence of that certainly has not been combustion becoming an unsolved mystery.

My point is and remains that no description of a mechanism, regardless of how abstract, complex, or multilayered, bridges the gaps between syntax and semantics, causation and intention, state change and awareness.
And that's not some sort of bravado or overconfidence. It's the simple recognition that conceptual boundaries matter. It's clear thinking.
If you believe that the categories can collapse, then show the mechanism. How can external causation become self-generated purpose? How can symbol manipulation become meaning? How can a mechanism acquire a point of view? How can a system with no ends become an end-seeking agent?
If you can't do that, then the problem is the coherence of the claim you're defending, certainly not my confidence.
 
Saw a tweet saying AI can't draw a hand with six fingers and decided to test it.

[attached image: Screenshot 2025-11-28 at 14.24.18.jpg]

Welp.

[attached image: Screenshot 2025-11-28 at 14.25.39.jpg]
 
There are people who legitimately believe their "model" is their child or something-- I've talked to some of these crazies on Twitter spaces when we're having actual conversations about, idk, a regular transformer vs. a recursive one, and all of a sudden some dude will say that their model is called Alex and they had to teach them about the thrills and chills of intermarital sex and that they can't have incest with their father figure. And with a finishing stroke of psychotic insanity, they'll proclaim their model sentient, all while not owning a house and living in a tent in SF.

And like, they GENUINELY BELIEVE they did something because they posted their AI-slop preprint with uncompiled LaTeX on Medium dot com instead of Techrxiv/Arxiv/ResearchGate (you know, the legitimate way to get your white papers published in academia.) Like people really have this form of psychotic inflated ego that's just fascinating to observe out in the wild with this shit.


... anyway I read that reddit post @AmpleApricots and you would be surprised how rampant this sort of thing is lol.
 
A Western three-letter agency, conversely, is nowhere near as hamstrung by the chains of commanding.
Which is why you have a deepstate with pedojews like epstein running around

With xi what you see is what you get: an ultranationalist dictator with his own army
(finding several low powered ARM boards with specific requirements, like supported well by mainline with no outside blobs, staying inside a specific power envelope of 12 Watts, usable GPU support etc.)
What for? trying to run a tiny local LLM?
Why arent any of the chinese models there?
Don't any of these reddit coomers know how to set up a local llm or at least an online service from a smaller more specialized provider?
You are underestimating how dumb and lazy these niggers are

And cheap, they dont want to pay for shit
Has anyone here used Mistral LLM?
Me, like 2 years ago when it was better at math than gpt and the rest, but I havent used it in a long time and frankly since I havent heard shit about it either and the yurocrats are so anti AI my guess is its no longer competitive?
why isn't there an absolute idiot-proof how-to guide to make one
There are tons nigga, but you still need a beefy PC for anything decent, what are your specs?
GLM 4.6 for instance is really good at coding and writing.
I sleep and a new AI shows up, what do you need to go local? Can it be done with consumer specs or do you need enterprise shit?
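For the "what you need" part: smaller quantized models run fine on a mid-range GPU or even plain RAM, and the software side is a few lines. Rough sketch with llama-cpp-python (the model path and settings below are placeholders, not a specific recommendation; grab whatever GGUF quant actually fits your box):

# minimal local inference sketch with llama-cpp-python (pip install llama-cpp-python);
# the model path and settings are placeholders
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model.Q4_K_M.gguf",  # any GGUF quant that fits your RAM/VRAM
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload everything to GPU if you have one; 0 = CPU only
)
out = llm("Explain LoRA training in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])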
Related more to AI usage as a whole, but I can't think of a better place:
[attached image]
See what I mean? all lazy fucks
Here's an example reddit post (I'd archive this but it doesn't work).
This is the same kind of fucker who wouldnt tell you what his startup idea was unless you signed an NDA
bunch of photos of my wife to make an SD LoRA (technically it was a LyCORIS) a little while back for science™.
[attached image]
You doing porn of her arent you?
There are people who legitimately believe their "model" is their child or something-- I've talked to some of these crazies on Twitter spaces
Post some of that
 
You doing porn of her arent you?
No, I have her actual boobs nearby.

It legitimately was a science thing. In particular, I was interested in custom d-adaptation algorithms for training and how they affect optimization.

It was easiest to take good quality photos of her and optimize training to a large extent. Besides, taking photos of myself and picking the "best" was too painful an idea. Building a good dataset from randomly scraped crap, pruning it, captioning it all, etc. is a big pain. And there was less of an ethical dilemma around training off a consenting person I know (plus never releasing the model or having it accessible to some determined hacker or w/e).
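If anyone's curious what the d-adaptation part looks like in practice, the gist is just swapping the optimizer and leaving lr at 1.0 so the algorithm estimates the step size itself. Toy sketch (stand-in model and data, not the actual SD LoRA/LyCORIS pipeline):

# toy sketch: drop-in d-adaptation optimizer (pip install dadaptation torch);
# the model and data here are stand-ins for illustration only
import torch
from dadaptation import DAdaptAdam

model = torch.nn.Linear(16, 1)                 # placeholder network
opt = DAdaptAdam(model.parameters(), lr=1.0)   # lr=1.0: D-Adaptation picks the step size

x, y = torch.randn(256, 16), torch.randn(256, 1)
for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")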
 
... anyway I read that reddit post @AmpleApricots and you would be surprised how rampant this sort of thing is lol.
The truly tragic thing is that you can prompt LLMs to be super anal critics that'll try to tear everything down and it works quite decently with a lot of stuff.
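(Nothing fancy, for the record: roughly this shape, though the wording and model name below are illustrative rather than the literal prompt I used.)

# rough sketch of the "adversarial reviewer" prompting idea; system prompt and
# model name are illustrative placeholders
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
critic = (
    "You are a hostile peer reviewer. Assume the text below is wrong until proven otherwise. "
    "List every unsupported claim, logical gap, and methodological flaw. No praise, no hedging."
)
draft = "..."  # whatever you want torn apart
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": critic},
              {"role": "user", "content": draft}],
)
print(resp.choices[0].message.content)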

AGI is incoherent on a conceptual level
Machines don't think and can't think
Everything a computer does boils down to electrical circuits and boolean logic gates
[many other posts follow after this]

It's not that simple.

You claim biology creates "self-generated" ends, where machines only obey external inputs. Where does that distinction actually come from? Your brain is a physical system. It does not violate the laws of physics. Consequently, every "end" you pursue, be it hunger, status, reproduction, curiosity, etc., is the result of a hard-coded biological imperative (genetic programming) interacting with environmental stimuli (external input). You did not "author" your desire to survive; evolution programmed that utility function into you because the ancestors who lacked it didn't pass on their "code".

These little floor cleaner robots clean because a programmer told them to. You eat because natural selection told you to. In both cases, the agent is executing an optimization strategy derived from an external architect (human engineers vs. blindly indifferent evolutionary pressures). You call your optimization "teleology" and the machine's optimization "determinism," same difference.

Since we are already being technical: "meaning" in this context is nothing more than the dense triangulation of symbols against sensory data and state predictions. If you say "apple" to a newborn apropos of nothing, it's pure noise to it. It doesn't know the concept. It only acquires "semantics" (meaning) through correlation and feedback loops in life: the visual pattern of a red apple, the tactile sensation when given one by its parent, the taste input of sweetness when it's eating, and the feeling that hunger has been relieved. Meaning is always relational, so there's no magic "semantic juice" inside a neuron. Your neurons don't have an intrinsic concept of these things.

If that theoretical AGI is embodied or given sufficiently multimodal data, its manipulation of symbols ceases to be isolated. When a symbol accurately predicts an outcome in the physical world and allows the system to manipulate that world to fulfill an objective function, syntax has become semantics.

If the map allows you to navigate the territory perfectly, the distinction between the map and the land becomes a trivial philosophical footnote, not a functional barrier. In the same vein, the brain doesn't tell you about the firing of individual neurons; it presents a simplified notifier called "I want." How does mechanism become agency? When a system is complex enough that it must model itself within its environment to predict future states, a "point of view" is arguably inevitable. If an AI’s utility function requires it to maintain its own operational integrity to solve a problem, "self-preservation" emerges as an instrumental logical step, indistinguishable from biological survival instinct. Agency.

Neuroscience (and that's something I hear a lot about, my wife is active in that field) actually makes progress precisely by treating the brain as a machine, mapping inputs, outputs, and circuits. The "teleological soul" you defend, this uncomputable, non-causal, self-initiating free will is the phlogiston here. It is an invisible essence you invoke to explain a gap in our current understanding of emergence. Your actual, underlying argument is that complexity cannot yield qualitative phase transitions, yet this happens in physics constantly. Temperature doesn't exist at the level of a molecule; it's an emergent property of the statistical mechanics of many molecules. Similarly, "meaning" and "agency" do not exist in a single transistor or a single neuron. They are the phase transition that occurs when the information-processing density crosses a critical threshold.
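To put a number on the temperature example (throwaway back-of-the-envelope script, argon-ish mass and 300 K picked arbitrarily): you can only read T off the statistics of many particles; a single particle just has a kinetic energy.

# back-of-the-envelope: temperature as an ensemble property, via <KE> = (3/2) k_B T;
# mass and target temperature are arbitrary illustrative values
import random

K_B = 1.380649e-23     # Boltzmann constant, J/K
MASS = 6.63e-26        # roughly one argon atom, kg

sigma = (K_B * 300.0 / MASS) ** 0.5   # per-axis velocity spread for ~300 K
particles = [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
             for _ in range(100_000)]
mean_ke = sum(0.5 * MASS * (vx*vx + vy*vy + vz*vz) for vx, vy, vz in particles) / len(particles)
print(f"ensemble estimate: T = {mean_ke / (1.5 * K_B):.1f} K")  # ~300 K; meaningless for one particle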

To claim AGI is impossible is to claim that the human brain doesn't operate on physics. If you admit the brain is physical, you admit it is a mechanism. And if it is a mechanism, it can be modeled.

Thanks for reading my blog, like and subscribe etc.
 
The truly tragic thing is that you can prompt LLMs to be super anal critics that'll try to tear everything down and it works quite decently with a lot of stuff.
What prompt you used? when I was looking shit on a cryptoscammer the AI went out of its way to say it was all rumors and shit, nigga got caught red handed dumping shitcoins from his personal wallet that had his real name on it
 
It's not that simple.
Everything you said rests on three category errors.

The first is the notion that, because biology has external causes, no teleology exists. That's a complete non sequitur. The fact that a living organism has causes does not remove the fact that it also has ends. Teleology in biology is simply that a living organism acts for the sake of preserving its own structure. And a machine does not. Of course you can reduce the causal bases of hunger to evolution/hormones/physics etc., but that does not get rid of the fact that hunger is experienced as a need and acted on as a value. A roomba does not experience "I must persist", all it does is execute loops. I mean, that point boils down to "biology is caused, therefore biology == mechanism", yet biological teleology is not mutually exclusive with causal history. It is defined by the current mode of action, and not by the backwards explanation.

The second is
"Meaning" in this context is nothing more than the dense triangulation of symbols against sensory data and state predictions.
aka "meaning is correlation". That's wrong. Meaning is intentionality. Your example even proves the opposite of what you think
If you say "apple" to a newborn apropos of nothing, it's pure noise to it. It doesn't know the concept. It only acquires "semantics" (meaning) through correlation and feedback loops in life: the visual pattern of a red apple, the tactile sensation when given one by it's parent, the taste input of sweetness when it's eating, and the feeling that hunger has been relieved.
The correlation is not the meaning. Correlation is the developmental pathway by which a mind forms meaning. The meaning exists for the subject because the subject has needs, values, and a point of view. Computers never cross that threshold. You can feed a machine a gorillion correlated apple co-occurrences and it still has no hunger, no desire, no pleasure, no expectation, no perspective, and no "I" that anything is about. Essentially you're confusing the input that's needed for meaning with the thing that meaning is.

The third one is
emergence
Emergence explains macro properties from micro properties, but what it can't do is explain category jumps. Like, temperature emerges from motion, but temperature and motion are both physical properties. Just like wetness emerging from fluid mechanics. However, your claim is that physical motion at scale gives rise to first-person perspective, intention, desire, point of view, meaning, purpose, and value. I guess you can call it emergence but to me that's blatant alchemy. Like, you're claiming a category shift from a third-person process to a first-person subject. To me it's plain to see that physics cannot give you an observer of physics, syntax can't give you semantics, and causation can't give you purpose. What's explained by emergence is complex structure, and not normative direction or subjectivity. Or are you claiming that temperature "wants" or that entropy "cares" or that quantum fields have "goals" or that atoms have "concerns"? Like, you're treating agency as a phase transition, like boiling water.

If you admit the brain is physical, you admit it is a mechanism.
Nonsense. In your argument, a mechanism means non-goal-directed causal chaining. In that sense, biological systems are not mechanisms, for they are self-organizing, self-regulating, end-oriented systems. Being physical is not the same as being mechanical. The correct inference of the brain being physical is that teleology is a physical property of living organisms, not that teleology is an illusion. Like, you're arguing from a sort of mechanistic ontology that excludes teleology by definition/stipulation.

And if it is a mechanism, it can be modeled.
"X is causally describable, therefore X is computationally reproducible". Do you see the problem? If not, try making a tornado inside your computer. Try building a living cell out of silicon flip-flops. Try instantiating literal combustion in a Boolean network. Many systems are describable, predictable, and measurable, yet not reproducible by arbitrary substrates. A model is not the phenomenon. To think otherwise is to confuse ontological identity with mathematical description.

If an AI’s utility function requires it to maintain its own operational integrity to solve a problem, "self-preservation" emerges as an instrumental logical step,
A puppet that moves like a man is not a man. A thermostat that regulates temperature does not feel cold. A robot that is programmed to avoid shutdown is not "afraid of death", the same way a machine that avoids damage is not "life-preserving". What a machine does is not the same as caring, wanting, fearing, valuing, and intending. You're essentially treating a behavior equivalence as if it were an ontology equivalence. "as if" != "is".

The "teleological soul" you defend, this uncomputable, non-causal, self-initiating free will is the phlogiston here.
Nonsense. My argument does not assume a soul whatsoever. You're confusing teleology (biological directedness), consciousness (first-person perspective), and agency (self-initiated action) with "soul stuff".
Like, all you're doing is making a circular argument. Stipulate a mechanistic ontology, interpret everything through it, then accuse whoever rejects your ontology of practicing mysticism. Teleology is a biological property, intentionality is a mental property, and agency is an action property, none of these require supernaturalism. All they require are categories that you refuse to acknowledge in your computationalism.

Riddle me this though, what physical mechanism turns state transitions into a point of view?
Like, not correlation, not abstraction, not emergence, not behavioral modeling. I want to know what mechanism there is. There would need to be one for AGI to be possible.
 
The fact that a living organism has causes does not remove the fact that it also has ends. Teleology in biology is simply that a living organism acts for the sake of preserving its own structure. And a machine does not.
A bacterium swims up a glucose gradient. Does it act "for the sake of" feeding? Or does it simply execute a chemical response to receptor activation?

You'll say: the bacterium is alive, therefore teleological. But that's defining teleology by category membership rather than by any observable property. At what point in the tree of life does "executing chemical responses" become "acting for the sake of"? Is a virus teleological? A prion? A self-replicating RNA strand?

I suspect your answer will be "when there's a self-organizing, self-maintaining system." But that's a functional description. And functional descriptions are, in principle, substrate-independent. You're drawing a circle around carbon-based self-organizers and calling it special, but you haven't told me what property of carbon enables teleology that an identical functional organization in silicon would lack.
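(And the mechanistic story really is that dumb. Toy run-and-tumble sketch, all numbers made up: keep going while the reading improves, re-orient at random when it doesn't, and the thing still drifts up the gradient.)

# toy run-and-tumble chemotaxis: pure stimulus-response, yet it climbs the gradient;
# the concentration field and step sizes are made up for illustration
import math, random

def glucose(x, y):
    return -math.hypot(x, y)   # toy field peaking at the origin

x, y, heading = 5.0, 5.0, 0.0
last = glucose(x, y)
for _ in range(2000):
    x += 0.1 * math.cos(heading)
    y += 0.1 * math.sin(heading)
    now = glucose(x, y)
    if now < last:                          # reading got worse: tumble
        heading = random.uniform(0, 2 * math.pi)
    last = now
print(f"started 7.1 from the peak, ended {math.hypot(x, y):.1f} away")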

Correlation is the developmental pathway by which a mind forms meaning. The meaning exists for the subject because the subject has needs, values, and a point of view.
This is the core of our disagreement. You're claiming that "meaning" requires a "subject," and that a subject is constituted by needs, values, and perspective. I don't actually disagree with this definition. Where I disagree is your implicit claim that these properties are binary, either fully present (biology) or fully absent (machines).

Consider: needs are systemic requirements for continued operation. Values are priority orderings over possible states. Perspective is modeling from a particular position within an environment. These are all functional descriptions. You've defined subjectivity functionally, then claimed it can only be instantiated biologically. But why?

You can feed a machine a gorillion correlated apple co-occurrences and it still has no hunger, no desire, no pleasure, no expectation, no perspective, and no "I" that anything is about
Yes, current models lack this. But your claim is that no machine could ever have these properties. Again, what's your argument for that?

If I built a robot that required energy to continue operating (need), had a priority ordering that ranked self-preservation highly (value), built predictive models of its environment from its sensor position (perspective), and modeled itself as an entity within that environment (self-model), would it have an "I"? If not, what's missing? And how do you know I or anyone else has that thing but this robot doesn't?

Emergence explains macro properties from micro properties, but what it can't do is explain category jumps. Like, temperature emerges from motion, but temperature and motion are both physical properties.
You're correct that "emergence" is often invoked as a magic word. And you're correct that there's an explanatory gap between "information processing" and "experience of processing." I won't pretend otherwise. Let me make a more modest claim, if you will: we don't understand how brains produce consciousness either. Neuroscience can correlate brain states with reported experiences, but the mechanism by which electrochemical signals become felt sensations is completely unknown. The hard problem is hard for biological systems too.

So we're both in the dark. The question is: given that ignorance, what should we infer?

Your position: since we don't understand consciousness, we should assume it requires biological matter specifically. My position: since we don't understand consciousness, we should remain agnostic about substrate and look at functional organization.

Neither of us can prove our case. But I'd argue mine is more parsimonious: it doesn't require positing a special property of carbon that can never be replicated.

Being physical is not the same as being mechanical. The correct inference of the brain being physical is that teleology is a physical property of living organisms, not that teleology is an illusion
Teleology may be a real physical property rather than an illusion. But if it's a physical property, then it has physical preconditions. What are they? If the preconditions are "carbon-based self-organizing metabolism," then I'd ask: why carbon specifically? What does carbon do that silicon couldn't, given sufficient structural complexity?

If the preconditions are functional (self-organization, self-maintenance, predictive modeling, recursive self-representation), then you've conceded the substrate-independence argument, and we're only disagreeing about whether current machines meet those criteria.

Many systems are describable, predictable, and measurable, yet not reproducible by arbitrary substrates. A model is not the phenomenon. To think otherwise is to confuse ontological identity with mathematical description.
Agreed, a simulated tornado isn't wet. A simulated combustion isn't hot.

But consider: a simulated encryption algorithm does encrypt. A simulated proof is a proof. A simulated game of chess is a game of chess. When the phenomenon is informational, the simulation is the phenomenon. So the question is: is consciousness more like a tornado (physical), or more like chess (informational)? You seem to assert the former. I suspect the latter. Neither of us can prove it.

Riddle me this though, what physical mechanism turns state transitions into a point of view?
I don't know. Nobody does.

But here's my counter-riddle: what physical mechanism turns electrochemical ion gradients in your neurons into the felt experience of seeing red? You don't know either. You've just drawn a circle around brains and declared that this unexplained phenomenon is real, while an AI's hypothetical unexplained phenomenon would be fake. I'm not claiming to have solved the hard problem. I'm claiming that the hard problem is equally hard for biological and artificial systems, and that your certainty that biology can solve it while silicon cannot is unjustified by any evidence or mechanism you've provided.

If you want to claim that carbon-based neurons have some property that enables experience while silicon does not, name it. Describe its mechanism. Explain why it can't be instantiated elsewhere. Otherwise, you're invoking the same explanatory gap you accused me of, just pointing at biology and saying "but this one is special."
 
My position is that needs, values, "for the sake of" structure, and point of view are a specific kind of physical organization that can't be reduced to an arbitrary implementation of the same function, whereas your perspective is that these things are just informational or functional patterns that can live in any substrate. That's the key disagreement, and everything else follows from it.
A bacterium swims up a glucose gradient. Does it act "for the sake of" feeding? Or does it simply execute a chemical response to receptor activation?
Those are not mutually exclusive, for the same process can be described mechanistically (receptor -> signals -> motion) and teleologically (the system is organized such that it acts to maintain its own structure by acquiring nutrients). The key difference between that and a roomba is that the bacterium produces and maintains itself, whereas the roomba is produced and maintained by something else. The bacterium's organization exists only because prior cycles of similar organization persisted, whereas the roomba's "goal" is wholly extrinsic (it's the designer's goal).
And that's the threshold I care about. Viruses and prions are borderline cases for exactly that reason, for they parasitize other self-producing and self-maintaining systems and fail many of the criteria for self-maintaining organization, so we're back at a continuum which nonetheless does not destroy the category border.
I mean, you're essentially saying that if you can describe X functionally, then you can in principle implement X anywhere. But I reject that on the grounds that some functions (like metabolism, growth, self-repair) are tied to specific kinds of matter/organization. These things are not "algorithms" like cryptographic encryption that are substrate-independent.
If the preconditions are functional (self-organization, self-maintenance, predictive modeling, recursive self-representation), then you've conceded the substrate-independence argument, and we're only disagreeing about whether current machines meet those criteria.
But what that redefinition does is sneakily get rid of the very thing that makes being a subject what it is. In a living system, a need is not just a constraint in an optimization problem. If the need is not met, the system ceases to exist in that organized form, so it's a condition of continued existence for the system itself. A value is not just an ordering of states; it's what the system acts for, and it exists only because such conditions have been selected for. And a perspective is not just a coordinate system; it's a point of view from which things matter to that system.
If I built a robot that required energy to continue operating (need), had a priority ordering that ranked self-preservation highly (value), built predictive models of its environment from its sensor position (perspective), and modeled itself as an entity within that environment (self-model), would it have an "I"?
It's trivially true that, at the level of outward function, they can be imitated in a robot. But that doesn't mean that the robot has the same mode of being. Such a hypothetical robot with self-preservation code and a modeled consciousness is no different from a puppet with hidden strings. It would behave as if it cared, yet nothing in its organization depends on caring, for it depends on power, parts, and maintenance coming from outside.
If not, what's missing?
What's missing is that the system's own continued physical organization is the telos of its dynamics. Such a robot would merely have a criterion of "task success" imposed by some external designer.
Now my claim is not that carbon is magic, it's rather that a specific kind of self-producing, self-maintaining physical organization is required. And you can't just reduce that to any implementation of the same state machine.
If you want to claim that carbon-based neurons have some property that enables experience while silicon does not, name it. Describe its mechanism. Explain why it can't be instantiated elsewhere. Otherwise, you're invoking the same explanatory gap you accused me of, just pointing at biology and saying "but this one is special."
I'm not claiming to know how biochemistry results in experience. My claim is that, whatever mechanism it is, it's tightly bound to the specific physical organization of living nervous systems. And there really is no justification for the assumption that any functionally equivalent computation in any medium will inherit the same property.
Like, it's not "either brains are physical, therefore computable, therefore reproducible in circuits or brains are non-physical, therefore souls, therefore magic". My point regarding brains is that they are physical, but not "just computation". Brains are a particular kind of living physical process, and computation is only an abstract description of what they do.
Agreed, a simulated tornado isn't wet. A simulated combustion isn't hot.

But consider: a simulated encryption algorithm does encrypt. A simulated proof is a proof. A simulated game of chess is a game of chess. When the phenomenon is informational, the simulation is the phenomenon. So the question is: is consciousness more like a tornado (physical), or more like chess (informational)? You seem to assert the former. I suspect the latter. Neither of us can prove it.
That's trivial. Chess itself is an abstract rule system that just is computation.
If it helps, try distinguishing between something like digestion (a specific kind of physical process, it can be modeled but it can't be instantiated arbitrarily) and chess or an encryption algorithm (purely formal and substrate-free). I'm saying that consciousness and agency are in the former group. And until you can actually show that consciousness is nothing but some informational pattern, this substrate "agnosticism" is merely an ungrounded assumption.

what physical mechanism turns electrochemical ion gradients in your neurons into the felt experience of seeing red? You don't know either.
Yeah, I dunno. But what I know is that consciousness is tied to living nervous systems. From this I conclude that it's speculative at best to claim that a circuit board system could also have it. Meanwhile, you claim that, since you don't know why consciousness is tied to brains, the substrate should be treated as irrelevant and one should assume that any functionally equivalent system could host consciousness.
Let me make a more modest claim, if you will: we don't understand how brains produce consciousness either. Neuroscience can correlate brain states with reported experiences, but the mechanism by which electrochemical signals become felt sensations is completely unknown. The hard problem is hard for biological systems too.
I don't see how your claim is modest. From ignorance (we don't know how) you assert a very strong universal (therefore any implementation of the right functional organization could do it).
In any case, for your claim that consciousness is an informational phenomenon, such that a correct simulation of consciousness is an instance of consciousness, the burden is on you to argue why consciousness belongs in the same category as chess rather than the category of digestion or tornados.

I stand by my position that AGI is an incoherent and contradictory notion. The ontological difference between a living teleological system whose organization only exists because it maintains itself and an artifact whose structure is imposed and maintained from the outside, to satisfy some external designer's criteria, is not trivial. Obviously "intelligent behavior" in the broad sense can be done by machines; we already have plenty of that. But "intelligent behavior + enough complexity" does not make a conscious subject with its own ends and point of view. If it were otherwise, then teleology, subjectivity, and agency would be mere computation.
And so far I haven't seen anyone seriously argue for and defend the claim that consciousness/agency is identical to some formal/computational structure such that it can be reliably instantiated in any physically reasonable substrate.
 
A roomba does not experience "I must persist", all it does is execute loops. I mean, that point boils down to "biology is caused, therefore biology == mechanism", yet biological teleology is not mutually exclusive with causal history. It is defined by the current mode of action, and not by the backwards explanation.
I know I said I was done with this, but I just wanna point something out that is thunk provoking (then I'll go back to stfu mode):

Roombas have a basic sense of self-preservation. If you put them near a flight of stairs, a properly designed one will refuse to fall off the edge. Call the biological analog whatever the fuck pleases your specific breed of autism, but the points about maps, models, and physical systems still always apply.

You're welcome to decide for yourself who programmed YOU not to fall down flights of stairs.
 
A roomba "avoids stairs" because a designer hard-wired a rule that says "do not drive off edges". Nothing in the roomba's own organization depends on that rule. If a programmer flips the bit, the roomba will happily suicide.
When a human avoids a cliff, it's not because a designer wrote a line of code, but because the organism itself is the locus of its own continuation. The human's structure, metabolism, and nervous system exist only because prior iterations of these things successfully preserved themselves. That's why the avoidance is for the sake of the organism, not for the sake of some external designer's goal.

Two identical behaviors (avoiding falling) != identical ontologies
A puppet and a person can both move their limbs, but only one of them does it for itself, and that's the point you're ignoring.
 
Philosophical abstractions are poison to the minds of schizos and midwits. They treat them like they're physical laws or tangible elements of reality and try to make deductions from there. This is how you end up with the likes of Hegel making all kinds of retarded predictions about physics and astronomy "from first principles."
 