My position is that needs, values, "for the sake of" structure, and point of view are a specific kind of physical organization that can't be reduced to just any implementation of the same function, whereas your perspective is that these things are just informational or functional patterns that can live in any substrate. That's the key disagreement, and everything else follows from it.
A bacterium swims up a glucose gradient. Does it act "for the sake of" feeding? Or does it simply execute a chemical response to receptor activation?
Those are not mutually exclusive: the same process can be described mechanistically (receptor -> signals -> motion) and teleologically (the system is organized such that it acts to maintain its own structure by acquiring nutrients). The key difference between the bacterium and a Roomba is that the bacterium produces and maintains itself, whereas the Roomba is produced and maintained by something else. The bacterium's organization exists only because prior cycles of similar organization persisted; the Roomba's "goal" is wholly extrinsic (it's the designer's goal).
And that's the threshold I care about. Viruses and prions are borderline cases for exactly that reason: they parasitize other self-producing, self-maintaining systems and fail many of the criteria for self-maintaining organization. So we're back at a continuum, but a continuum does not erase the category border.
I mean, you're essentially saying that if you can describe X functionally, then you can in principle implement X anywhere. I reject that, because some functions (like metabolism, growth, and self-repair) are tied to specific kinds of matter and organization. They are not substrate-independent "algorithms" like cryptographic encryption.
If the preconditions are functional (self-organization, self-maintenance, predictive modeling, recursive self-representation), then you've conceded the substrate-independence argument, and we're only disagreeing about whether current machines meet those criteria.
But what that redefinition does is quietly remove the very thing that makes being a subject what it is. In a living system, a need is not just a constraint in an optimization problem: if the need is not met, the system ceases to exist in that organized form, so it's a condition of the system's own continued existence. A value is not just an ordering of states; it's what the system acts for, and it exists only because such conditions have been selected for. And a perspective is not just a coordinate system; it's a point of view from which things matter to that system.
If I built a robot that:
- required energy to continue operating (need),
- had a priority ordering that ranked self-preservation highly (value),
- built predictive models of its environment from its sensor position (perspective),
- and modeled itself as an entity within that environment (self-model),
would it have an "I"?
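For concreteness, those four criteria can be sketched as a toy program. Everything here (the class name, the energy threshold, the toy world) is an illustrative assumption, not a claim about any real robot:

```python
# A minimal sketch of the hypothetical robot. All names and numbers are
# illustrative assumptions chosen for the thought experiment.

class SketchAgent:
    def __init__(self):
        self.energy = 10.0                           # "need": must stay above 0 to operate
        self.values = ["self_preservation", "task"]  # "value": explicit priority ordering
        self.world_model = {}                        # "perspective": built from its own sensors
        self.self_model = {"id": "agent", "energy": self.energy}  # models itself in the world

    def sense(self, position, reading):
        # Perspective: observations are indexed from the agent's sensor position.
        self.world_model[position] = reading

    def step(self):
        # Value ordering in action: self-preservation outranks the task
        # once energy runs low.
        self.energy -= 1.0
        self.self_model["energy"] = self.energy
        if self.energy < 5.0:
            return "seek_charger"
        return "do_task"

agent = SketchAgent()
agent.sense((0, 0), "wall")
actions = [agent.step() for _ in range(8)]
```

Running eight steps, the agent does its task until the energy "need" becomes pressing, then switches to self-preservation, exactly the functional profile described above. The question is whether instantiating that profile is sufficient for an "I".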
It's trivially true that, at the level of outward function, these can be imitated in a robot. But that doesn't mean the robot has the same mode of being. Such a hypothetical robot with self-preservation code and a modeled consciousness is no different from a puppet with hidden strings. It would behave as if it cared, yet nothing in its organization depends on caring, because its power, parts, and maintenance all come from outside.
In a living system, the system's own continued physical organization is the telos of its dynamics. The robot would merely have a criterion of "task success" imposed by some external designer.
Now, my claim is not that carbon is magic; it's that a specific kind of self-producing, self-maintaining physical organization is required, and you can't reduce that to just any implementation of the same state machine.
If you want to claim that carbon-based neurons have some property that enables experience while silicon does not, name it. Describe its mechanism. Explain why it can't be instantiated elsewhere. Otherwise, you're invoking the same explanatory gap you accused me of, just pointing at biology and saying "but this one is special."
I'm not claiming to know how biochemistry results in experience. My claim is that, whatever mechanism it is, it's tightly bound to the specific physical organization of living nervous systems. And there really is no justification for the assumption that any functionally equivalent computation in any medium will inherit the same property.
Like, it's not "either brains are physical, therefore computable, therefore reproducible in circuits" or "brains are non-physical, therefore souls, therefore magic." My point is that brains are physical but not just computation: they are a particular kind of living physical process, and computation is only an abstract description of what they do.
Agreed, a simulated tornado isn't wet. A simulated combustion isn't hot.
But consider: a simulated encryption algorithm does encrypt. A simulated proof is a proof. A simulated game of chess is a game of chess. When the phenomenon is informational, the simulation is the phenomenon. So the question is: is consciousness more like a tornado (physical), or more like chess (informational)? You seem to assert the former. I suspect the latter. Neither of us can prove it.
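The encryption point can be made concrete with a toy sketch. Two deliberately different implementations of the same XOR cipher stand in for two "substrates"; the function names and message are illustrative assumptions:

```python
# Toy illustration of the "informational" claim: a simulated encryption
# really does encrypt. Two different implementations of the same XOR
# cipher compute the identical function, so either one IS the cipher.

def xor_encrypt_loop(data: bytes, key: bytes) -> bytes:
    # "Substrate" 1: explicit bytewise loop.
    out = bytearray()
    for i, b in enumerate(data):
        out.append(b ^ key[i % len(key)])
    return bytes(out)

def xor_encrypt_zip(data: bytes, key: bytes) -> bytes:
    # "Substrate" 2: generator over a repeating keystream.
    keystream = (key[i % len(key)] for i in range(len(data)))
    return bytes(b ^ k for b, k in zip(data, keystream))

msg, key = b"substrate", b"key"
c1 = xor_encrypt_loop(msg, key)
c2 = xor_encrypt_zip(msg, key)
# Identical ciphertexts, and XOR-ing again recovers the plaintext:
# there is no further fact about "which one really encrypted".
```

Whether consciousness belongs in this category, like chess, or in the tornado category is exactly what's in dispute.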
That's trivial. Chess itself is an abstract rule system that just is computation.
If it helps, try distinguishing between something like digestion (a specific kind of physical process, it can be modeled but it can't be instantiated arbitrarily) and chess or an encryption algorithm (purely formal and substrate-free). I'm saying that consciousness and agency are in the former group. And until you can actually show that consciousness is nothing but some informational pattern, this substrate "agnosticism" is merely an ungrounded assumption.
What physical mechanism turns electrochemical ion gradients in your neurons into the felt experience of seeing red? You don't know either.
Yeah, I dunno. But what I do know is that consciousness is tied to living nervous systems. From that I conclude it's speculative at best to claim that a circuit-board system could also have it. Meanwhile, you claim that, since we don't know why consciousness is tied to brains, the substrate should be treated as irrelevant and we should assume that any functionally equivalent system could host it.
Let me make a more modest claim, if you will: we don't understand how brains produce consciousness either. Neuroscience can correlate brain states with reported experiences, but the mechanism by which electrochemical signals become felt sensations is completely unknown. The hard problem is hard for biological systems too.
I don't see how your claim is modest. From ignorance (we don't know how) you infer a very strong universal (any implementation of the right functional organization could do it).
In any case, for your claim that consciousness is an informational phenomenon, such that a correct simulation of consciousness is an instance of consciousness, the burden is on you to argue why consciousness belongs in the same category as chess rather than the category of digestion or tornados.
I stand by my position that AGI is an incoherent, contradictory notion. The ontological difference between a living teleological system, whose organization exists only because it maintains itself, and an artifact, whose structure is imposed and maintained from outside to satisfy an external designer's criteria, is not trivial. Obviously "intelligent behavior" in the broad sense can be done by machines; we already have plenty of that. But "intelligent behavior plus enough complexity" does not make a conscious subject with its own ends and point of view. If it did, teleology, subjectivity, and agency would be mere computation.
And so far I haven't seen anyone seriously argue and defend the claim that consciousness or agency is identical to some formal, computational structure, such that it could be reliably instantiated in any physically reasonable substrate.