This is a fun thing to think about. I'm also on the road and somehow broke the keyboard on my luggable. The real question is, would a conscious being use a 40% ortho PCB from AliExpress? The answer *might* surprise you. But to get back to the topic:
Those are not mutually exclusive, for the same process can be described mechanistically (receptor -> signals -> motion) and teleologically (the system is organized such that it acts to maintain its own structure by acquiring nutrients). The key difference between that and a roomba is that the bacterium produces and maintains itself, whereas the roomba is produced and maintained by something else.
So the criterion is autopoiesis.
Suppose I build a machine that harvests raw materials from its environment, synthesizes replacement parts, repairs damage to itself, and produces copies of itself, with no human intervention required after initial construction (some people think that's what "UFOs" are, btw; I have no real opinion on the topic and I'm not trying to drag them into this, it's just a fun sci-fi thing to think about). Does this machine have genuine teleology? If yes: then the criterion is functional (self-maintenance, self-reproduction) rather than substrate-specific, and we're just disagreeing about engineering difficulty. If no: then autopoiesis isn't actually your criterion. There's something else you're pointing at, perhaps "arose through natural selection" or "made of organic molecules", that you haven't made explicit. Which is it? (A toy sketch of what I mean is below.)
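To pin down what "functional criterion" means here, this is a toy sketch of that machine's control loop. Every name and constant in it is made up for illustration, it's not a design, but notice that nothing in the loop is substrate-specific: self-maintenance and self-reproduction expressed as pure organization.

```python
# Toy sketch only: every name and constant here is hypothetical.
REPLICATION_COST = 5.0   # materials needed to build a copy of itself
WEAR = 0.05              # structural decay per cycle

class SelfMaintainingMachine:
    def __init__(self):
        self.integrity = 1.0   # 1.0 = fully organized; <= 0.0 = ceased to exist
        self.materials = 0.0

    def step(self, harvested):
        """One cycle: acquire materials, repair self, maybe replicate."""
        self.materials += harvested                  # harvest raw materials
        repair = min(self.materials, 1.0 - self.integrity)
        self.integrity += repair                     # synthesize parts, repair damage
        self.materials -= repair
        self.integrity -= WEAR                       # unmet needs -> disintegration
        if self.materials >= REPLICATION_COST:       # produce copies of itself
            self.materials -= REPLICATION_COST
            return SelfMaintainingMachine()
        return None

    @property
    def alive(self):
        # a "need" in the structural sense: a condition of continued existence
        return self.integrity > 0.0
```

Run it in a stingy enough environment and the instance stops existing as an organized thing. Its "needs" are conditions of its continued existence in exactly the structural sense usually claimed for the bacterium.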
The bacterium's organization exists only because prior cycles of similar organization persisted, whereas the roomba's "goal" is wholly extrinsic (it's the designer's goal).
This is an argument from historical origin. You're saying the bacterium has genuine ends because its design emerged from selection, while the roomba's ends are fake because they came from an engineer. But why should historical origin determine present properties? A lab-synthesized diamond has the same hardness as a natural diamond. A human baby conceived through IVF has the same moral status as one conceived naturally. We don't normally think that how something came to be determines what it currently is. If I used evolutionary algorithms to evolve a self-maintaining system (no human designer specifying goals, just selection pressure; a minimal sketch follows below), would it have genuine teleology? The "goals" wouldn't be any engineer's goals. They'd be whatever configurations happened to persist.
If that still doesn't count, then your criterion isn't "not externally designed." It's something else.
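To make "selection pressure with no designer-specified goals" concrete, here's a minimal sketch. Everything in it is hypothetical, and yes, the simulator itself is designed, which is partly the point in dispute; but note that no line of it says what a "good" genome looks like. The only filter is persistence.

```python
import random

def persists(genome, steps=50):
    """True if the agent keeps its 'energy' above zero for a whole episode."""
    foraging, upkeep_factor = genome
    energy = 1.0
    for _ in range(steps):
        energy += random.random() * foraging   # what it takes in
        energy -= 0.3 + 0.2 * upkeep_factor    # what its own structure costs
        if energy <= 0:
            return False                       # ceased to exist; leaves no copies
    return True

def evolve(pop_size=100, generations=200):
    population = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        survivors = [g for g in population if persists(g)] or population
        # Next generation: mutated copies of whatever persisted.
        # No fitness function, no objective, no engineer's goal anywhere.
        population = [
            [x + random.gauss(0, 0.05) for x in random.choice(survivors)]
            for _ in range(pop_size)
        ]
    return population
```

After a few hundred generations the surviving genomes lean toward high intake and low upkeep, but nobody wrote that objective down anywhere. It's just what persisted.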
In a living system, a need is not just a constraint in an optimization problem. If the need is not met, the system ceases to exist in that organized form, so it's a condition of continued existence for the system itself.
This is also true of my hypothetical self-maintaining machine. If it doesn't acquire energy, it ceases to exist in its organized form. If it doesn't repair damage, it degrades. Its continued existence depends on meeting these conditions. You might say: "But the machine doesn't care whether it continues existing." Does the bacterium? What evidence do you have that a bacterium has any experiential state corresponding to "caring"? You've stipulated that self-maintenance implies caring, but that's precisely what's in question.
Such a hypothetical robot with self-preservation code and a modeled consciousness is no different from a puppet with hidden strings. It would behave as if it cared, yet nothing in its organization depends on caring, for it depends on power, parts, and maintenance coming from outside.
Two problems here.
First: the "hidden strings" objection applies equally to brains. Your neurons fire according to electrochemical laws. Your behavior is determined by physics. If deterministic causation makes the robot a puppet, then you're a puppet too, just one whose strings are electrochemical gradients rather than code.
Second: you say the robot "depends on power, parts, and maintenance coming from outside." But so do you. You depend on food, water, oxygen, and ambient temperature coming from outside. You didn't create the matter that constitutes you. You maintain your organization by importing low-entropy matter from your environment, exactly like my hypothetical self-maintaining machine would.
The bacterium is also "maintained from outside" in the sense that it requires environmental inputs (glucose, water, appropriate temperature). The difference is that it processes those inputs into self-maintenance autonomously. But so would a sufficiently sophisticated machine.
Now my claim is not that carbon is magic; it's rather that a specific kind of self-producing, self-maintaining physical organization is required. And you can't just reduce that to any implementation of the same state machine.
Then, once again: name the physical requirements. What specific properties of biological organization are necessary?
If you can name them, then we can ask whether those properties could be instantiated in other substrates. If you can't name them, if your claim is just "whatever brains have, which I can't specify", then you don't actually have a criterion. You're just asserting, again, that biology is special.
I'm not claiming to know how biochemistry results in experience. My claim is that, whatever mechanism it is, it's tightly bound to the specific physical organization of living nervous systems.
This is an empirical claim. What's your evidence?
You might say: "Consciousness only appears in biological organisms." But that's an observation about the systems we've encountered so far, not a demonstration of necessary connection. Fire only appeared in lightning strikes and volcanoes until humans figured out how to make it. Flight only appeared in birds and insects until we built airplanes. The fact that consciousness has only been observed in biological systems doesn't establish that only biological systems can have consciousness. It establishes that biological systems are the only ones we've observed having consciousness *so far*.
And there really is no justification for the assumption that any functionally equivalent computation in any medium will inherit the same property.
Agreed. There's no justification for assuming it will. But equally, there's no justification for assuming it won't. You're treating carbon chauvinism as the default position that doesn't require argument. But it's a positive claim: "consciousness requires biological matter specifically." That claim needs support.
I'm saying that consciousness and agency are in the former group. And until you can actually show that consciousness is nothing but some informational pattern, this substrate "agnosticism" is merely an ungrounded assumption.
But you haven't shown that consciousness is in the digestion group either. You've asserted it. Your argument is: "consciousness appears in biological systems, therefore it probably requires biological systems." But that's induction from a sample that doesn't include any sophisticated artificial systems.
You claim my agnosticism is ungrounded. But your certainty is equally ungrounded. Neither of us has a theory of consciousness that explains why it arises in brains. You're betting that the explanation will turn out to involve biological specificity. I'm betting it will turn out to involve functional organization. Both are bets.
From this I conclude that it's speculative at best to claim that a circuit board system could also have it.
I agree it's speculative. I never claimed certainty. What I reject is your claim that it's incoherent or contradictory.
You titled your position "AGI is an incoherent notion." That's much stronger than "AGI is speculative" or "AGI is currently unsupported." Incoherence means logical impossibility, like a happy Null or a square circle. You haven't demonstrated logical impossibility. You've demonstrated that we lack a complete theory and that your intuitions favor biological necessity.
I don't see how your claim is modest. From ignorance (we don't know how) you assert a very strong universal (therefore any implementation of the right functional organization could do it).
I'm not asserting that universal. I'm asserting agnosticism about it. The strong universal would be: "Any functionally equivalent system definitely has consciousness." I do tend to assume that will probably turn out to be true, but I'm not claiming it universally. What I'm claiming is: "We don't know whether functionally equivalent systems would have consciousness, and we can't rule it out."
You're the one making the strong universal claim: "No non-biological system could possibly have consciousness." That's the claim I'm asking you to defend.
The ontological difference between a living teleological system whose organization only exists because it maintains itself and an artifact whose structure is imposed and maintained from the outside, to satisfy some external designer's criteria, is not trivial.
Agreed, it's not trivial. But you haven't shown that the dichotomy is exhaustive.
Consider a third category: an artificial system that maintains itself, reproduces itself, and whose "goals" emerged through selection rather than explicit design. This doesn't fit cleanly into either "living teleological system" or "artifact with external designer's criteria." Your dichotomy assumes these are the only two options.
And so far I haven't seen anyone make a serious argument defending the claim that consciousness/agency is identical to some formal/computational structure such that it can be reliably instantiated in any physically reasonable substrate.
This is true, and I won't pretend otherwise. But the same is true in reverse: no one has made a serious argument that consciousness requires biological matter specifically. Both positions are under-argued because we don't understand consciousness.
The difference is that I acknowledge this symmetry, while you seem to think your position is the safe default that doesn't require argument. It isn't. "Consciousness requires carbon" is a positive claim about the physics of consciousness. It needs support just as much as "consciousness is substrate-independent."
Let me ask you directly: if we built my hypothetical machine, and it had goals that emerged through artificial evolution (e.g. genetic algorithms), modeled itself and its environment, and exhibited flexible, context-sensitive behavior in service of self-maintenance, would it, in your opinion, have genuine teleology? If not, what's still missing? And how do you know a bacterium has that missing thing?