This is a retarded argument, on the level of saying people can't fly because if God had wanted us to fly he'd have given us wings. It's pure cope. The fact is, there is literally nothing stopping someone from making an AI that is in every respect indistinguishable from a human. You would not be able to tell it wasn't human, because it would be that good at everything it does. There is little functional difference between replicating the outcomes of a human's actions and actually simulating the soul of our thought process; it's just a more sophisticated version of how AI art can be good enough that we can't tell it apart from human art. This isn't materialism at all, it's just a reflection of the fact that the vast majority of our thought processes can be mathematically defined.
At this rate, AGI is going to happen; it doesn't matter what form it takes. And we are fucked every single way once it emerges.
Explain to me the Chinese Room thought experiment in your own words, if it's so retarded. Explain how you can build an artificial Dasein (Heidegger's term for the kind of being that exists by already making sense of a world) without real comprehension at any level, without any central, foundational ground of awareness and understanding from which to make sense of any particular thing you try to teach it.
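Since apparently it needs spelling out: the whole room fits in a few lines of Python. This is a toy sketch, nothing more; the rulebook entries are invented, and a convincing room would need an astronomically larger table, but the structure is the entire point:

# A minimal Chinese Room: a rulebook mapping input symbols to output
# symbols by pure lookup. Nothing in this process understands Chinese.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    # The person in the room matches shapes against the rulebook and
    # copies out the answer; no step involves knowing what they mean.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(room("你好吗？"))  # a fluent reply, zero comprehension

Swap the lookup table for a trillion learned weights and you've changed the size and compression of the rulebook, not the absence of comprehension.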
You can't "replicate the outcome of a human's actions", because humans act on the basis of deductions, inductions, and inferences from concepts that they understand, that is, from mental divisions they make out of raw experience. No machine can organically make those divisions, in part because machines lack the raw material of experience.
In a human being, it goes something like this:
- Sensory input through nerves;
- Synthesis of that input into raw experience;
- Organization of that raw experience into a comprehensible world-view through various ideological and perceptual heuristics.
AI tries to skip step two, without which step three is impossible. Computers are machines that manipulate human-defined symbols; they only ever piggyback off of our own world-experience. You can't go straight from sensory input to heuristics without first having a "world" that you're in and can make sense of.
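To make that concrete, here is the same three-step picture as stub code; the function names and values are mine, purely illustrative, not any real system's API:

# The human pipeline as described above, with the skipped step marked.
def sense() -> list[float]:
    # Step 1: raw sensory input through nerves (faked here as numbers).
    return [0.2, 0.7, 0.1]

def synthesize(signals: list[float]) -> str:
    # Step 2: fusing raw signals into lived experience. This is the step
    # the argument says no machine performs, because there is no subject
    # for whom the signals are an experience at all.
    return "a warm red blur off to the left"

def organize(experience: str) -> str:
    # Step 3: heuristics that sort experience into a world-view.
    return f"judgement: {experience} is probably a fire"

print(organize(synthesize(sense())))

# An LLM, on this account, starts from symbols that already encode *our*
# step two (text humans wrote about their own experience) and never
# performs steps one or two itself.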
You are making uneducated philosophical assumptions about the materiality of the mind (as well as about what computers can do) that many people in this industry do not make, because it's something they have had to seriously contend with. They teach this to college freshmen.
You can't build an LLM that even realistically approaches the mind of a toddler, let alone one that can handle every possible real-world situation you could throw that toddler into. It's all dominoes.