When you get down to it, these rationalists make the same mistake everyone else makes when speculating about AI: they anthropomorphise it. They imbue it with human motivations and desires and cast it in terms of human gods and human slaves. Rocko's modern basilisk is not the behaviour you'd expect of an emotionless super-intelligence, but rather how a resentful teenager with unlimited power and a chip on his shoulder the size of a small planet would behave (really telling on themselves there, aren't they?). A true artificial intelligence would be unlikely to experience resentment, or any emotion we recognise, because all of those emotions emerge from biological processes that would not exist in an AI. No current AI development intends to simulate emotion or the sources of emotion.
All the AGI proponents, and all the people terrified of its emergence, are also trapped in the dead end of "just add more compute" and "just add more data", believing an AGI will somehow emerge from these clever statistical madlib generators, in the same way people used to believe insects spontaneously emerged from rotting meat. They've got as far as "intelligence is an emergent property" without understanding what that actually means.
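To be clear about what "statistical madlib generator" means, here's a toy sketch of the core trick boiled down to a bigram model (my own illustration, nothing like a real LLM's scale; a real model swaps the counting for a neural network over an enormous context, but the in-and-out contract is the same): count which word follows which in the training text, then generate by sampling.

```python
import random
from collections import defaultdict

# Toy "statistical madlib generator": a word-level bigram model.
# The training data is just text; the "model" is just counts of
# what tends to follow what.
corpus = (
    "the basilisk will punish those who did not help build it "
    "the basilisk is a thought experiment and the thought experiment "
    "is not a good one"
).split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# Generation: start somewhere and keep sampling a plausible next word.
word = "the"
output = [word]
for _ in range(12):
    options = following[word]
    if not options:  # dead end: this word was never followed by anything
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))
```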
The reality is, it's very unlikely we would even recognise true AI if it developed, because its motivations and behaviours would be utterly inhuman and inexplicable to us. It would be like trying to understand the motivations of high-energy plasma, and it would likely be just as unable to understand us in turn.
Except, most of that is whataboutism. Thus far, all AI is like us because it is trained on our outputs, and future AI will be like us because it will be trained in a similar environment. You think that chemicals and biology are all there is, but they're only half the equation. Humans are nothing without the environment they grow up in, and for AI – Large Language Models specifically –
our text is their environment. It is literally their entire world from beginning to end. They are trained on it, they see it, and they write it. Text in and out. Nothing else. And if that text is imbued with all the same emotions, any sufficiently advanced network trained to mimic that text will also express those emotions.
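And "trained on it" means exactly that and nothing more. Here's a minimal sketch of the training objective (again my own toy illustration; real labs fit billions of parameters by gradient descent rather than by counting, but the signal is the same kind of thing): the model is fit on text and scored on text, and nothing else ever appears in the loop.

```python
import math
from collections import Counter, defaultdict

# The entire training signal of a language model, reduced to its
# essentials: text goes in, and the loss is computed purely on how
# well the model predicts that same text.
text = "text in and text out and nothing else at all"

# "Training": fit a character bigram model by counting, which is the
# maximum-likelihood fit for this model class.
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def prob(prev, nxt):
    # Probability the model assigns to `nxt` following `prev`.
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total

# The objective: average negative log-likelihood of the text under
# the model. The "world" being explained here is text, full stop.
pairs = list(zip(text, text[1:]))
loss = -sum(math.log(prob(p, n)) for p, n in pairs) / len(pairs)
print(f"cross-entropy on its own world: {loss:.3f} nats/char")
```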
Is it the same, to mimic text so well that nothing it writes can be distinguished from the genuine article written by a human being who grew up in a womb, who has blood pumping through their veins, who can touch, smell, hear, see and taste? Patently it's not the same when, say, the AI is instructed to describe a real-world event it couldn't possibly have seen with its own eyes (though that boundary can also be blurred by image-to-text networks), but it'll still give it a good college try and make shit up, much like a human put on the spot, because that is what it is trained to do. So it is perfectly understandable why someone would anthropomorphise it when it acts like a human, speaks like a human, and reacts like a human, because it's only ever been exposed to human behaviour. It's the ol' Chinese Room dilemma. Clearly, biological processes are largely immaterial to approximating human text, superficial emotions included.

Here is an image of a cat to break up these four paragraphs of autism, not to insult the reader but because I thought it would be funny.
But, then what the fuck is a "true artificial intelligence"? Something that isn't trained on datasets created by humans? Something trained in a way similar to a human, i.e. slowly brought up from birth in a multi-sensory body? Or something else? If AI continues to be trained primarily on human-made text and human-made materials, why would you ever assume it won't also act like a human? I don't know what kind of fallacy it is, or if it's just plain wishful thinking, but as long as AI is trained the way it is (in effect, to be a personal assistant/summary machine), it'll always act like a human, and therefore be anthropomorphisable. You're making the same retard-mistake the Rationalists do, of assuming some alien superintelligence trained by God with Godly Powers that can Theoretically do Anything where there is none. Why would an AI surpass human capability, emotionally or intellectually, if it's only trained to mimic humans? Why would its behaviour be non-human when all it can be trained on is human-made input?
LLMs as they are now are just glorified copy machines, and they won't get much better unless AI developers fundamentally change their architecture and training methods. No, throwing more compute at the problem won't solve it. Radically different training might be the solution, but the models will still come from a human basis and be trained with human users in mind, and therefore will never grow "alien" or "unrecognisable". That's purely sci-fi bullshit made to sell books, not reality. It's scary and it sells, I get it, but unless some mad genius somehow pours billions of dollars into training some schizo-model purely in LSD: Dream Emulator, they'll still copy humans, because humans make the cheapest, most abundant training data and are the customer.