To point out something so obvious that people will overlook it: of course "language models" will resemble humans, because they're built from the language of humans. Hypothetically, build an AI based on cat communication methods and see if it seems human in any way.
Aella's also making another logical error here (ironic for a rationalist to make so many), and it's a pretty obvious one. She expects machines to be different when she has no evidence they should be different. That goes double for machines created by humans, where interaction with humans is of prime concern and value to the people creating them, to the point that it's almost an inherent feature. I had to make the cat AI hypothetical above because there's a natural problem with a human creating a machine that cannot be understood by humans. It's nice to think we understand cats, but ultimately we don't, because no human has ever experienced being a cat to know how correct that understanding is to a cat. (Especially not otherkin.)
You'd think a rationalist (and a libertarian one at that) would be much more skeptical of basic premises and presumptions, especially common narratives that may not have any underlying foundation or direct evidence to support them. Instead, Aella almost seems like just another dumb chick on Twitter sperging her ignorance everywhere. That obviously can't be the case, but I can see why some might be misled and therefore belittle her significant scientific achievements.
You're spot on, and to add a little bit: language models like ChatGPT are even more uncanny valley than a lot of other machine learning applications. I don't do any research and I haven't worked on language models, but I have enough background to read academic papers, and I sometimes do for work. So I know enough to sperg about how dumb her statement is.
Think about an image recognition model. That makes a lot more intuitive sense for a computer to handle. When a dumb gay captcha asks you to click off a bunch of crosswalks, it's not too tough to describe how a computer program checks that you clicked the crosswalks: it does some normalization/preprocessing on the image, and once the image is reasonably clean, it knows a crosswalk will have a very specific shape and a pattern in its coloration, stuff like that.
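To make that concrete, here's a rough Python sketch of what I mean by "normalize the image, then check for a pattern." It's completely made up by me, not how any real captcha backend works, and it looks for crosswalk-style stripes in the dumbest possible way:

```python
import numpy as np

def normalize(tile: np.ndarray) -> np.ndarray:
    """Scale pixel values into [0, 1] so thresholds don't depend on exposure."""
    tile = tile.astype(float)
    return (tile - tile.min()) / (tile.max() - tile.min() + 1e-9)

def looks_like_crosswalk(tile: np.ndarray, min_stripes: int = 3) -> bool:
    """Count how many times the row brightness flips between light and dark."""
    norm = normalize(tile)
    row_means = norm.mean(axis=1)   # average brightness of each row
    is_light = row_means > 0.5      # threshold each row as light/dark
    flips = np.count_nonzero(is_light[1:] != is_light[:-1])
    return flips >= min_stripes     # lots of flips ~ painted stripes

# Fake 8x8 grayscale tile with alternating bright/dark bands, standing in for a photo.
stripes = np.tile(np.repeat([220, 40], 2), 2).reshape(-1, 1) * np.ones((1, 8))
print(looks_like_crosswalk(stripes))  # True
```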
Even for more complicated image recognition tasks, like recognizing whether a cat or a dog is in a picture, it still makes sense. We know that a cat's eyes should be about so big in relation to its face, it should have two ears, and those should have a certain shape and sit in a certain place relative to the eyes and the nose, and so on. It's pretty easy to understand the broad strokes of how a cat-recognizing program would be made. Actually making that program is way tougher, but you could explain how it works to a 5 year old.
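In the same spirit, here's a toy version of those hand-written rules. The feature dictionary is something I just invented to stand in for whatever an earlier step would spit out after finding the eyes and ears; real systems learn this stuff instead of hard-coding it, but the broad strokes can be explained like this:

```python
def looks_like_cat(features: dict) -> bool:
    """Hand-written rules of thumb: two ears, eyes roughly cat-sized, ears above the eyes."""
    if features["num_ears"] != 2:
        return False
    eye_to_face = features["eye_width"] / features["face_width"]
    if not (0.1 <= eye_to_face <= 0.3):   # eyes about "so big" relative to the face
        return False
    return features["ear_y"] < features["eye_y"]  # ears sit above the eyes in the image

print(looks_like_cat({
    "num_ears": 2, "eye_width": 30, "face_width": 150,
    "eye_y": 80, "ear_y": 20,
}))  # True
```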
All of that makes sense because pictures have a really straightforward way to be stored/represented in computers: just a big matrix of RGB numbers (or grayscale or whatever, but who gives a shit about that). So when they teach neural networks and deep learning, there are simple enough ways to show which part of the neural net recognizes a cat's ears, which part recognizes the tail and whiskers, and so on. For commercial applications, explainability is often pretty important. Computers work with numbers; at the most primitive level everything's 0s and 1s, high voltage or low/no voltage. All the stuff in those last couple of paragraphs applies to video and audio too; optics and acoustics are really old and well-understood fields of physics, AFAIK.
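To be literal about the "big matrix of RGB numbers" thing, here's what a tiny 2x3 image looks like to numpy:

```python
import numpy as np

# A 2x3 image where every pixel is three values in 0..255.
img = np.array([
    [[255,   0,   0], [  0, 255,   0], [  0,   0, 255]],   # red, green, blue
    [[255, 255, 255], [  0,   0,   0], [128, 128, 128]],   # white, black, gray
], dtype=np.uint8)

print(img.shape)   # (2, 3, 3): height, width, RGB channels
print(img[0, 2])   # [  0   0 255] -> the blue pixel
print(img.mean())  # one number summarizing brightness; it's all just arithmetic
```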
But for language, there isn't as straightforward a way to present words and sentences to a computer as there is for images. You'd want similar words/phrases to be numerically close to each other, but when the model sees "dog", it also needs to understand the context. And what about sarcasm, or a dozen other things? Humans are pretty unique in having language skills, and there's a ton of work that's been done making computer language models more sophisticated, and that work is legitimately impressive. But again, it all boils down to numbers. So what ChatGPT is doing is way different from how people develop and understand language.
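Here's a toy illustration of the "similar words should be numerically close" part. The vectors are numbers I made up by hand, not from any real model, but real word embeddings work on the same principle, just with hundreds of dimensions:

```python
import numpy as np

# Hand-made "embeddings" just to show the idea of numeric closeness.
vectors = {
    "dog":   np.array([0.90, 0.80, 0.10]),
    "puppy": np.array([0.85, 0.75, 0.15]),
    "stock": np.array([0.05, 0.10, 0.95]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: near 1.0 means pointing the same way, near 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["dog"], vectors["puppy"]))  # ~0.99, close together
print(cosine(vectors["dog"], vectors["stock"]))  # ~0.19, far apart
# Real models also have to handle context ("hot dog" vs "walk the dog"),
# but under the hood it's still just vectors of numbers like these.
```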
I mean, ChatGPT is genuinely neat and fun to play around with. The progress that's been made over the iterations of the GPT model is really impressive. But claiming it works similarly to the human brain is just so fucking retarded.