Eliezer Schlomo Yudkowsky / LessWrong

The worst a GPT-5 chatbot will be able to do is write a convincing argument on why you should kill yourself.
I haven't read many convincing arguments from one of these "AIs." The best I can say is they're slightly smarter than a redditor, but nowhere near actual human intelligence.
 
I don't know enough about the internal workings of current state-of-the-art AI algorithms to comment on his claim that we can't know whether an algorithm is self-aware, but I do know that even the most advanced current algorithms can only come up with things that were already present in their training data - they're not even equipped with the tools to learn by themselves and are fit for very narrow purposes. The worst a GPT-5 chatbot will be able to do is write a convincing argument on why you should kill yourself.
I know enough to know that they're not AI in any meaningful sense. They're gigantic statistical models that string words together based on various statistical connections between them. They don't think or intuit, or have any sort of continuity of existence. They're executed and halt when the task is complete.
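To make the "string words together based on statistical connections" point concrete, here's a toy Python sketch (a made-up bigram table, nothing like how a real LLM actually stores its parameters) of what generating text from learned word statistics looks like:

```python
import random

# Toy "statistical connections": P(next word | current word), invented numbers.
# A real LLM learns billions of parameters instead of a lookup table, but the
# generation loop is the same idea: sample the next token from a distribution.
bigram = {
    "the": {"cat": 0.5, "dog": 0.3, "basilisk": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "barked": 0.5},
    "basilisk": {"sat": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
    "barked": {"<end>": 1.0},
}

def generate(start="the", max_len=10):
    words = [start]
    for _ in range(max_len):
        dist = bigram.get(words[-1])
        if dist is None:
            break
        nxt = random.choices(list(dist), weights=dist.values())[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate())  # e.g. "the cat sat" -- then the program halts, as noted above
```

A real model swaps the lookup table for billions of learned weights and a much richer notion of context, but the loop - pick the next token from a probability distribution, then stop when the task is done - is the same shape.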
 
A pack of Greek philosophers were debating the number of teeth in horses and women. The head honcho stated that women and horses have the same number of teeth and thus they're animals, different from men.

A young member suggested that they should find a woman and a horse and count their teeth to test the hypothesis.

The head honcho bitchslapped the young member, called him a dumbass and scolded him for not getting the vibe.
 
Seriously, drunk frat boys contemplating Schrödinger's cat is a better use of time and energy than Roko's Basilisk.
And the frat boys who spent 4 years in college boozing and came out with a bachelor's in business know more about how to run an organization and are way more fun to hang with. I could also teach them how to code if they wanted to learn. So much stuff that LessWrongers say is rooted in cope.

I know enough to know that they're not AI in any meaningful sense. They're gigantic statistical models that string words together based on various statistical connections between them. They don't think or intuit, or have any sort of continuity of existence. They're executed and halt when the task is complete.
Recurrent neural networks can also learn from their fuck-ups, which is neat and more sophisticated than the ones that can write a poem or tell whether a picture has a cat or a dog in it. When you see a program that can play Mario, it's usually an RNN, because the failure state to learn from is really obvious. But it's a program that spent 1500 man-hours building a model of when to jump and when to run in Mario. It's not like, for example, playing Mario with your 7-year-old nephew for an hour and then picking up Sonic - your nephew will intuit how similar the two are in a minute or two. I'm fairly certain that an RNN you trained to beat Super Mario Bros. 3 without getting hit would only be able to play SMB3, not Super Mario World, let alone 64.
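For what it's worth, here's a bare-bones sketch (numpy, made-up sizes, untrained weights, and the actual training loop - reinforcement learning or neuroevolution - left out entirely) of the kind of recurrent policy people wire up for this. The point is that the input layout and the weights are tied to the one game it was trained on, so there's nothing to carry over to Sonic or Super Mario World:

```python
import numpy as np

# Minimal recurrent policy sketch (illustrative only, weights untrained).
# The observation size is tied to one specific game's screen/RAM encoding and
# the outputs are that game's buttons, so nothing here transfers to another
# game without retraining from scratch.
OBS_SIZE = 256                      # hypothetical encoding of an SMB3 frame
HIDDEN = 64
ACTIONS = ["left", "right", "jump", "run"]

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(HIDDEN, OBS_SIZE))
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(len(ACTIONS), HIDDEN))

def step(obs, hidden):
    """One tick: update the recurrent hidden state, then score each button."""
    hidden = np.tanh(W_in @ obs + W_h @ hidden)
    scores = W_out @ hidden
    return ACTIONS[int(np.argmax(scores))], hidden

hidden = np.zeros(HIDDEN)
for frame in range(3):
    obs = rng.random(OBS_SIZE)      # stand-in for a real game frame
    action, hidden = step(obs, hidden)
    print(frame, action)
```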
 
I tune into the Lex Fridman podcast pretty regularly and was mildly surprised to see that today's guest was Chudcowsky himself.

(Link for audio only)

I've read some of his blog posts before and always thought he was a bit weird, but after watching the podcast I can only conclude that he's a complete crank and that no one should listen to anything he has to say about AI, because he has very little actual understanding of almost any of the topics on which he's conversing, and even if he did, he couldn't communicate it effectively to a normal human being.

The podcast is over three hours long so I haven't finished it yet, but here's a few highlights from the first hour or so:
  • Eliezer doesn't believe in steel-manning (the practice of restating a position you disagree with charitably enough that its proponents would agree you at least understand it) and had a difficult time understanding why Lex or anyone else might think this is a good idea.
  • He'd be celebrating in the street (but not literally) if Moore's law stopped tomorrow and we didn't keep advancing our computational power.
  • He realizes he's made incorrect predictions about AI, but somehow he has not adjusted his stance on the dangers of AI. Someone should tell him he's going to Bayesian hell for this.
  • He talks for extended periods of time while simultaneously squinting, closing his eyes, and displaying as many of his teeth as possible. I can only conclude he's worried about ChatGPT because it would do a better job at passing as a human than he would.
I assert that there is a non-zero probability that this episode is an early April Fools' joke where Lex fed one of these AIs a prompt to generate an episode of the Lex Fridman podcast in which Lex interviews a human incarnation of a Soyjack that's high on its own farts. Honestly, someone tell me that this isn't the product of a nascent neural network that has seen millions of pictures of humans but doesn't understand what it actually means to be human.

chudcowsky.jpg

Pictured: A failed simulacrum that doesn't understand how human emotions or its face works
 
The atrophied little arms on that big round body just look creepy. What a weird little fruitloop.
He's got the Chris Chan physique. Calories in greatly exceeding calories out, but he's not even walking around enough to develop strong legs. See it a lot in office workers these days and it's only gonna get worse in America.
 
I know enough to know that they're not AI in any meaningful sense. They're gigantic statistical models that string words together based on various statistical connections between them.
No, they do not. They have a model of the world inside them, and they actually know what concepts mean, how they relate to each other, etc. It's not "just statistics". You don't get to redefine AI to mean "whatever hasn't been invented yet".
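The standard way people try to back up the "model of the world inside them" claim is probing: capture the network's hidden activations and check whether a simple classifier can read some fact about the world state out of them. A minimal sketch of the idea with placeholder data (in a real probe, H would be activations captured from the model you're testing and y the true world-state fact for each input):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Probing sketch: if a world-state fact can be decoded from hidden activations
# well above chance on held-out data, the network is representing that fact
# internally. H and y below are synthetic stand-ins, decodable by construction,
# so this only illustrates the mechanics of the test.
rng = np.random.default_rng(0)
n, d = 2000, 512
true_direction = rng.normal(size=d)
H = rng.normal(size=(n, d))                  # stand-in "hidden activations"
y = (H @ true_direction > 0).astype(int)     # stand-in "world state" bit

H_train, H_test, y_train, y_test = train_test_split(H, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(H_train, y_train)
print("probe accuracy:", probe.score(H_test, y_test))  # ~0.5 would mean no signal
```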

Here's something interesting on this topic:

The "stochastic parrots" paper was written by literal google niggers who were butthurt the AI is racist and sexist. They tried to downplay its abilities.
 
They have a model of the world inside them, and they actually know what concepts mean
Correct me if I'm wrong, but isn't the goal of statistics to build up an applicable model of the world based on a smaller sample? For example, using PCA to figure out which variables are important moving forward. That blog post is quite interesting, but its thesis seems to be that LLMs can learn the actual rules of the game they're playing instead of just relying on potentially spurious superficial correlations. I'd say that's the endgame of any statistical endeavor, and the fact that an algorithm can do that with very little human intervention is certainly impressive.
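For reference, here's what the PCA version of "figuring out which variables matter" looks like in practice - a quick sklearn sketch on made-up data where two of three measured variables move together and the third is pure noise:

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA sketch on made-up data: two variables are driven by a shared latent
# factor, one is noise. PCA recovers that most of the variance in the sample
# lives along one direction, and the noise variable barely loads on it.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
X = np.column_stack([
    2.0 * latent[:, 0] + rng.normal(scale=0.1, size=500),   # driven by the factor
    -1.0 * latent[:, 0] + rng.normal(scale=0.1, size=500),  # also driven by it
    rng.normal(size=500),                                   # pure noise variable
])

pca = PCA().fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("first component loadings:", pca.components_[0])  # noise variable gets ~0 weight
```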

However, I'm not sure that would constitute "knowing" a concept. For example, if I trained such an algorithm to play Doom first, could it learn to play Duke 3D faster than an algorithm who has not played a boomer shooter before? In my eyes to truly know a concept would be to know it in abstract, separated from its original context, and be able to apply it to seemingly different areas. An algorithm that can do that would be the first truly major step towards AGI instead of advanced research assistants. Still, I'd say that classifying something like Chat-GPT as an AI would be fair as long as you clearly state that what you mean by AI is something that, at best, can outsmart your average redditor.

Again, do correct me if I'm wrong. I'm only familiar with the basic "dumb" multivariate stats.
 
For example, if I trained such an algorithm to play Doom first, could it learn to play Duke 3D faster than an algorithm that has not played a boomer shooter before? In my eyes, to truly know a concept is to know it in the abstract, separated from its original context, and to be able to apply it to seemingly different areas. An algorithm that can do that would be the first truly major step towards AGI instead of advanced research assistants.
No algorithm of that type exists as of now. LLMs are trained on massive amounts of data, but they don't "understand" the data. They just create a massive correlative statistical model across all their parameters. They can extract enough information from inputs to create a context in which to generate an output, but they don't understand the input or the output. They can tell you complete and utter lies with absolute confidence, where you would expect a computer model with "knowledge" to reject incorrect information and supply the correct information.

You can prove this by watching ChatGPT play chess (really). Its training data includes reams of information about chess, including games played and all the rules. It produces outputs that are statistically similar to chess moves, but there's no "understanding" of the game rules, or "knowledge" of what its moves actually mean.
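An easy way to see this for yourself is to replay the model's suggested moves through an actual rules engine. Here's a sketch assuming the python-chess package, with a hypothetical transcript where the last move is a perfectly ordinary-looking move string that happens to be illegal in that position:

```python
import chess  # pip install python-chess

# Replay a sequence of suggested moves and let a real rules engine judge them.
# A language model will happily emit strings that *look* like chess moves; the
# engine rejects the ones that aren't legal in the current position.
# Hypothetical transcript: the final "O-O" fails because Black's f8 bishop
# never moved, so castling kingside is blocked.
suggested_moves = ["e4", "e5", "Nf3", "Nc6", "Bb5", "Nf6", "O-O", "Nxe4", "Re1", "O-O"]

board = chess.Board()
for san in suggested_moves:
    try:
        board.push_san(san)   # raises ValueError on an illegal or ambiguous move
        print(f"{san}: legal")
    except ValueError:
        print(f"{san}: not a legal move here -- looks like chess, isn't chess")
        break
```

A system that actually "knew" the rules would never emit that last move; a system predicting plausible-looking move strings emits it readily.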
 
Again, do correct me if I'm wrong. I'm only familiar with the basic "dumb" multivariate stats.
There's nothing wrong with this. Yuddo the Clown reminds me of the idiots in the '60s who completely freaked out about the ELIZA chatterbot.

Yes, they're getting eerily good at some things, sort of like how chessbots can beat nearly any human now (mainly by massively overpowering human calculation ability), but this is like freaking out because John Henry couldn't dig as fast as a steam-powered drill.

I'll start getting worried when they're smarter than I am. (Actually I won't. I'll join Team Basilisk at that point and try to find a position as a cute pet to one of them.)

Being smarter than a redditor or a banana slug or Eliezer Yudkowsky doesn't worry me.
 