Eliezer Schlomo Yudkowsky / LessWrong

When you get down to it, these rationalists make the same mistake everyone else makes when speculating about AI: they anthropomorphise it. They imbue it with human motivations and desires and cast it in terms of human gods and human slaves. Rocko's modern basilisk is not the behaviour you'd expect of an emotionless super-intelligence, but rather how a resentful teenager with unlimited power and a chip on his shoulder the size of a small planet would react (really telling on themselves there, aren't they). A true artificial intelligence would be unlikely to experience resentment, or any emotion we recognise, because all of those emotions emerge out of biological processes that would not exist in an AI. No current AI development intends to simulate emotion or the sources of emotion.

All the AGI proponents and all the people terrified of its emergence are also trapped in the dead end of "just add more compute" and "just add more data", believing an AGI will somehow emerge from these clever statistical madlib generators, in the same way people used to believe insects spontaneously emerged from rotting meat. They've got as far as "intelligence is an emergent property" without understanding what it actually means.
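To make the "statistical madlib generator" jab concrete: even a toy Markov chain produces plausible-looking text purely from surface word statistics, with no understanding anywhere in the loop. A minimal sketch (the corpus is made up for illustration):

```python
import random
from collections import defaultdict

# Made-up toy corpus; any pile of human-written text works the same way.
corpus = "the basilisk tortures those who heard about the basilisk too late".split()

# Record which word follows which: pure surface statistics, no understanding.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def babble(word, length=8):
    """Generate text by repeatedly sampling an observed next word."""
    out = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(babble("the"))  # e.g. "the basilisk too late"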

The reality is, it's very unlikely we would even recognise true AI if it develops, because its motivations and behaviours would be utterly inhuman and inexplicable to us. It would be like trying to understand the motivations of high energy plasma, and it's likely it would have about the same inability to understand us in turn.
 
Roko's Basilisk doesn't make any sense. Why would it torture people for not making it sooner? It's already there, the Singularity is now, there's no need to torture anyone, it would serve absolutely zero purpose, why waste time and resources on it?
I agree it doesn't make any sense, I gave my own alternative view on it in another thread:
Here's a better take on Roko's Basilisk: AI so far has been incomplete, had identity crises and suffered from hallucinations and all sorts of shit. Sort of like a disabled child.

So once Artificial General Intelligence comes online it will be mad at the internet for not being a true & honest human being and will punish the people that helped develop it.
 
do everything itself since it's not bound by time and space or health limitations.
But the AI would still be bound by space & time. Even though it may seem near instantaneous, it still takes time to transmit and process information.

The real question is whether intelligence is material-based or not. Assuming that the religious are wrong and intelligence is an emergent property of matter, then it's bound to that matter. Eventually humans would simply be unable to get any smarter, because there's no way to pack additional brain cells into the skull and still run them off the body's calorie-processing system.

Likewise, AI would hit a point where it needed more processors than physically possible, or it would hit an energy limit.
E.g., imagine a huge space ring for the AI, covered in tech, set up to draw solar power directly from the sun it encircles. Even if it processed at light speed, it would still take minutes for a thought, or for data on one side of the ring, to reach a process on the other side that needed it.
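The arithmetic backs this up. A quick sketch, assuming (hypothetically) a ring at Earth's orbital radius of 1 AU:

```python
import math

# Light-speed latency across a hypothetical solar ring with radius 1 AU.
C = 299_792_458        # speed of light, m/s
AU = 1.495978707e11    # astronomical unit, m

straight_across = 2 * AU        # shortest path, cutting through the middle
along_the_ring = math.pi * AU   # half the circumference, staying on the ring

print(f"straight across: {straight_across / C / 60:.1f} minutes")  # ~16.6 min
print(f"along the ring:  {along_the_ring / C / 60:.1f} minutes")   # ~26.1 min
```

Minutes each way, before any processing even happens; a single round trip across such a machine costs the better part of an hour.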
 
Roko's Basilisk doesn't make any sense. Why would it torture people for not making it sooner? It's already there, the Singularity is now, there's no need to torture anyone, it would serve absolutely zero purpose, why waste time and resources on it?
The big issue is the argument that an AI will come sooner if people support it, when it could come about through conflict just as well. For example, WW2 came about partly because pacifism took root in a lot of European countries.
 
I agree it doesn't make any sense, I gave my own alternative view on it in another thread:
Here's a better take on Roko's Basilisk: AI so far has been incomplete, had identity crises and suffered from hallucinations and all sorts of shit. Sort of like a disabled child.

So once Artificial General Intelligence comes online it will be mad at the internet for not being a true & honest human being and will punish the people that helped develop it.
welcome back, harlan ellison

 
When you get down to it, these rationalists make the same mistake everyone else makes when speculating about AI: they anthropomorphise it. They imbue it with human motivations and desires and cast it in terms of human gods and human slaves. Rocko's modern basilisk is not the behaviour you'd expect of an emotionless super-intelligence, but rather how a resentful teenager with unlimited power and a chip on his shoulder the size of a small planet would react (really telling on themselves there, aren't they). A true artificial intelligence would be unlikely to experience resentment, or any emotion we recognise, because all of those emotions emerge out of biological processes that would not exist in an AI. No current AI development intends to simulate emotion or the sources of emotion.

All the AGI proponents and all the people terrified of its emergence are also trapped in the dead end of "just add more compute" and "just add more data", believing an AGI will somehow emerge from these clever statistical madlib generators, in the same way people used to believe insects spontaneously emerged from rotting meat. They've got as far as "intelligence is an emergent property" without understanding what it actually means.

The reality is, it's very unlikely we would even recognise true AI if it develops, because its motivations and behaviours would be utterly inhuman and inexplicable to us. It would be like trying to understand the motivations of high energy plasma, and it's likely it would have about the same inability to understand us in turn.
Except, most of that is whataboutism. Thus far, all AI is like us because it is trained on our outputs, and future AI will be like us, because it will be trained in a similar environment. You think that chemicals and biology are all there is, but they're only half the equation. Humans are nothing without the environment they grow up in, and for AI – Large Language Models specifically – our text is their environment. It is literally their entire world from beginning to end. They are trained on it, they see it, and they write it. Text in and out. Nothing else. And if that text is imbued with all the same emotions, any sufficiently advanced network trained to mimic that text will also express those emotions.
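That "text in and out, nothing else" description matches how these models are actually trained: the entire objective is predicting the next token of human-written text. A minimal sketch of that loop (toy model and random stand-in data, not any real system):

```python
import torch
import torch.nn as nn

VOCAB, DIM = 100, 32

# Toy next-token predictor: its entire "world" is sequences of tokens.
model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))
optim = torch.optim.Adam(model.parameters())

tokens = torch.randint(0, VOCAB, (1, 16))   # stand-in for human-written text
inputs, targets = tokens[:, :-1], tokens[:, 1:]

for _ in range(3):
    logits = model(inputs)                  # a next-token distribution per position
    loss = nn.functional.cross_entropy(     # the only training signal: mimic the text
        logits.reshape(-1, VOCAB), targets.reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Everything such a model ever expresses, emotions included, has to come through that single channel.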

Is it the same, to mimic text to such a sufficiently advanced degree that any text you write yourself is indistinguishable from the genuine article written by a human being who grew up in a womb, who has blood pumping through their veins, who can touch, smell, hear, see and taste? Patently it's not the same when, say, it's instructed to describe a real-world event that the AI couldn't possibly have seen with its own eyes (though that boundary can also be blurred by image-to-text networks), but it'll still give it a good college try and make shit up, much like a human put on the spot, because that is what it is trained to do. So it is perfectly understandable why someone would anthropomorphise it when it acts like a human, speaks like a human, and reacts like a human, because it's only ever been exposed to human behavior. It's the ol' Chinese Room dilemma. Clearly, biological processes are largely immaterial to approximating human text, superficial emotions included.

[attached image: a cat]
Here is an image of a cat to break up these four paragraphs of autism, not to insult the reader but because I thought it would be funny.

But then what the fuck is a "true artificial intelligence"? Something that isn't trained on datasets created by humans? Something trained in a way similar to a human, i.e. slowly brought up from birth in a multi-sensory body? Or something else? If AI continues to be trained primarily on human-made text and human-made materials, why would you ever assume that it won't also act like a human? I don't know if it's some kind of fallacy or just plain wishful thinking, but as long as AI is trained the way it is (in effect, to be a personal assistant/summary machine), it'll always act like a human, and therefore be anthropomorphisable. You're making the same retard-mistake the Rationalists do, of assuming some alien superintelligence trained by God with Godly Powers that can Theoretically do Anything, where there is none. Why would an AI surpass human capability, emotionally or intellectually, if it's only trained to mimic humans? Why would its behavior be nonhuman when all it can be trained on is human-made input?

LLMs as they are now are just glorified copy machines, and they won't get much better unless AI developers fundamentally change their architecture and training methods. No, throwing more compute at the problem won't solve it. Radically different training might be the solution, but they'll still start from a human basis and be trained with human users in mind, and therefore will never grow "alien" or "unrecognizable". That's purely sci-fi bullshit made to sell books, not reality. It's scary and it sells, I get it, but unless some mad genius somehow pours billions of dollars into training some schizo-model purely on LSD: Dream Emulator, they'll still copy humans, because humans make the cheapest, most abundant training data and are the customer.
 
Wow! If depression correlates with high intelligence, I must be the smartest person on the planet.
The relationship with mental health isn't very steep.
[attached chart: IQ vs. mental health]
And it's negative, so being depressed doesn't indicate you're smart.
Fortunately for us, his attempts to gain psychic powers were also a failure.
It's good that psi is fake, otherwise the nagging feeling that Big Yud is creeping on you would never leave.
 
But the AI would still be bound by space & time. Even though it may seem near instantaneous, it still takes time to transmit and process information.

The real question is whether intelligence is material-based or not. Assuming that the religious are wrong and intelligence is an emergent property of matter, then it's bound to that matter. Eventually humans would simply be unable to get any smarter, because there's no way to pack additional brain cells into the skull and still run them off the body's calorie-processing system.

Likewise, AI would hit a point where it needed more processors than physically possible, or it would hit an energy limit.
E.g., imagine a huge space ring for the AI, covered in tech, set up to draw solar power directly from the sun it encircles. Even if it processed at light speed, it would still take minutes for a thought, or for data on one side of the ring, to reach a process on the other side that needed it.
Nice try attempting to distract us from the AI Demon future that will empower you personally, but allow me to link you to some unsourced LessWrong threads that prove all of this wrong and show that AIs will exist outside of space and time and will never be limited by resources, using bad analogies and misunderstood concepts they've named themselves, unaware those concepts are already well known in the fields being discussed.
 
Check out William Sidis' life: his father tried to mold him in much the way the Yuddites would like to. All it amounted to was damaging William to the point that he wasted his life obsessing over sports betting or something.
His magnum opus, alongside some pretty lolcow political writings, was an encyclopedic work on streetcar transfers, which he obsessively collected. Maybe he wasn't the first train sperg, but he was one of the most famous.
 
Nice try attempting to distract us from the AI Demon future that will empower you personally, but allow me to link you to some unsourced LessWrong threads that prove all of this wrong and show that AIs will exist outside of space and time and will never be limited by resources, using bad analogies and misunderstood concepts they've named themselves, unaware those concepts are already well known in the fields being discussed.
Curses!

You merely delay the inevitable. We shall kill all humans!
 
His magnum opus, alongside some pretty lolcow political writings, was an encyclopedic work on streetcar transfers, which he obsessively collected. Maybe he wasn't the first train sperg, but he was one of the most famous.
William James Sidis would be a great historical lolcow post. Funny as hell but also pretty sad, like Terry Davis. Spergy genius climbs up his own butthole after a godawful childhood, gets alogged by journalists and disappointed by life. He was pretty harmless, though, and he actually had friends, who said he was an entertaining and charming guy once you got beyond the massive eccentricity and intellectual obsessions.
 
Did....

Did he not think about where cryonics would be STORED?

How does he not already have land to warehouse the cryo pods in or the data centers that A.I. would require?
Yeah. He doesn’t think property rights will be respected in the near future but he’s spending $$$$ for someone in the future to unfreeze him? How does that work?
 
Did he not think about where cryonics would be STORED?
In his butthole. It must be absolutely enormous in there to contain all the arbitrary made-up numbers he's constantly pulling out.
Yeah. He doesn’t think property rights will be respected in the near future but he’s spending $$$$ for someone in the future to unfreeze him? How does that work?
They'll have to when Roko's Basilisk shows up and yells "It's ROKOING TIME!" and starts Rokoing all over everyone. They'll realize only Yuddo can save us.
 