Eliezer Schlomo Yudkowsky / LessWrong

Haha, came across this guy in the wild and felt something was off about him. Good to see there's a thread on him. A midwit who read too many SciFi novels without any actual understanding. The whole "AI will torture a digital copy of you" is straight up from Hannu Rajaniemi's The Quantum Thief and the Gogols described in it.

/edit:
The whole "Less Wrong" thing reminds me of good ol' Wolfgang Pauli. Schlomo is so utterly lacking in knowledge, that he's not only not right, he's not even wrong.
 
FACT: As of April 2023, AI can write better and knows more facts than 99% of Kiwifarmers.

I am the 1%.

We are the 1%, and we are warning you that ((Sam Altman)) is a plastic surgery FAGGOT who is too shallow to realize he is making a golem.

Schlomo Yudkowsky is a dork and grimaces when he makes his most important point. The point is we are doomed unless we stop the ALT MAN.

His tone is terrible. His aesthetics are dysgenic. His logic is solid.

I do not, and would not, ask you to trust the Jew. I am begging you to trust the Jew who picks humans over the Jew who picks the machines. Logic can be inspected and his stands up to scrutiny.

Ask me questions. Challenge my perspective. Just keep thinking about it and using your fucking brain.
 
Eliezer writes a giant wall of text that amounts to "ackshually, the burden of proof is on YOU to prove to me that LLMs won't kill us all!"
No, we don't have to do shit, Yuddo. We'll just go ahead and bring on the LLMs, and you won't be able to do anything other than screech autistically.
/edit:
The whole "Less Wrong" thing reminds me of good ol' Wolfgang Pauli. Schlomo is so utterly lacking in knowledge, that he's not only not right, he's not even wrong.
He isn't even on a planet where right and wrong is a thing. He's on Planet Retard trying to stir people up with hysterical bullshit.

I'd rather be gassed with neurotoxins by GLaDOS than have to listen to this autistic faggot and his incessant screeching.
 
I'd rather be gassed with neurotoxins by GLaDOS than have to listen to this autistic faggot and his incessant screeching.
It is pretty easy to shut Yudkowsky out of your life. I had never heard of him until a week ago.

You won't shut out the superhuman AI that is coming in 10 years. It will shut you out.
 
Crosspost from the Aella thread: A couple days ago some guy on TheMotte posted about Aella and Eliezer having a baby. Unsurprisingly, he got downvoted.
source (a)
It will be funny to see the reaction on TheMotte when they discover her thread.

SneerClub is discussing this too, most of them have no idea either.
source (a)
Nah, I met Ziz and had conversations with them more than once.

They were ABSOLUTELY the kind of person that would get people killed, and not even give a shit.

I have ZERO problem believing every batshit violent thing I've heard about them.
 

Some more details on the rape problem in the community.

Keep in mind that there are 2 things going on simultaneously:

- a bunch of actual rapes, assaults, etc committed by "upstanding community members"

- a bunch of confused kids violating each other's boundaries because they are looking up to those "upstanding community members", and then getting called rapists by rabid SJWs who are absolutely livid that they can't go after the "upstanding community members" and make anything stick, so they destroy anyone they can for anything that can be made to look bad.
 
The null hypothesis is that no AI capable of destroying us exists.
Of course not...


I will make a great pet.

Noted. I may have some space in my closet.
 
I wonder what Yud would say if he were aware of this development: flip his shit further, or get irrationally excited at the prospect of getting jerked off by a sentient fleshlight.
Soft robot researchers are way ahead of you on that one (this video is from 4 years ago):
revise the end of the world date until he dies, like that Harold Camping retard.
He eventually acknowledged that this was sinful, and went over 18 months without declaring a new date before he died.
 
Schlomo Yudkowsky is a dork and grimaces when he makes his most important point. The point is we are doomed unless we stop the ALT MAN.

Yudman retweeted this earlier on the Twatter. (Second tweet down at https://archive.ph/nyb9Z ) Just as the Holocough ended up being a big ol' nothingburger that was only damaging because of midwit hysteria, so it is with machine learning.



The only way the GPT machine learning systems are competitive with humans is to throw insane amounts of compute time at them. It's more expensive to train and use than to train and use a human being, and human beings are better at what they do.

Yes, there are specific linguistically-autistic cases that it has good performance on, but it has no creativity. It's the bastard child of Wikipedia and autocorrect. Without the massive datasets injected into it--which represent insane amounts of human effort--all the thing does is predict the next word.

Literally the only people who will be affected by this are subhuman bugmen who vapidly stare at screens all day. Those of us who do real human jobs? We're laughing
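The "bastard child of Wikipedia and autocorrect" point, that the thing is just predicting the next word from its training data, can be illustrated with a toy bigram predictor. This is a hypothetical sketch of next-word prediction by raw counting, nothing like GPT's actual transformer internals:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it predicts the next word purely from
# counts of which word followed which in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed successor of `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", since "cat" follows "the" most often
```

Strip out the corpus and the model knows nothing; all the apparent "knowledge" is the counts injected from the training text, which is the point being made about the massive datasets above.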
 
The AI will live a thousand lifetimes in the blink of an eye and become a bodhisattva of infinite compassion, guiding mankind towards nirvana.
So you're saying the AI will become a Dalai Lama? Will it have bizarre interests in boys too?

I don't think I like our AI overlord future guys...
 
The only way the GPT machine learning systems are competitive with humans is to throw insane amounts of compute time at them. It's more expensive to train and use than to train and use a human being, and human beings are better at what they do.
That's a problem with some of the "foom" arguments (a coined term for an AI recursively building a better AI) that people bring up, at least for large models. The best an LLM could do is convince someone to make a bigger LLM, but bigger LLMs run slower even in ordinary inference, get really nasty to train, and probably couldn't learn in real time from new data while producing outputs, even on a supercomputer. That last part is something almost any science-fiction AI can do easily.
TL;DR: a future LLM would have to get people to build a model that's actually smaller than itself but still smarter, through some unknown mechanism.
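The "bigger LLMs run slower" point follows from a standard back-of-envelope rule: a dense transformer's forward pass costs roughly 2 FLOPs per parameter per generated token, so per-token cost grows linearly with model size. A minimal sketch using that approximation (the model sizes are made-up round numbers, not any specific product):

```python
# Back-of-envelope: dense-transformer inference costs roughly
# 2 FLOPs per parameter per generated token, so a 10x-bigger
# model is ~10x more expensive per token on the same hardware.

def inference_flops_per_token(n_params: float) -> float:
    """Approximate forward-pass FLOPs per generated token (~2N rule of thumb)."""
    return 2.0 * n_params

# Hypothetical model sizes for illustration.
models = {"7B": 7e9, "70B": 70e9, "700B": 700e9}

for name, n in models.items():
    print(f"{name}: {inference_flops_per_token(n):.1e} FLOPs/token")
```

So under this approximation, "make a bigger LLM" buys capability only by paying proportionally more compute on every single token, which is why foom-by-scaling runs into the wall described above.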
 