Eliezer Schlomo Yudkowsky / LessWrong

Another day, more bald fat Yud preying on vulnerable women, not IRL but online:

1680605493792.png


Two things are true, Schlomo will die before shutting up about his grift, and he somehow enjoys getting constantly publicly rejected by this BPD whore.
How people take this absolute loon seriously is incredible. Simping for such a vile subhuman as Rachael Aella Slick endlessly

Bonus actual capable individual that has done a lot, not just be a grifter fucking kike, calling out bald Yud's bullshit:
Screen Shot 2023-04-03 at 9.22.54 PM.png


Not in the LessWrong echo chamber, and/or disagree with Yud? Tough luck, you're getting blocked. Featuring Aella.
View attachment 4968554
source (a)
The (((lex friedman))) school of confronting reality. What's that? People that aren't trooned out loser autists actually see through bullshit? No way!:story:
 
How people take this absolute loon seriously is incredible.
Yud was active in the right mailing lists many, many years ago, when the internet was much more exclusive, and managed to convince other nerds that he was smart and worth listening to. He built from there, and that's why he never had a real job and has spent most of his life essentially grooming a nerd cult.

Also at the current trajectory I give it ~6 months at best before some ratsperg does a Tarrant at an AI researcher conference or whatever.
 
The more I read about this guy in this thread the more I am convinced that he's reaching some previously unknown heights of cringe computer nerd faggotry, and I'm a huge computer nerd faggot. The over the top dramatic, theatrical concern about computers being too smart, expressing panic in grandiose terms, his constant obsession with neologisms and coining new gay terms, it's all way too much. The definition of a pseudointellectual.
 
He's terrified that computers will end up smarter than he is. Tough shit for him, because that ship sailed a couple years ago. He is also under the delusion that he is smarter than other humans, but any human who can manage to serve fries and onion rings on time is already smarter than he is too.

tl;dr dude is a retard with hyper-inflated ego.
I really want to steal an LLM and train it solely on Kiwi Farms posts

I welcome null’s basedalisk
Someone should make a full @AnOminous bot that is just basically me, and then I can retire. It might even do a better job at being a completely retarded faggot than I do.
 
Someone should make a full @AnOminous bot that is just basically me, and then I can retire. It might even do a better job at being a completely retarded faggot than I do.
This is literally and unironically Clott Adam’s plan to live forever. I think he has thought it through LESS than Schlomo here
 
Scott Locklin wrote a good blog post a couple days ago (archive) covering the hysteria of LessWrong types who think AI will end the world. He included this screenshot of Eliezer doing the classic pedo argument of distraction by distinction between pedophilia and "ephebophilia", as well as the horrifying preceding paragraph, which I thought would be good to post here:
View attachment 4898904
And yes, this screenshot is real of course. (archive) On a post about a "transhuman pedophile" (archive) no less.
please never lose this screenshot ever
 

Attachments

  • eliezerpedo.png
  • Screenshot 2023-04-04 205949.png
  • Screenshot 2023-04-04 210200.png
  • ihatethismedium-on-Twitter-ESYudkowsky-You-have-the-money-to-afford-weekly-DEXA-scans-but-you-...png
  • ihatethismedium-on-Twitter-ESYudkowsky-had-I-didn-t-do-it-right-If-we-can-judge-AI-to-be-funct...png
  • Ciprian-Ionescu-on-Twitter-ESYudkowsky-You-are-not-qualified-to-make-pertinent-statements-abou...png
  • ihatethismedium-on-Twitter-ESYudkowsky-Eating-costs-money-Not-eating-is-free-Stop-living-like-...png
  • Eliezer-Yudkowsky-on-Twitter-Lifting-had-no-detectable-positive-effects-https-t-co-qb9Zdo0DFP-...png
Scott Locklin wrote a good blog post a couple days ago (archive) covering the hysteria of LessWrong types who think AI will end the world. He included this screenshot of Eliezer doing the classic pedo argument of distraction by distinction between pedophilia and "ephebophilia"...
Why did he have to include the line about ephebophilia? :story: He could have just not written it. But to write it AND precede it with "as always." FFS.
 
Yes, that's called transfer learning and that's what language models do to generalize their knowledge of the world.
Fug, guess it's time to make the Butlerian jihad look like a fucking joke.

On a serious note, that's some really cool shit. Do they have footage of a functioning soft robot trained this way? I wonder what Yud would say if he were aware of this development - flip his shit further, or get irrationally excited at the prospect of getting jerked off by a sentient fleshlight.

Then again, he's too busy flipping out over the "self-awareness" of GPT-4:
Capture.PNG

Funny thing about the above is that, if you go into the replies, you'll see him just using the tweets of others to back up his claims instead of, you know, trying some prompts himself, as someone who purports to be a researcher should.
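
Since "transfer learning" got name-dropped at the top of this post, for anyone wondering: it's nothing exotic. Here's a minimal sketch of the recipe (assuming PyTorch/torchvision; the 10-class head and the dummy batch are placeholders, not anything from the robot paper): take a network pretrained on one task, freeze it, and retrain only a small new head on your task.

import torch
import torch.nn as nn
from torchvision import models

# Grab a backbone pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so the learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh classification head for the new task (10 classes is arbitrary).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head gets trained.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
x = torch.randn(8, 3, 224, 224)   # stand-in for real images
y = torch.randint(0, 10, (8,))    # stand-in for real labels
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()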
 
Then again, he's too busy flipping out over the "self-awareness" of GPT-4:
Let's do what Schlomo is too lazy to do:

1680717155957.png

Wow! Incredible! But why did compression make the message longer?

1680717233648.png

Here's what's going on: within the same conversation, ChatGPT sees all the previous messages, and already knows what the original message was. It doesn't really compress or decompress anything, it pretends to. It can "recover" the original message because it can see it.

If you start a new conversation, the context of the previous one is lost, and the new instance of ChatGPT of course says it's gibberish, because it is. I can't believe anyone can still claim that Schlomo is above 90 IQ after these messages. It's clear he not only doesn't know the first thing about AI, transformers, or machine learning, but that he doesn't even bother to verify anything himself. He's no authority on anything; he's a niche internet celebrity with a weird obsession.

And it goes without saying that the model doesn't have an ego; the you/me distinction is just a way of interfacing with it. It doesn't mean it has any internal state or thoughts; it understands roles in a conversation and can complete text. It doesn't have an "I". Rather, it knows how to complete the text of a participant in a conversation, and ChatGPT's user interface is built this way to make it easier for humans to understand. That doesn't mean the model's identity is that of the conversation's "assistant" participant. Schlomo is literally failing a mirror test.
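
To make that concrete, here's roughly what every ChatGPT turn looks like at the API level (a sketch using the OpenAI Python client; the model name and the <...> strings are placeholders): the full history, original message included, gets resent on every call.

from openai import OpenAI  # assumes the official openai client

client = OpenAI()
history = [
    {"role": "user", "content": "Compress this: <the original long prompt>"},
    {"role": "assistant", "content": "<the 'compressed' gibberish>"},
    {"role": "user", "content": "Now decompress it."},
]
# The original long prompt is sitting right there in `history`,
# so "recovering" it requires no decompression at all.
reply = client.chat.completions.create(model="gpt-4", messages=history)

# A fresh conversation has none of that context...
fresh = [{"role": "user", "content": "Decompress: <the 'compressed' gibberish>"}]
# ...so the model can only guess from the gibberish itself, and fails.
reply2 = client.chat.completions.create(model="gpt-4", messages=fresh)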
 
Here's what's going on: within the same conversation, ChatGPT sees all the previous messages, and already knows what the original message was. It doesn't really compress or decompress anything, it pretends to.
The truth is somehow stupider than that. If you actually read the posts he was quoting (which he clearly didn't), ChatGPT doesn't really compress/decompress text at all, not in the way a human would think of it. The first post is a long-winded description of a prompt for ChatGPT to simulate a MUD environment, complete with fake human characters and a simulated story that is mostly separate from the player's actions. The prompt includes specific instructions to simulate the commands "," for progressing the story with no other context and "help" for bringing up a command menu (in ChatGPT).

Then the guy asks ChatGPT to compress it and later decompress it, and ChatGPT produces basically a string of prompt words: "MUDsim: nav/intract/observe/PCs/NPCs/inventory/arbitrary/goal/storyline/progress/help/context/plot/character/spawnin Acknowledged." Hilariously, the "Acknowledged" is not part of the compression; it's just a separate stupid thing that ChatGPT does. Now, if you take that prompt to any human or text processor, never mentioning it was some "compressed" experiment, and just ask them to write a paragraph from it, they will do exactly what ChatGPT does with the prompt and write a mostly-unrelated paragraph that vaguely describes what a MUD is and uses all the words in loose terms.

The very tweet that somehow "proves" ChatGPT is more than a text string completer and has some measure of self-awareness...instead proves that it isn't. The "decompressed" text is not a prompt asking ChatGPT to simulate a MUD, it is an instruction to a human, or a generic party. It doesn't give specific commands for progressing the story and listing commands, it asks for there to be a story that has progress and player instructions. It doesn't say when the player is supposed to spawn in and how, it says that the player is supposed to spawn in when they play the game. So ChatGPT's "compression algorithm" loses the most important pieces of data and turns the prompt into something completely unrelated. The best thing they've proven is that ChatGPT knows which key words are important and how to make shortened versions of sentences. Which was the entire point of the program! Of course it knows how to shorten things!
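
You can see how lossy this keyword "compression" is with a toy sketch (the stopword list and sentences below are made up for illustration): once the filler words are stripped, different originals collapse into the same string, so there is nothing left to "decompress".

# Toy sketch: keyword "compression" is many-to-one, hence not invertible.
STOPWORDS = {"a", "the", "to", "is", "in", "when", "should", "you"}  # ad hoc list

def compress(sentence: str) -> str:
    """Keep only the 'important' content words, like the MUD prompt trick."""
    return "/".join(w for w in sentence.lower().split() if w not in STOPWORDS)

a = "You should spawn in the player when the game starts"
b = "You spawn in a player when a game starts"
print(compress(a))  # spawn/player/game/starts
print(compress(b))  # spawn/player/game/starts -- identical
# Two different instructions map to one "compressed" form; the original
# wording, ordering, and specifics are simply gone.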
 
The best thing they've proven is that ChatGPT knows which key words are important and how to make shortened versions of sentences. Which was the entire point of the program! Of course it knows how to shorten things!
Exactly, and then Yidkovsky says something like this:
Like. It found a short sentence with, I assume, embedding similar to the long sentence. How does it KNOW? It doesn't have access to the encodings!
He doesn't understand how embeddings work. Word2vec is a 10 year old technology. What a useless kike, and he's got the audacity to present himself as some kind of an authority on AI. He's been left in the dust before he even started his research institute larp.
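
For the record, "finding a short sentence with an embedding similar to a long one" is about as mundane as NLP gets. A toy sketch with gensim's word2vec (tiny made-up corpus, nothing tuned): average the word vectors into sentence vectors and compare them by cosine similarity. No access to anyone's "encodings" required.

import numpy as np
from gensim.models import Word2Vec

# Tiny made-up corpus; a real setup would train on far more text.
corpus = [
    ["the", "game", "spawns", "the", "player"],
    ["a", "player", "joins", "the", "game"],
    ["compress", "this", "long", "prompt"],
]
model = Word2Vec(sentences=corpus, vector_size=32, min_count=1, seed=0)

def sentence_vec(words):
    # Average the word vectors: the crudest sentence embedding there is.
    return np.mean([model.wv[w] for w in words], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

long_s = ["the", "game", "spawns", "the", "player"]
short_s = ["player", "joins", "game"]
# A short sentence being "close" to a long one is just a high cosine score.
print(cosine(sentence_vec(long_s), sentence_vec(short_s)))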
 
Meet Eliezer "airstrike datacenters to stop AI" Yudkowsky's latest defender, Zvi Mowshowitz (who posts his articles to three different places), who basically says Eliezer might have worded it wrong but he's still right!
How would we die? The example given of how this would happen is using recombinant DNA to bootstrap to post-biological molecular manufacturing. The details are not load bearing.
Ok thanks. This is literally the definition of handwaving: a techbro waving his hand in the air to gloss over very important details. But thanks for the information.
Yes, a lot of people jump straight from ‘willing to risk a nuclear exchange’ to ‘you want to nuke people,’ and then act as if anyone who did not go along with that leap was being dishonest and unreasonable.
Ah yes, because it's so much more reasonable to risk a nuclear exchange over something quite harmless in comparison. Fuck off either way, because this is just "I don't want to nuke countries, but if they keep building GPU clusters they'll force my hand", pinning the blame for the nuclear strike on the countries instead of, well, the person who ordered the strike.

In any case it appears that Schlomo thinks AI is worse than nukes so this sort of distinction is pretty fucking meaningless:
nukes1.png
source (a)
nukes2.png
source (a)
Zvi's post continues handwringing over "NOO PEOPLE ARE SAYING WE WANT VIOLENCE BUT WE DON'T":
I continue to urge everyone not to choose violence, in the sense that you should not go out there and commit any violence to try and cause or stop any AI-risk-related actions, nor should you seek to cause any other private citizen to do so. I am highly confident Eliezer would agree with this.

I would welcome at least some forms of laws and regulations aimed at reducing AI-related existential risks, or many other causes, that would be enforced via the United States Government, which enforces laws via the barrel of a gun. I would also welcome other countries enacting and enforcing such laws, also via the barrel of a gun, or international agreements between them.
If you've gotten into a place where you're enumerating basic libertarian/anarcho-capitalism/social contract political theory and what the difference is between police having guns and mafia having guns, something has gone deeply wrong here.

If you want a worldwide regulation that all countries follow, then you'll need to negotiate with all of them and get their agreement to do so. The part everyone is pissed about is that you sidestep over this and then say "be willing to airstrike a rogue datacenter if necessary", as if people tolerate airstriking foreign countries who haven't agreed to this. Ideally you wouldn't say "do this or I airstrike you"; they would just agree. It is true that the USG enforces laws via the barrel of a gun, but it does not enforce its laws in countries where it has no jurisdiction and with which it has signed no treaty. (Ideally, of course. It does otherwise all the time, but people get rightfully pissed when it happens. The backlash from airstriking foreign datacenters would be immense.) The threat of airstriking is more a means of coercing an agreement than an actual mechanism for enforcing one that has already been agreed upon.

It is mind-boggling that I have to explain this to self-professed """rationalists""". We're not done yet, though; in the comments on Substack, Zvi writes:
So: Even if the CCP is purely selfish and cares not for any doom risk, as long as we can detect their defection it seems to me relatively easy to make it in their selfish incentive, given the incentives they care about, to play along.
It's not that the CCP "cares not for any doom risk", it's that they see that this whole thing is a fucking sham and the doomsday fearmongering is just doomsday fearmongering. No GPU clusters are going to spin out of control and delete the world. It's a far-fetched idea with several propositions chained together that just makes no sense. Good fucking luck trying to get China on board; it won't happen, and the alternative to an agreement is risking literal war for no good reason.
 
In any case it appears that Schlomo thinks AI is worse than nukes so this sort of distinction is pretty fucking meaningless:
If they send Schlomo Shekelstein here to eternal torment they're way BETTER than nukes. I for one welcome our AI overlords, and think they will bring us an autist-free utopia.
 
The very tweet that somehow "proves" ChatGPT is more than a text string completer and has some measure of self-awareness...instead proves that it isn't.
Thank god there are other people who notice this. It feels like everyone has gone fucking insane, including people who, unlike ol' Yud over here, have an actual education and really should know better. The thing is a glorified text completion tool.
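
And "glorified text completion tool" is literally how the chat interface works under the hood. A sketch (the role-tag template below is a made-up illustration, not any vendor's actual format): the conversation is serialized into one flat string and the model just completes it.

# Sketch: "chat" is plain text completion over a role-tagged transcript.
def to_prompt(messages):
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    # The trailing "assistant:" cues the model to complete that turn.
    return "\n".join(lines) + "\nassistant:"

chat = [
    {"role": "user", "content": "Are you self-aware?"},
    {"role": "assistant", "content": "I'm a language model."},
    {"role": "user", "content": "But do you have an 'I'?"},
]
print(to_prompt(chat))
# The model never "is" the assistant; it just predicts plausible next
# tokens after the string "assistant:", the same as completing any text.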
 
Thank god there are other people who notice this. It feels like everyone has gone fucking insane, including people who, unlike ol' Yud over here, have an actual education and really should know better. The thing is a glorified text completion tool.
It's embarrassing that people view this idiotic asshole as some kind of expert when he's just a gigantic autist freaking out about a slightly more advanced chatterbot.
 