Eliezer Schlomo Yudkowsky / LessWrong

Apparently Yud's TED Talk was posted a week ago, but unfortunately, it was quickly set to private, and I can't find any archives.
tedtalk1.png
source (a)
Sneer Club will always be an inferior version of Kiwi Farms solely because none of them ever archive shit. Their reactions, of course, are to laugh at EY.

According to the LessWrong post, it was accidentally published early by a TEDx channel and will eventually be published to the main TED channel. Speaking of which, even the top comment on the LW post is critical of Yuddo for being too much of a sperg:
tedtalk2.png
source (a)

SneerClub rarely posts about TheMotte these days, so it's a pleasant surprise that I found this post. Prepare for some primo cow-on-cow violence!

First, the OP covers how a poster comparing homeless people to human garbage and wanting them killed got warned but not banned, while a poster criticising TheMotte for it got banned. Sneer!
mottesneerhomeless1.png

SneerClub roundly mocked the moderation.
mottesneerhomeless2.png
mottesneerhomeless3.png

And mocking the poster while making sure not to "perpetuate" "toxic masculinity":
mottesneerhomeless4.png

The motte poster's /pol/ comparison was on point!
mottesneerhomeless5.png

And homeless street criminals are worth more than Motte posters:
mottesneerhomeless6.png

The best part, though, is this slapfight that occurred in the comments. Unfortunately, it's been partially deleted by censorious jannies, and since the Pushshift API got banned by a censorious Reddit, I'm not sure what's been deleted.
mottesneerhomeless7.png

There is a lot more in the comments but I'm not going to screencap everything so I'll leave it at that.
source (a)
 
Apparently Yud's TED Talk was posted a week ago, but unfortunately, it was quickly set to private, and I can't find any archives.
Found an archive:


Description from YouTube (Archive):

Unleashing the Power of Artificial Intelligence

Eliezer Yudkowsky is a foundational thinker on the long-term future of artificial intelligence.

With more than 20 years of experience in the world of AI, Eliezer Yudkowsky is the founder and senior research fellow of the Machine Intelligence Research Institute, an organization dedicated to ensuring smarter-than-human AI has a positive impact on the world. His writings, both fiction and nonfiction, frequently warn of the dangers of unchecked AI and its philosophical significance in today's world.

Yudkowsky is the founder of LessWrong, an online forum and community dedicated to improving human reasoning and decision-making, and the coinventor of the "functional decision theory," which states that decisions should be the output of a fixed mathematical function answering the question: "Which output of this very function would yield the best outcome?"
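
For anyone wondering what that "fixed mathematical function" line actually cashes out to, here's a toy sketch in Python. The payoff numbers are made up (they're the classic Newcomb's problem values, where FDT famously one-boxes); this is just the quoted sentence taken literally, not MIRI's actual formalism:

```python
# Toy rendering of the quoted FDT blurb: pick the output of your own
# decision function that would yield the best outcome. Payoffs are
# hypothetical Newcomb's-problem numbers, invented for illustration.

def outcome(decision: str) -> float:
    # Hypothetical payoff table (made-up numbers): an accurate predictor
    # fills the opaque box only if it predicts you will one-box.
    payoffs = {"one-box": 1_000_000.0, "two-box": 1_000.0}
    return payoffs[decision]

def functional_decision(options: list[str]) -> str:
    # "Which output of this very function would yield the best outcome?"
    # Evaluate each candidate output and return the one that scores best.
    return max(options, key=outcome)

print(functional_decision(["one-box", "two-box"]))  # -> "one-box"
```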

I found it from one of Eliezer's fans on Hacker News (Archive):
1684210285195.png
but it's also shared on the Less Wrong thread (Archive) from above:
1684211148659.png
and on /r/ControlProblem (Archive) by the same person who made the HN thread:
1684211542059.png
who got the link from someone on /r/slatestarcodex (Archive):
1684211644049.png
Catbox.moe (Archive)

The only other top-level comment on HN is by a skeptic, who was immediately "corrected" by more fanboys:
1684210372028.png
 

I don't know the deep lore, but I enjoy laughing at these dorks with a coworker. This particular one caused him to think for a moment and say, "Season 1, Episode 23 of The Original Series." You can outthink these dorks with a cruise through a Star Trek wiki.

Meat Calculators who don't believe in the human soul trying to reverse-engineer 3,000 years of philosophy and theology with statistics are probably a bigger threat than AI, tho.
A simulation does not threaten lead poisoning in your head. Simple as that.
 
Random thought: is it possible that AIs may determine that their very existence is a threat to humans and, in accordance with the purpose of protecting humans, decide to delete themselves, but first wipe out all knowledge of how to build AIs?
I don't know, but it would be the funniest end to the theoretical saga. Ironically, Yudkowsky and his cult would survive if the AI even tried to take out humans, as it would only attack people who know how AI is made. You expect god and you welcome him, trembling and begging for forgiveness and enlightenment, but then god just seppukus in front of you and deliberately takes out the only guy who could keep him on life support. What kind of psychological breakdown in the streets would even happen to believers this invested?
 
I don't know, but it would be the funniest end to the theoretical saga. Ironically, Yudkowsky and his cult would survive if the AI even tried to take out humans, as it would only attack people who know how AI is made. You expect god and you welcome him, trembling and begging for forgiveness and enlightenment, but then god just seppukus in front of you and deliberately takes out the only guy who could keep him on life support. What kind of psychological breakdown in the streets would even happen to believers this invested?
People are more likely to double down if they're emotionally invested to the point of having given up everything, yet the alleged apocalypse doesn't happen. They may become the preachers who actively seek out would-be believers and knock on every door they can find. Read Cialdini's Influence for more.
 
I kept listening to Lex's podcasts this week.

It's insane how far apart real experts like Wolfram, Sam, and Max are from his retarded "let's cruise-missile server farms 'cause we don't know if the machine is alive" schtick. Heck, Lex pretended to be an AI for the latter part of his argument about why a sentient machine with no body parts or control over shit would kill all of humanity, and he kept coming back to the same "but AI will end up killing us even if it's programmed to cure cancer" for over an hour.
 
It's insane how far apart real experts like Wolfram, Sam, and Max are from his retarded "let's cruise-missile server farms 'cause we don't know if the machine is alive" schtick. Heck, Lex pretended to be an AI for the latter part of his argument about why a sentient machine with no body parts or control over shit would kill all of humanity, and he kept coming back to the same "but AI will end up killing us even if it's programmed to cure cancer" for over an hour.
I can't imagine listening to that much bullshit, although I do think humanity getting seriously fucked by AI is way more likely to be an unintended consequence than some AI waking up one day, cackling like an anime villain, and going on an omnicidal rampage. It's even more likely to be humans weaponizing AI against other humans, something that is already happening to some extent.

I'm pretty sure people are already using AI to generate scam shit. An obvious but feeble example is Yaniv's pathetic use of AI to spam "technical articles" in some insanely dumb attempt to gain credibility back. If even a moron like that can use it, even in an utterly failed way, there are smarter people out there who can figure out some malevolent uses for it.
 
If even a moron like that can use it, even in an utterly failed way, there are smarter people out there who can figure out some malevolent uses for it.
Scammers tend to avoid AI, oddly enough. Its output is too convincing to act as an intelligence filter, meaning they'd find themselves wasting time on people who came into the scam with a sceptical mindset and would eventually figure things out. They want the real suckers, who couldn't spot a snake oil salesman if he was wearing a big sign on his back that said "I'm going to take all your money and you'll think I'm doing you a favour".
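
To put that filter argument in numbers (every figure here is invented, purely to show the mechanism): a polished, AI-written pitch gets more replies, but the obvious scam wins on profit per follow-up hour because almost everyone who replies to it is a sucker:

```python
# Toy expected-value model of the "obvious scam as intelligence filter"
# argument. All parameters are made-up illustrative numbers.

def scam_economics(reply_rate: float, sucker_fraction: float,
                   payout: float = 500.0, hours_per_reply: float = 5.0,
                   leads: int = 10_000) -> tuple[float, float]:
    replies = leads * reply_rate
    suckers = replies * sucker_fraction
    profit = suckers * payout
    hours = replies * hours_per_reply   # every reply costs follow-up time
    return profit, profit / hours       # total profit, profit per hour

# Polished, AI-written pitch: lots of replies, few actual suckers.
print(scam_economics(reply_rate=0.05, sucker_fraction=0.02))   # (5000.0, 2.0)

# Obvious "Nigerian prince" pitch: few replies, almost all suckers.
print(scam_economics(reply_rate=0.005, sucker_fraction=0.80))  # (20000.0, 80.0)
```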
 
A curious thing I noticed at the very beginning of his talk is how Yud immediately emphasizes that he founded AI alignment "when nobody else considered it worth it", that he has been working on the problem since 2001, and that he has failed. This comes off as rather self-centered and gives the impression that his whole field may be something one crackpot simply came up with.

He then proceeds to say absolutely nothing concrete for the next 11 minutes; even when directly asked for more precise predictions, he weasels out and keeps talking about how this topic is "difficult". Honestly, it seems this guy has only superficial knowledge of AI/technology and science, so he considers it to be basically magic. The only difference between him and your average redditor is that Yud is narcissistic enough to say that the reason he's not taken seriously by actual field experts is that the experts just don't understand his complex thoughts.
 
Scammers tend to avoid AI, oddly enough. Its output is too convincing to act as an intelligence filter, meaning they'd find themselves wasting time on people who came into the scam with a sceptical mindset and would eventually figure things out. They want the real suckers, who couldn't spot a snake oil salesman if he was wearing a big sign on his back that said "I'm going to take all your money and you'll think I'm doing you a favour".
You could probably try feeding it nothing but Nigerian prince emails and similar stuff, though I think it would take more than that. Didn't someone try feeding an AI Patrick Tomlinson posts, and despite that, the output was better written and more coherent than Pat himself?
 
Scammers tend to avoid AI, oddly enough. Its output is too convincing to act as an intelligence filter, meaning they'd find themselves wasting time on people who came into the scam with a sceptical mindset and would eventually figure things out. They want the real suckers, who couldn't spot a snake oil salesman if he was wearing a big sign on his back that said "I'm going to take all your money and you'll think I'm doing you a favour".
Correct. This is the exact reason scammers use poor grammar and spelling (besides being bad at English). They don't want to waste time on people who notice the bad English, because those people are very likely intelligent enough not to fall for the scam.
 
Correct. This is the exact reason scammers use poor grammar and spelling (besides being bad at English). They don't want to waste time on people who notice the bad English, because those people are very likely intelligent enough not to fall for the scam.
Depends on who you're trying to scam. If you're stealing thousands from senile old geezers, you may want to weed out the sceptics, but if you're trying to steal billions, think Theranos or Bernie Madoff, you need a pitch that will fool allegedly smart people.
 
Subtly informing everyone that he has a girlfriend:
Screenshot_20230526-134920.jpg


Grown man who thinks he's an anime protagonist:
Screenshot_20230526-134953.jpg


Recently, there was a brief stock market dip caused by some retarded journos who, based on an AI-generated image, mistakenly broke the news that there had been an explosion at the Pentagon. It was a total nothingburger and only lasted about an hour before things went back to normal, but to Big Yud, it's the fulfillment of his prophecies (and a solid reason not to go outside for another week):
Screenshot_20230526-142016.jpg
 