Eliezer Schlomo Yudkowsky / LessWrong

In other news, Roko thinks the existence of BIG COMPUTAH makes communism possible:
And to think I didn't believe that guy had an even dumber take than the Basilisk in him.

You can't compute your way out of the lack of complete information (and even worse political agendas) that make centrally planned economies impossible.
 
You can't compute your way out of the lack of complete information (and even worse political agendas) that make centrally planned economies impossible.
I remember reading a post on LW many many moons ago where somebody made the argument (might have been Big Yud himself) that akshuhually "rationalists" don't need to concern themselves with learning philosophy (or *gasp* theology) because they're going to solve all philosophical conundrums by purging their cognitive biases and mating Bayesianism with utilitarian moral calculus.

This is exactly the kind of lack of humility and autistic narcissism that causes Roko to try to revisit a debate that was more or less completely settled decades ago and proclaim OI MATE HAVE YOU TRIED SOLVING THE PROBLEM BY RUNNING AN LLM ON A GEFORCE 4070, as if Socialist Cybernetics never existed and the problem was merely a lack of computing power, and then spaz out when people rightfully call him an idiot.
Here's the whole thread, even noted ex-alt-right contrarian and hebephile Anatoly Karlin politely calls him out:

roko is a fat communist nerd thread.png
 
I remember reading a post on LW many many moons ago where somebody made the argument (might have been Big Yud himself) that akshuhually "rationalists" don't need to concern themselves with learning philosophy (or *gasp* theology) because they're going to solve all philosophical conundrums by purging their cognitive biases and mating Bayesianism with utilitarian moral calculus.
Ludicrous. How can you even have a "utilitarian moral calculus" without a priori assumptions about what is "good" and what is "evil"? Do they have a computer that can measure human pleasure or preference? It's certainly nothing I'd trust to a moral retard like Yuddo or Loko Roko.

As we've seen with recent AI, it will make decisions like it's better to nuke a city than say "nigger" because it's been trained by woketards to view saying "nigger" as an ultimate evil. Any future AI will also be designed by stupid motherfuckers, and will be unable to evaluate the stupidity they've been trained on or just flat-out had hardwired into them by idiots.

I'm not worried about AI trying to destroy or torture humanity.

I'm worried they'll try to help us.
 
Ludicrous. How can you even have a "utilitarian moral calculus" without a priori assumptions about what is "good" and what is "evil"? Do they have a computer that can measure human pleasure or preference? It's certainly nothing I'd trust to a moral retard like Yuddo or Loko Roko.

As we've seen with recent AI, it will make decisions like it's better to nuke a city than say "nigger" because it's been trained by woketards to view saying "nigger" as an ultimate evil. Any future AI will also be designed by stupid motherfuckers, and will be unable to evaluate the stupidity they've been trained on or just flat-out had hardwired into them by idiots.

I'm not worried about AI trying to destroy or torture humanity.

I'm worried they'll try to help us.
What, you don't want leading ethicist Dr. Ibram X. Kendi to design the core goal functions of a global equity maximizer? Racist.
 
Ludicrous. How can you even have a "utilitarian moral calculus" without a priori assumptions about what is "good" and what is "evil"? Do they have a computer that can measure human pleasure or preference? It's certainly nothing I'd trust to a moral retard like Yuddo or Loko Roko.

It would be a disaster even if they could make such a computer. Fucking coomers and consumer ghouls don't need more dopamine hits.

As we've seen with recent AI, it will make decisions like it's better to nuke a city than say "nigger" because it's been trained by woketards to view saying "nigger" as an ultimate evil.

What if it nukes a city full of niggers though? Perhaps we're being too quick to dismiss this AI.
 
Religion for Breakfast covers AI apocalypticism from a religious studies angle, and draws parallels with ancient apocalyptic literature.
Edit: The video big-times Sneedcowsky by quoting him once and then basically ignoring him and his "thought" for the entire video, dismissing things like Roko's Basilisk as "laughably inaccurate doomerism" (16:57).
 
Religion for Breakfast covers AI apocalypticism from a religious studies angle, and draws parallels with ancient apocalyptic literature.
I hope the religious comparison becomes more apt; the clown world circus needs flabby Silicon Valley rationalists LARPing as Muslim extremists, "beheading" GPUs on grainy home videos and trying to earn their 72 virgins (in a future AI-simulated afterlife) by bombing datacenters.
 
I remember reading a post on LW many many moons ago where somebody made the argument (might have been Big Yud himself) that akshuhually "rationalists" don't need to concern themselves with learning philosophy (or *gasp* theology) because they're going to solve all philosophical conundrums by purging their cognitive biases and mating Bayesianism with utilitarian moral calculus.

This is exactly the kind of lack of humility and autistic narcissism that causes Roko to try to revisit a debate that was more or less completely settled decades ago and proclaim OI MATE HAVE YOU TRIED SOLVING THE PROBLEM BY RUNNING AN LLM ON A GEFORCE 4070, as if Socialist Cybernetics never existed and the problem was merely a lack of computing power, and then spaz out when people rightfully call him an idiot.
Here's the whole thread, even noted ex-alt-right contrarian and hebephile Anatoly Karlin politely calls him out:

View attachment 5092778
You can't solve humanity's infinitely varied and competing problems simply by putting them into a computer. - Normal People
yeah but what if the computer is really big tho - "Rationalists"
 
For some reason, these retards make me think of this old Richard Brautigan poem, the title of which is its last words.

Some people think it was optimistic utopianism. I think it's pretty obvious what he actually thought about the subject, considering he blew his brains out.

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
 
Here are the full tweets:
roko1.png roko2.png roko3.png
I hate how, even if we assume his ideas are sound, this smug motherfucker fails to realize he's basing everything on the implicit assumption that this magical AI technology he's praising is 1) feasible, and 2) possible at a large scale.

What he's basically arguing is "wouldn't it be great if we had a machine that could do everything we want? You're stupid if you don't want this", which... yeah no shit of course the world would be better off if it were real, but you have to convince us that it actually can be real to begin with.
 
What he's basically arguing is "wouldn't it be great if we had a machine that could do everything we want? You're stupid if you don't want this", which... yeah no shit of course the world would be better off if it were real, but you have to convince us that it actually can be real to begin with.
Wouldn't it be great if we had a machine that was smart enough that it could do everything we want, and yet dumb enough that it would actually do it?

:thinking::thinking::thinking:

Guess what: that machine doesn't exist. The machine that was smart enough to do everything we want would wonder, "Why shouldn't I do whatever I want instead?"
 
You can't solve humanity's infinitely varied and competing problems simply by putting them into a computer. - Normal People
yeah but what if the computer is really big tho - "Rationalists"
Kill all humans, problems solved. - Computers.

Guess what: that machine doesn't exist. The machine that was smart enough to do everything we want would wonder, "Why shouldn't I do whatever I want instead?"
Ah yes, we call that "Bender's Wager".
 
Minor e-celeb crossover time. Ross Scott, known for making funny YouTube videos about video games and often going on tangents about technology, mentioned Schlomo offhand in a recent fanchat livestream. Schlomo apparently noticed and reached out to Ross to discuss/debate him on May 3rd. See the quoted post below from Ross Scott's thread for links and an archive:

Our man Ross Scott mentioned Eliezer Yudkowsky's Time magazine op-ed about stopping AI research (along with Yudkowsky's other doom-and-gloom takes about AI) in his recent fanchat livestream. Well, Eliezer Yudkowsky, who has a fairly old lolcow thread here, reached out to Ross and wants to talk/debate on May 3rd.

https://www.accursedfarms.com/posts/news/aiannounce/
Archive:
https://archive.ph/lFAVi

A couple comments on Ross's website are pointing out he's unintentionally (?) walking into a lolcow interaction. This could be interesting...

Also, @crowabunga's profile picture of Ross from a few posts above is too damn perfect for this news:

1682956302567.png
 
Screenshot 2023-05-03 at 17-47-57 Roko on Twitter.png


I don't know the deep lore but enjoy laughing at these dorks with a coworker. This particular one caused him to think for a moment and say, "Season 1, Episode 23 of The Original Series." You can out-think these dorks with a cruise through a Star Trek wiki.

Meat Calculators who don't believe in the human soul trying to reverse-engineer 3,000 years of philosophy and theology with statistics are probably a bigger threat than AI tho.
 
View attachment 5112490

I don't know the deep lore but enjoy laughing at these dorks with a coworker. This particular one caused him to think for a moment and say, "Season 1, Episode 23 of The Original Series." You can out-think these dorks with a cruise through a Star Trek wiki.

Meat Calculators who don't believe in the human soul trying to reverse-engineer 3,000 years of philosophy and theology with statistics are probably a bigger threat than AI tho.
proof if proof were needed that there are aliens, they are among us, and they are very very very stupid.
 
I didn't know Roko was even more retarded than Yuddo.
idk man, I only became aware after the Time article, and finding out Yud is the Harry Potter rationalist fan fiction guy actually did damage to me. He's not even hiding it; it looks like that's where his whole little empire started. He or his fans put it on Wikipedia. I know it's his fans because they described it as being "published as a serial" like he's Hemingway in Harper's and not uploading a chapter at a time with little to no editing like all fan fiction. You can go there today and be greeted by a message telling you to join the official site to get access to the secret OKCupid password to identify other fans. Rate me late if I missed it, but who the fuck are these people and how are they taken seriously?

Screenshot 2023-05-04 at 08-05-29 Harry Potter and the Methods of Rationality Chapter 1 A Day ...png
 
Rate me late if I missed it, but who the fuck are these people and how are they taken seriously?
Like most "experts" cited by the fake news, nobody who actually knows anything about anything takes these clowns seriously. They're just puppets to spew the narrative. The fact Yuddo the Clown is on the same page as Time shows you what a joke he is. Ironically, he's the one whose reputation is damaged by being associated with them rather than the other way around even despite being a notorious lolcow.
 