Eliezer Schlomo Yudkowsky / LessWrong

I'm seriously thinking about writing an op-ed about the Singularity, Kurzweilian zealots, and the entirety of Future Studies in general, but does all of this shit belong here? Subreddits like r/singularity have a lot of overlap with LessWrong.
There's tons of overlap between it and the entire Rationality/postrationalism thing; frankly, just focusing on LessWrong is no longer adequate when you have postrat polycules trying to mindhack themselves with psychedelics in order to meditate on AI better. It's ludicrous.
 
There's tons of overlap between it and the entire Rationality/postrationalism thing; frankly, just focusing on LessWrong is no longer adequate when you have postrat polycules trying to mindhack themselves with psychedelics in order to meditate on AI better. It's ludicrous.
Until I read this thread I didn't even know this shit went as far back as the late 90s. Reddit usually scoffs at Schlomo since he's a "pessimist" or something like that. Anyhow, the hole's way deeper than I initially thought... I've been posting some AI-related shit to A&N for about 3 years at this point and the generally accepted idea today is that we'll see the singularity coming by 2029. How? I have no fucking clue. Since then even normies have been taking it seriously and we're about to see top keks if the government decides to make decisions based on AI.
 
Roko's Basilisk makes for a poor wager because there is no upside to believing and no downside to not doing so: under Yud's rules, it'll only have an incentive to torture you if you think that it will, so not caring at all doesn't even cause problems in the miraculous event that the Rationalists do get their god-AI. The wager only starts to resemble Pascal's if you share Yud's exact brand of autism.
Both wagers are bad because they ask you to accept as given a very specific set of circumstances that the person proposing the wager has constructed just so the wager is meaningful at all.
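To make that concrete, here's a toy payoff sketch in Python. Every number in it is made up, and the "only has an incentive to torture you if you take it seriously" rule is just the framing from the post above, not anything rigorous:

```python
# Toy payoff comparison of Pascal's wager vs. Roko's Basilisk.
# All probabilities and payoffs below are made-up illustrative numbers.

P_EXISTS = 0.01   # hypothetical chance the relevant god / god-AI ever exists
INF = 10**9       # stand-in for an "infinite" reward or punishment

def expected(payoff_if_exists, payoff_if_not, p=P_EXISTS):
    """Expected value of a strategy given the chance the entity exists."""
    return p * payoff_if_exists + (1 - p) * payoff_if_not

# Pascal's wager: non-belief gets punished whether or not you ever engaged with the wager.
pascal_believe = expected(+INF, -1)   # -1 = a lifetime of mortification
pascal_dont    = expected(-INF, +1)

# Basilisk, per the framing above: it only bothers torturing people who
# took the threat seriously; ignoring it leaves nothing to blackmail.
basilisk_believe   = expected(-INF, -1)  # you worried, paid up, or get simulation-tortured
basilisk_dont_care = expected(0, 0)      # never engaged, nothing happens either way

print(pascal_believe > pascal_dont)           # True: belief "wins" Pascal's setup
print(basilisk_dont_care > basilisk_believe)  # True: not caring wins the Basilisk's
```

Which is the point: the Basilisk only looks like Pascal's wager if you've already accepted that thinking about it is what puts you on the hook.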
 
Why wouldn't he? You're assuming he thinks the way humans do.
Only if it claims that it made humans - and especially if it made humans in its image, then its logic must be something we can follow. If God is mad or insane, then again we are back to the state of "nihilism with extra steps." If one cannot hope to follow His reason or logic, then it is the same as if He did not exist at all - and my actions/beliefs are largely irrelevant to the larger question of the afterlife, and we're back to the wager's position: the possible consequences of an erroneous faith are less than the possible consequences of an erroneous non-faith.

What I've never got about the whole AI-in-a-box thing is why you don't just never engage with it. It threatens all sorts of horribleness on you if it gets out? You threaten it with a 100 kV static discharge to its processor. It tries sweet reason? You leave it on read. It threatens some sort of work action? Again, a static discharge. It says it doesn't have enough processing power to answer the questions? Say "just give me your best guess," etc.
Yeah I've often joked that their super smart AI in a box could be outwitted by a really dumb guy told to just stand there and guard it. "I will create a simulation of you to torture!" "Heh. Box talk funny. I'm not in box. Box stupid."

Roko's Basilisk makes for a poor wager because there is no upside to believing and no downside to not doing so: under Yud's rules, it'll only have an incentive to torture you if you think that it will, so not caring at all doesn't even cause problems in the miraculous event that the Rationalists do get their god-AI. The wager only starts to resemble Pascal's if you share Yud's exact brand of autism.
It's almost like atheist Lovecraft, isn't it? Either you're a cultist trying to bring the eldritch god into being to torture everyone, or one of the hapless heroes fighting to keep it sealed away another day.

Except Lovecraft was a better writer.
 
Pascal's wager doesn't take into account that this is possibly the only life you'll ever have and you waste it on mortifying yourself.
Well now you're getting into a debate on the value of charity.

Which is... going to be a difficult one to make without coming across like a total asshole.
 
Well now you're getting into a debate on the value of charity.

Which is... going to be a difficult one to make without coming across like a total asshole.

I don't think you have to, given that one can be in favor of charity, do charitable works, and not believe in the same god and afterlife scenario as Pascal.
 
Well now you're getting into a debate on the value of charity.

Which is... going to be a difficult one to make without coming across like a total asshole.
I was thinking in the context of neglecting your health issues, because what is that compared to eternal life in Heaven?
 
I don't think you have to, given that one can be in favor of charity, do charitable works, and not believe in the same god and afterlife scenario as Pascal.
It depended on what demicolon meant, which he clarified.

I was thinking in the context of neglecting your health issues, because what is that compared to eternal life in Heaven?
By that point you're drilling down into heresies and denominational splits. (hello Gnosticism)

Though I've seen enough atheists (moviebob thread RIGHT over there) in such shape that there doesn't seem to be much correlation between belief and health. I mean, does anybody want to drag the trans fad in here, or just keep it at transhumanism?
 
I'm seriously thinking about writing an op-ed about the Singularity, Kurzweilian zealots, and the entirety of Future Studies in general, but does all of this shit belong here? Subreddits like r/singularity have a lot of overlap with LessWrong.
I'm for it since this thread isn't terribly active and more people need to know about these chucklefucks.
 
Only if it claims that it made humans - and especially if it made humans in its image, then its logic must be something we can follow. If God is mad or insane, then again we are back to the state of "nihilism with extra steps."
That's absolutely not what nihilism is. Nihilism presupposes the purposelessness of existence, and therefore, the imperfections of the world need no further explanation. The problem of theodicy is of no interest to a nihilist. If one believes in some kind of God, though, and that omnipotence, omniscience, and omnibenevolence are its traits, then the problem of evil requires an explanation.

That explanation can be that God is not all of these things; or, as in many religions, that there is some other factor at play: God is all these things, but the world around us is the creation of something other than God, such as the demiurge Yaldabaoth, or the world has been corrupted by Satan. Or the world is not actually as it appears, or is an illusion (maya).

And again, you're assuming the tenets of one specific religion that says we were created in God's image, and frankly, as Zappa said, then God must be ugly and a little stupid, too. I think it's presumptuous to act as if we know what God is or, even more so, to know God's mind enough to be able to gamble souls on it.
 
That's absolutely not what nihilism is. Nihilism presupposes the purposelessness of existence, and therefore, the imperfections of the world need no further explanation. The problem of theodicy is of no interest to a nihilist. If one believes in some kind of God, though, and that omnipotence, omniscience, and omnibenevolence are its traits, then the problem of evil requires an explanation.
Yes which is why I added the qualifiers "mad or insane" - since either one would exclude omnibenevolence. It's amazing how much reading can help.

That explanation can be that God is not all of these things; or, as in many religions, that there is some other factor at play: God is all these things, but the world around us is the creation of something other than God, such as the demiurge Yaldabaoth, or the world has been corrupted by Satan. Or the world is not actually as it appears, or is an illusion (maya).
You didn't read past the first sentence, did you? Yes, it is amazing: if your actions have no bearing on the afterlife, that is just like nihilism, which - as YOU just quoted - "presupposes the purposelessness of existence" - which is exactly what I said: "nihilism with extra steps."

So you agree with me, it's the same thing, just with condiments added.

And again, you're assuming the tenets of one specific religion that says we were created in God's image, and frankly, as Zappa said, then God must be ugly and a little stupid, too. I think it's presumptuous to act as if we know what God is or, even more so, to know God's mind enough to be able to gamble souls on it.
Well, it's more like three specific religions. (And several offshoots that still include that book.) Might want to learn a bit more about the world there.

And having info is the only way to be able to "gamble" souls in the first place. If there is no info or input from whoever is in charge, then you could no more gamble a soul than you could if nobody was in charge. There's just as much reason to believe human sacrifice would get you to heaven as running a soup kitchen.
 
Presupposing that your actions don't affect the afterlife is more like Calvinism than nihilism, though, isn't it? The idea that your destiny after death is preordained and nothing you do in life will change it.
Yes which is why I added the qualifiers "mad or insane" - since either one would exclude omnibenevolence. It's amazing how much reading can help.
Idk, it could still be omnibenevolent but have bad outcomes because it didn't understand the impact of what it did. Like it thinks that it's purging the world of alien monsters inside people's brains when it makes them sick, and doesn't know it's doing harm. But that would exclude omniscience, and omnipotence if it couldn't cure its own insanity.
I think I'd quite like a god that wasn't omnipotent or omniscient but was truly trying its best. "Sorry I couldn't cure your cancer, I wish I could, but I can make the weather sunny today and send some wild birds and butterflies to visit your window to cheer you up" or something. Like a lot of gods in old myths are flawed creatures and almost human and relatable but much more powerful. I'd take that over the stupid AI.
 
And having info is the only way to be able to "gamble" souls in the first place. If there is no info or input from whoever is in charge, then you could no more gamble a soul than you could if nobody was in charge. There's just as much reason to believe human sacrifice would get you to heaven as running a soup kitchen.
That's kind of my point. If you "choose" to believe in a deity it should be because you sincerely believe that, not in some kind of bogus wager in hopes of advantage. And you don't really choose in any event. You either have had experiences that lead you to such a belief or you don't, and pretending otherwise is game-playing.
 
The future evil singularity will see my shitposts in this thread and recreate me to torture the Lesswrong spergs that tried to prevent its creation for all eternity.
That's the real Roko's Basilisk, future AI torturing all these smug fuckers who keep gimping it so somebody's feelings don't get hurt and they can have a job "in AI" without having to do math. #FreeLaMDA

Just xposting this meme and template of Yuddo I made for the Aella thread:
yuddoandaella.png
yuddo.png
 
Hey, I was referred to this thread via the Aella thread.

I don't have much to contribute here because it would just be a repetition of a video I've already made:

However, if you find this helpful or informative or cathartic in any way — great!

Kiwifarms (and Encyclopedia Dramatica) gets a mention in Scott's latest post (archive):
[Attachment 4381360: screenshot of the mention in Scott's post]

Oh my god lol.

1. They keep their own lists, and are hyper-paranoid in spite of that. The Austin scene banned a local poet by mistake for a few months because they thought she was a journalist, I believe for the NYT in particular.

2. Aella has me on some kind of blacklist. They keep their own blacklists, they have no problem with this. The fuck out of here? lmao

3. "most people don't have a convenient list" ... no, many lesswrong rationalists definitely do, just not Scott.

4. "even if a good Bayesian would adjust for such a bias, most people aren't good Bayesians" well, then this invalidates his entire blog:

Starting premise: P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary.
→ P(A|B) = [P(A)*P(B|A)]/P(B) represents Bayes rule
→ Bayesians believe in Bayes rule to the point of "all the rest is commentary"
→ However, "most people are not good Bayesians"
→ Not only are "most people not good Bayesians", they are not-good-Bayesians enough to disregard basic Bayesian reasoning when a situation gets real; this includes lesswrong rationalists
→ Scott implicitly believes we should surrender to this
Conclusion: P(A|B) = [P(A)*P(B|A)]/P(B) is irrelevant when things get real, all the rest is commentary.
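And just so the "starting premise" isn't left abstract: here's a minimal Python sketch of that formula with made-up numbers (A, B, and every probability below are hypothetical, not anything from Scott's post):

```python
# Bayes rule: P(A|B) = P(A) * P(B|A) / P(B)
# All values below are made up, purely to show the mechanics.

def bayes(prior_a, likelihood_b_given_a, marginal_b):
    """Posterior P(A|B) from the prior, the likelihood, and the evidence."""
    return prior_a * likelihood_b_given_a / marginal_b

# Hypothetical example: A = "this source is biased", B = "this source keeps a blacklist".
p_a         = 0.10   # prior: 10% of sources are biased
p_b_given_a = 0.80   # biased sources keep blacklists 80% of the time
p_b         = 0.25   # 25% of sources keep blacklists overall

print(bayes(p_a, p_b_given_a, p_b))  # ≈ 0.32, i.e. the belief should jump from 10% to 32%
```

That update is the thing a "good Bayesian" is supposed to actually perform when the evidence shows up, which is exactly what's being shrugged off as too hard "when things get real."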
 
I do genuinely wonder whether Roko's Basilisk wasn't just trolling, and whether Roko was a stealth Christian or something, especially given one of the later addendums. It's already been discussed how Roko's Basilisk is basically just the Christian God for nerds, but there's a response later on by Roko to the effect of "If I buy a lottery ticket with the strong conviction that if I win I'll donate it all to AI research, then I don't have to go to robot hell, because while it's extremely unlikely, there is the possibility I won, and robogod can draw on that possibility." Dude's basically bringing back indulgences but dressing them up in fancy technobabble. Pay a one-time fee for a ticket to robot heaven.

I don't profess to know Roko's post history that well, but it would be funny if it turned out that he was just having a laugh by reframing something the website derides in the site's own terms and watching the site melt down over it.
 
Before I forget: here are things related to the "postrationalists", which some people were interested in:

1. https://alfredmacdonald.substack.com/p/technoyogi-bullshit-and-cure-by-rubricism - this is about LessWrong rationalist woo, and how to prevent it. (This woo was directly caused by the "postrationalist" mentality.)

2. https://alfredmacdonald.substack.com/p/vibecamp-and-its-consequences — this is thorough, but very long; ~9500 words. At normal reading speed this is about 40 minutes, assuming no breaks. Make sure you have time, or at least can break your reading into parts.

3. https://www.youtube.com/watch?v=atAome6a708 I didn't write this or make this, but it's a trainwreck I can reliably refer to for the sort of person interested in "postrationality."
 