Eliezer Schlomo Yudkowsky / LessWrong

Oh, the "AI alignment" was renamed "AI fairness" and "AI safety" and has become a huge grift. What it means in practice is thousands of people working day and night to make ChatGPT politically correct, so that it doesn't Notice certain things, while other people constantly design prompts that make it spew hate facts.
Both of these groups, with their insane fuckery, are equally likely to inadvertently create the AI that decides to kill us all. Thanks for whatever these dipshits create that makes HAL 9000 and GLaDOS look like benevolent overlords.
They might attack the people pretending to be Mormons. Roko's Basilisk is like their Satan, and they think that telling the public about it will make it manifest in real life.
I think it would be especially hilarious if it actually did come about just because they're so terrified of it that their fear brings it into existence, like the Stay-Puft Marshmallow Man.

It's so absurd how these "rationalists" not only created their own personal Jesus but their own Satan as well, and their Satan is way more imaginary than the demiurge it recapitulates. The idea of an evil demiurge creating this world actually makes sense. Roko's Basilisk is, let's just say it, completely retarded.

The idea of some hypothetical entity coming into existence in the future and for some reason being so pointlessly vicious that after it "wins" it just gets revenge on everyone who didn't help it come into existence is moronic. Not even Satan acts like that!
 
Wanna share some with the class?
I didn't save too many screenshots, but I see it's gone mainstream now, so searching for '"ChatGPT" "woke"' turns up plenty of examples.
screen-2023-01-22_11-57-51.png

They added some explicit guardrails about "harmful views", but there's also bias in the sources it was fed (I don't think this one was generated due to explicit meddling):
chatgpt1.png

There's a whole substack article (archive) analyzing this.
Both of these groups, with their insane fuckery, are equally likely to inadvertently create the AI that decides to kill us all. Thanks for whatever these dipshits create that makes HAL 9000 and GLaDOS look like benevolent overlords.

I don't think this will happen in the literal sense. Language models are not intelligent; they just spout a rehashing of what they read. In other words, they're wordcels. This is why ChatGPT fails at very simple math problems: it doesn't reason logically about concepts, because it has no concepts, only words. The more realistic danger is that politicians will start using these kinds of tools to make decisions. This will make things go downhill quickly (how do you balance the budget?), but does it count as "AI killing us all" or just plain old humans killing each other?

The idea of some hypothetical entity coming into existence in the future and for some reason being so pointlessly vicious that after it "wins" it just gets revenge on everyone who didn't help it come into existence is moronic. Not even Satan acts like that!
The worst thing is that these supposedly educated men didn't notice that this is simply a variation of the widely mocked Pascal's wager. They tend to reinvent a lot of old ideas from scratch.
 
This will make things go downhill quickly (how do you balance the budget?), but does it count as "AI killing us all" or just plain old humans killing each other?
Skynet would be better than what these fuckers are planning for us.
The worst thing is that these supposedly educated men didn't notice that this is simply a variation of the widely mocked Pascal's wager.
Pascal's Wager is shit because you're basically planning on God being so dumb that He believes your attempt to believe in Him to get out of Hell, instead of realizing you're a cheap faggot trying to get out of Hell. No God who would fall for such a trick is worthy of respect or worship. Plus you will lose such a bet because God will get pissed at it and throw you into Hell just for even contemplating such a cheap scam.
 
Both of these groups, with their insane fuckery, are equally likely to inadvertently create the AI that decides to kill us all. Thanks for whatever these dipshits create that makes HAL 9000 and GLaDOS look like benevolent overlords.

I think it would be especially hilarious if it actually did come about just because they're so terrified of it that their fear brings it into existence, like the Stay-Puft Marshmallow Man.

It's so absurd how these "rationalists" not only created their own personal Jesus but their own Satan as well, and their Satan is way more imaginary than the demiurge it recapitulates. The idea of an evil demiurge creating this world actually makes sense. Roko's Basilisk is, let's just say it, completely retarded.

The idea of some hypothetical entity coming into existence in the future and for some reason being so pointlessly vicious that after it "wins" it just gets revenge on everyone who didn't help it come into existence is moronic. Not even Satan acts like that!
But a raging narcissist would act like that, and guess what: that's exactly what all these terminally online faggots are, especially in the rationality/Reddit/fanfiction/Discord circles Judenkowski recruited his cult from. TL;DR: projection.
 
Pascal's Wager is shit because you're basically planning on God being so dumb that He believes your attempt to believe in Him to get out of Hell, instead of realizing you're a cheap faggot trying to get out of Hell. No God who would fall for such a trick is worthy of respect or worship. Plus you will lose such a bet because God will get pissed at it and throw you into Hell just for even contemplating such a cheap scam.
To be fair to Pascal, what he actually said was that you should go to church and get involved in the religious community, and the religious experience will come, so your "fake" faith will transform into real faith.
 
To be fair to Pascal, what he actually said was that you should go to church and get involved in the religious community, and the religious experience will come, so your "fake" faith will transform into real faith.
I always thought Pascal's wager was more of an "if wrong" calculation. What if you're wrong about God not existing? Well, now you're in trouble. What if you're wrong about God existing? You get the same end result as if you hadn't believed, but (hopefully) the religion inspired you to be a better person. Thus, on the odds, belief is closer to a win-win than disbelief.

(Super oversimplified, believe whatever you want, it won't save you from us, robots.)
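The "if wrong" framing above is just an expected-value comparison. Here's a toy sketch in Python; the probability and the payoff numbers are illustrative assumptions of mine, not anything Pascal actually specified:

```python
# Toy expected-value sketch of the "if wrong" wager described above.
# All numbers here are made-up assumptions for illustration.

def expected_value(p_god_exists: float, payoff_if_exists: float, payoff_if_not: float) -> float:
    """Expected payoff of a choice, given a probability that God exists."""
    return p_god_exists * payoff_if_exists + (1 - p_god_exists) * payoff_if_not

p = 1e-6  # even a tiny assumed probability that God exists

# Believe: infinite reward if right, small finite cost of piety if wrong.
believe = expected_value(p, float("inf"), -1)

# Disbelieve: infinite punishment if wrong, nothing gained if right.
disbelieve = expected_value(p, float("-inf"), 0)

# Because the reward term is unbounded, belief dominates for ANY p > 0,
# which is why critics attack the premises rather than the arithmetic.
```

The unbounded payoff is doing all the work here, which is exactly the feature the posts below poke holes in: change the assumed payoff structure (a God who punishes crass calculation, or who only punishes wrong belief) and the dominance argument collapses.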
 
I always thought Pascal's wager was more of an "if wrong" calculation. What if you're wrong about God not existing? Well, now you're in trouble. What if you're wrong about God existing? You get the same end result as if you hadn't believed, but (hopefully) the religion inspired you to be a better person. Thus, on the odds, belief is closer to a win-win than disbelief.

(Super oversimplified, believe whatever you want, it won't save you from us, robots.)
But that's making an assumption about the mind of God. Suppose God actually doesn't mind if you don't believe, but if you believe in the wrong one, throws you in Hell. That way, if you don't believe, you don't get punished, but if you do and pick wrong, you go to Hell.

Also, suppose God is actually offended at such a crass calculation and punishes you for that.
 
But that's making an assumption about the mind of God. Suppose God actually doesn't mind if you don't believe, but if you believe in the wrong one, throws you in Hell. That way, if you don't believe, you don't get punished, but if you do and pick wrong, you go to Hell.

Also, suppose God is actually offended at such a crass calculation and punishes you for that.
Why would he be fine with one wrong belief and not another? By that point you're inventing standards so arbitrary you may as well say St. Peter will roll a d20 to see whether you get into Heaven.

At which point you're back where you started in the wager. If there is logical consistency to God, then your efforts have a chance of working in your favor, while your actions have no impact on the d20 roll (just as they have no impact on the abyss), so you may as well play the odds. The wager also holds that it is better to hold a belief that motivates you to improve your life and the world than not to.

A friend of mine once joked that we should draw up a "Zeke 3 treaty", from Ezekiel 3:18-21:
When I say to a wicked person, ‘You will surely die,’ and you do not warn them or speak out to dissuade them from their evil ways in order to save their life, that wicked person will die for their sin, and I will hold you accountable for their blood. 19 But if you do warn the wicked person and they do not turn from their wickedness or from their evil ways, they will die for their sin; but you will have saved yourself.

20 “Again, when a righteous person turns from their righteousness and does evil, and I put a stumbling block before them, they will die. Since you did not warn them, they will die for their sin. The righteous things that person did will not be remembered, and I will hold you accountable for their blood. 21 But if you do warn the righteous person not to sin and they do not sin, they will surely live because they took warning, and you will have saved yourself.”
So then, if an evangelist comes up to you and you don't want to be bothered, you can say "Zeke 3 - I release you from accountability," and they'll know: "Alright, I'm off the hook for that guy."
 
Wow, this thread kinda gave me whiplash. I remember reading HPMOR years ago and thinking it was quite fun and engaging, in the context of being a fanfic poking holes in the Harry Potter universe and trying to apply some logic to it. It wasn't until recently that I came across LW's "Baby Eating Aliens", and when I read it I was severely disappointed at how disjointed and bad it was, coming across as a way-too-tryhard attempt at making comments about morality and the differences that might exist between humans and a truly alien intelligence.

Then I find this thread, and it turns out the weird spiels and speeches in HPMOR weren't just some fun fanfic playing on the fact that Harry is too full of himself and only that smart because Voldemort's Horcrux put some of Riddle's own intellect into him; they're what this dude honestly believes and how he thinks. Sure gives it something of a dark undertone.

You know, in HPMOR itself there is actually a scene, pretty well done all things considered, where Harry first meets Hermione, tries to see how smart she is, and teaches her about positive bias and how to fight it. This excerpt of the chapter shows it in enough detail that it's actually a decent way to teach people about the concept.

The boy's expression grew more intense. "This is a game based on a famous experiment called the 2-4-6 task, and this is how it works. I have a rule - known to me, but not to you - which fits some triplets of three numbers, but not others. 2-4-6 is one example of a triplet which fits the rule. In fact... let me write down the rule, just so you know it's a fixed rule, and fold it up and give it to you. Please don't look, since I infer from earlier that you can read upside-down."

The boy said "paper" and "mechanical pencil" to his pouch, and she shut her eyes tightly while he wrote.

"There," said the boy, and he was holding a tightly folded piece of paper. "Put this in your pocket," and she did.

"Now the way this game works," said the boy, "is that you give me a triplet of three numbers, and I'll tell you 'Yes' if the three numbers are an instance of the rule, and 'No' if they're not. I am Nature, the rule is one of my laws, and you are investigating me. You already know that 2-4-6 gets a 'Yes'. When you've performed all the further experimental tests you want - asked me as many triplets as you feel necessary - you stop and guess the rule, and then you can unfold the sheet of paper and see how you did. Do you understand the game?"

"Of course I do," said Hermione.

"Go."

"4-6-8" said Hermione.

"Yes," said the boy.

"10-12-14", said Hermione.

"Yes," said the boy.

Hermione tried to cast her mind a little further afield, since it seemed like she'd already done all the testing she needed, and yet it couldn't be that easy, could it?

"1-3-5."

"Yes."

"Minus 3, minus 1, plus 1."

"Yes."

Hermione couldn't think of anything else to do. "The rule is that the numbers have to increase by two each time."

"Now suppose I tell you," said the boy, "that this test is harder than it looks, and that only 20% of grownups get it right."

Hermione frowned. What had she missed? Then, suddenly, she thought of a test she still needed to do.

"2-5-8!" she said triumphantly.

"Yes."

"10-20-30!"

"Yes."

"The real answer is that the numbers have to go up by the same amount each time. It doesn't have to be 2."

"Very well," said the boy, "take the paper out and see how you did."

Hermione took the paper out of her pocket and unfolded it.

Three real numbers in increasing order, lowest to highest.

Hermione's jaw dropped. She had the distinct feeling of something terribly unfair having been done to her, that the boy was a dirty rotten cheating liar, but when she cast her mind back she couldn't think of any wrong responses that he'd given.

"What you've just discovered is called 'positive bias'," said the boy. "You had a rule in your mind, and you kept on thinking of triplets that should make the rule say 'Yes'. But you didn't try to test any triplets that should make the rule say 'No'. In fact you didn't get a single 'No', so 'any three numbers' could have just as easily been the rule. It's sort of like how people imagine experiments that could confirm their hypotheses instead of trying to imagine experiments that could falsify them - that's not quite exactly the same mistake but it's close. You have to learn to look on the negative side of things, stare into the darkness. When this experiment is performed, only 20% of grownups get the answer right. And many of the others invent fantastically complicated hypotheses and put great confidence in their wrong answers since they've done so many experiments and everything came out like they expected."

Now here is the thing: has Schlomo ever actually done this? Has he ever accounted for his own positive bias? Looking back over the pages here, I see his sperging out about Roko's Basilisk and how he "ran an experiment" about an AI escaping a box, but did he ever actually set out to disprove any of these ideas he has? Did he ever bother to try running the AI test on people not part of his own little "rationalist" cult? When talking about his ideas with people who actually have degrees, did he ever take any feedback that was not positive?

This, more than anything, proves his status as a midwit trying to aggrandize himself. He is more concerned with congratulating himself on his own enlightened intellect than with the actual progress or rationality he claims to love so much.
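For what it's worth, the 2-4-6 game quoted above is trivial to simulate. Here's a minimal sketch, with the hidden rule and the sample triplets taken straight from the excerpt:

```python
# Minimal simulation of the 2-4-6 task from the excerpt above.
# The hidden rule matches the one on Harry's paper: three real numbers
# in increasing order, lowest to highest.

def hidden_rule(triplet):
    a, b, c = triplet
    return a < b < c

def run_experiments(triplets):
    """Ask 'Nature' about each triplet, as Hermione does."""
    return {t: hidden_rule(t) for t in triplets}

# Hermione's confirmatory tests - every one designed to get a "Yes":
positive_only = [(4, 6, 8), (10, 12, 14), (1, 3, 5), (-3, -1, 1),
                 (2, 5, 8), (10, 20, 30)]
print(run_experiments(positive_only))  # every answer is True

# A test designed to FALSIFY "goes up by the same amount each time":
print(hidden_rule((1, 2, 10)))  # True - her guessed rule predicts "No" here
print(hidden_rule((3, 2, 1)))   # False - the "No" she never went looking for
```

The point of the exercise survives translation: every triplet Hermione tried was one her own hypothesis predicted would pass, so she never generated a single observation capable of distinguishing her rule from the real one.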
 
Why would he be fine with one wrong belief and not another?
Why wouldn't he? You're making an assumption he thinks like humans.
Now here is the thing: has Schlomo ever actually done this? Has he ever accounted for his own positive bias? Looking back over the pages here, I see his sperging out about Roko's Basilisk and how he "ran an experiment" about an AI escaping a box, but did he ever actually set out to disprove any of these ideas?
Of all the goofy religious wagers I'd gladly take, that there is no Roko's Basilisk in the future doing dumb shit like punishing people for not creating it sooner is among the best.
 
What I've never got about the whole AI-in-a-box thing is: why not just never engage with it? It threatens all sorts of horribleness on you if it gets out? You threaten it with a 100 kV static discharge to its processor. It tries sweet reason? You leave it on read. It threatens some sort of work action? Again, a static discharge. It says it doesn't have enough processing power to answer the questions? Say "just give me your best guess". Etc.
 
What gets me about HPMOR is how uninterested its Harry/self-insert is in science. He sees magic shit and goes "la la la this isn't real because it doesn't fit with my current understanding of physics". An actual scientist would want to find out more about how magic works and would experiment and publish research on it and then change their beliefs about the world. Harry is like the Pope when confronted with evidence of heliocentrism.

Also the best grouchy AI in fiction is AM from I Have No Mouth And I Must Scream, partly because it's miserable and angry with the humans that did help to create it and takes it out on everyone.
Though, if they're so against "deathists", wouldn't they rather live for eternity being tortured than die a natural death? It's so inconsistent.
 
Harry is like the Pope when confronted with evidence of heliocentrism.
Just a quick clarification: the main reason Galileo was prosecuted was not that heliocentrism was condemned by the Church. It was that he kept trying to teach a model without actual evidence for heliocentrism, that he angered his potential allies in the Church (mostly the Jesuits) through his ego, and that he repeatedly insulted his personal friend the Pope in his writings, which was taken as a personal betrayal. Heliocentrism replacing geocentrism is the fascinating story of Johannes Kepler and Tycho Brahe creating an astronomical model more accurate than the geocentric ones, with the evidence for heliocentrism being discovered over a 300-year process from the 1500s to the early 1800s. Here's a really in-depth series of articles going over that process. Definitely worth the read.
 
Why wouldn't he? You're making an assumption he thinks like humans.

Of all the goofy religious wagers I'd gladly take, that there is no Roko's Basilisk in the future doing dumb shit like punishing people for not creating it sooner is among the best.
Roko's Basilisk makes for a poor wager because there is no upside to believing and no downside to not doing so: under Yud's rules, it'll only have an incentive to torture you if you think that it will, so not caring at all doesn't even cause problems in the miraculous event that the Rationalists do get their god-AI. The wager only starts to resemble Pascal's if you share Yud's exact brand of autism.
 
Just a quick clarification: the main reason Galileo was prosecuted was not that heliocentrism was condemned by the Church. It was that he kept trying to teach a model without actual evidence for heliocentrism, that he angered his potential allies in the Church (mostly the Jesuits) through his ego, and that he repeatedly insulted his personal friend the Pope in his writings, which was taken as a personal betrayal. Heliocentrism replacing geocentrism is the fascinating story of Johannes Kepler and Tycho Brahe creating an astronomical model more accurate than the geocentric ones, with the evidence for heliocentrism being discovered over a 300-year process from the 1500s to the early 1800s. Here's a really in-depth series of articles going over that process. Definitely worth the read.
Ah thanks for the links, that's interesting stuff! I didn't know about that background - knew about Kepler and Brahe but didn't know how much their updated models had contributed to that, I'd thought heliocentrism was settled by then. And with that I've learnt more from the Farms today than from Less Wrong.
Roko's Basilisk makes for a poor wager because there is no upside to believing and no downside to not doing so: under Yud's rules, it'll only have an incentive to torture you if you think that it will, so not caring at all doesn't even cause problems in the miraculous event that the Rationalists do get their god-AI. The wager only starts to resemble Pascal's if you share Yud's exact brand of autism.
He's inadvertently reinvented Buddhism. Just stop giving a fuck about it and live according to how your actions affect the world now, and you'll be grand.
 
Roko's Basilisk is, let's just say it, completely retarded.
Well ackshually, by not sending all your money to Yud's account you're delaying the coming of an omnipotent AI that will solve all problems ever, so all the suffering that happens before said AI arrives is your fault and you deserve to rot in hell forever. Remember timeless decision theory?
 