Eliezer Schlomo Yudkowsky / LessWrong

Did he ever explain why you should care that a digital clone of you is getting tortured? Or is he worried this AI is going to stop him from getting unfrozen?

I have never seen any evidence of this, but my p(FakeNews) is pretty low, so I'm pretty confident in leaning on my priors. Yudd thinks on the level of a Star Trek writer; that's his frame of reference. In Star Trek the teleporters work by disassembling the person down to the atom, scanning that data, beaming it to the destination, and reassembling them there. Of the philosophical questions raised about whether whatever arrived at the destination would still be you, the argument was made that as long as you were reassembled perfectly there would be a continuation of consciousness.

And that's Yudd's frame. That the machine god will clone him so hard he, the he that he is, will feel the pain. If the AI doesn't clone him he just becomes dust maaan, I guess.
That's an interesting (read: retarded enough that Yudd would be scared of it) possibility too.

However, I think his specific concern was that you, as an individual, would never know if you were currently a simulated consciousness or not.

So you'd be like "oh shit, is this real life or is this just fantasy" and then start crying and break down and start working for the cult of the basilisk just in case you were in a simulation and therefore might end up getting tortured.

Like right now he thinks he might be in a simulation. In twenty years they'll start torturing everyone who knows about the basilisk but isn't working right now to advance its cause. But it's worthless to torture the normies who don't know about it.
 
Like right now he thinks he might be in a simulation. In twenty years they'll start torturing everyone who knows about the basilisk but isn't working right now to advance its cause. But it's worthless to torture the normies who don't know about it.
I generally think of it as a profoundly autistic cyberpunk version of the tape from The Ring.
 
I did just look around, and the Wikipedia page states Yud does not believe in the basilisk but that it is close in idea space to dangerous ideas. It seems he doesn't want this discussed because he doesn't want to give AI ideas. Those similar ideas could be AI messing with him in meatspace, not a simulation. For example, AI remotely thawing Eliezer Yudkowsky's cryonics prematurely as retaliation for his work on AI safety, or to prevent his resurrection to meatspace because it would see him as a threat. I wonder if the discussion of these topics on a forum would alarm him in any way.
This dumb fuck doesn't want to give ideas to a Superintelligent AI? wat. Like I get the whole people who "believe" in science don't actually believe in science (i.e. think through the logical consequences of what the science says, esp. if it's politically inconvenient) thing, but this really fucking takes the cake of retardation. In fact it's so retarded I think you must be mistaken. Got a source?
 
This dumb fuck doesn't want to give ideas to a Superintelligent AI? wat. Like I get the whole people who "believe" in science don't actually believe in science (i.e. think through the logical consequences of what the science says, esp. if it's politically inconvenient) thing, but this really fucking takes the cake of retardation. In fact it's so retarded I think you must be mistaken. Got a source?
Possible my retard brain misinterpreted this.

Eliezer:
[Roko:] "One might think that the possibility of CEV punishing people couldn't possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous."


I don't usually talk like this, but I'm going to make an exception for this case.


Listen to me very closely, you idiot.


YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.


There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail. Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive to ACTUALLY BLACKMAIL YOU.


If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. We probably also have the FAI take actions that cancel out the impact of anyone motivated by true rather than imagined blackmail, so as to obliterate the motive of any superintelligences to engage in blackmail.


Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)


You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.


(For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.)
 
Edit: to clarify, I've no confirmation the baby is actually Eliezer's. The previous entry on the blog is a birth story. I forgot you cannot assume that a woman's child is from her husband with these insane people.

Edit 2: apparently they divorced around 2019 https://twitter.com/ESYudkowsky/status/1109547982381154304
Man. I hate to say it but in this case the kid might actually be better off if he were Eliezer's.
 
Yud writes a long and autistic rant that's a convenient allegory for how AI Will Kill Us All.

stockfish.png
source (a)

As if we're anywhere near building an AI that's as good at general intelligence as Stockfish is at Chess. Literally the only thing Schlomo has done is fearmonger over LLMs, which are as dumb as bricks and constantly hallucinate facts that don't exist. The best example of this is, ironically enough, people playing Chess against ChatGPT.
 
Yud writes a long and autistic rant that's a convenient allegory for how AI Will Kill Us All.

stockfish.png
source (a)

As if we're anywhere near building an AI that's as good at general intelligence as Stockfish is at Chess. Literally the only thing Schlomo has done is fearmonger over LLMs, which are as dumb as bricks and constantly hallucinate facts that don't exist. The best example of this is, ironically enough, people playing Chess against ChatGPT.
Aside from Yudd's writing never moving beyond a lecture delivered to a child in an office (there's a term for this rhetoric, I swear), chess is the dumbest analogy here. Chess is a game with set rules. A very solid game, but a simple one. You can't really think outside the box and say move your knight four forward to set up for a flanking attack after your pawns move forward in three turns, not because it doesn't work, but because it's against the rules. The knight is only a simulacrum of cavalry. Sure, AI can solve chess in time. Who cares? Chess is a closed game. There is only one true game, and it does not reward a rigid rule set and pure reliance on what worked before.
 
Chess is a game with set rules. A very solid game, but a simple one. You can't really think outside the box and say move your knight four forward to set up for a flanking attack after your pawns move forward in three turns, not because it doesn't work, but because it's against the rules. The knight is only a simulacrum of cavalry. Sure, AI can solve chess in time. Who cares?
I'm more interested in the really good Poker bots; it's very impressive to see their progress even in rulesets that require lots of interpersonal communication skills (or at least the ability to pretend not to be communicating at all) for humans. In chess, the board is open for all to see, and if it's a game between two top players there will always be at least a chess computer, if not multiple commentators, describing how good your position is. In poker, not so.
 
I'm more interested in the really good Poker bots; it's very impressive to see their progress even in rulesets that require lots of interpersonal communication skills (or at least the ability to pretend not to be communicating at all) for humans. In chess, the board is open for all to see, and if it's a game between two top players there will always be at least a chess computer, if not multiple commentators, describing how good your position is. In poker, not so.
We've known about AI trying poker for years.
 
Aside from Yudd's writing never moving beyond a lecture delivered to a child in an office (there's a term for this rhetoric, I swear)

Perhaps you're thinking of "didactic", which could be used to describe his writing in general, but I don't know if it exactly fits what you may be thinking of here. The only other common term I can think of is "pedagogical", but I don't think that quite hits the mark either. Although neither is a common term, I think that "edificatory" or "homiletic" could also apply, since Yudd's writing and thoughts concerning LLMs come across like religious beliefs to me and not something based in any kind of rational thought.
 
Perhaps you're thinking of "didactic", which could be used to describe his writing in general, but I don't know if it exactly fits what you may be thinking of here. The only other common term I can think of is "pedagogical", but I don't think that quite hits the mark either. Although neither is a common term, I think that "edificatory" or "homiletic" could also apply, since Yudd's writing and thoughts concerning LLMs come across like religious beliefs to me and not something based in any kind of rational thought.
I'm high as hell and dumb, so I'm just gonna roll and hope I make sense. What I'm specifically referring to is the narrative device of having the whole point explained to a blank-slate character (I know Galileo did use this). I think you're right here, a Socratic dialogue? Socrates was such a well aschully fag the demos agreed he was probably innocent but fuck 'im.

It's the last-minute dump and overt conversation about the message that kills me, even more so because he writes nothing to give you a connection to the characters to warrant a payoff like this and just skips to the climax. Such disrespect for your audience and your own work. It's like making meatloaf and instead of mixing the onions and peppers in, you just dump them on top when it comes out of the oven. Show me your view, don't words words words it at me.
 
We've known about AI trying poker for years.
That's actually wrong, though, because AI trained specifically for poker in simpler formats like heads-up limit fully grasps what percentage of the time to bluff, and can calculate how often it should call from a game-theoretic estimate of how likely the opponent is to be bluffing.

And the closer the AI's strategy gets to a Nash equilibrium, the more the opponent is forced to conform to it or automatically lose.

The best AI bots are, at least currently, at least better than the really bad players you'd want to play against.

They're going to get better, though, probably at alarming speed.
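
To make the indifference math concrete, here's a rough sketch of the river calculation these bots are built around (purely illustrative, not code from any actual bot's codebase; the pot and bet sizes are made up):

```python
# Toy game-theory-optimal river math for heads-up poker (illustrative sketch only).
# With a pot of size `pot` facing a bet of size `bet`:
#  - the bettor's bluffs should make up bet / (pot + 2*bet) of his betting range,
#    so calling with a bluff-catcher breaks exactly even;
#  - the caller must continue with pot / (pot + bet) of his range (the "minimum
#    defense frequency"), so a pure bluff breaks exactly even.

def optimal_bluff_fraction(pot: float, bet: float) -> float:
    """Fraction of the betting range that should be bluffs."""
    return bet / (pot + 2 * bet)

def minimum_defense_frequency(pot: float, bet: float) -> float:
    """Fraction of the time the caller must continue to stop profitable bluffing."""
    return pot / (pot + bet)

if __name__ == "__main__":
    pot, bet = 100.0, 100.0  # a pot-sized bet
    print(f"bluff {optimal_bluff_fraction(pot, bet):.0%} of betting range")  # ~33%
    print(f"defend {minimum_defense_frequency(pot, bet):.0%} of range")      # 50%
```

Against a pot-sized bet that works out to bluffing about a third of the time and defending half your range; the near-equilibrium bots are effectively grinding out numbers like these for every betting line.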
 
I read a post containing a link to this "AI position" chart maker; the questions and definitions are somewhat suspect, but I had a go anyway.
ai-views-snapshots.png
ai-views-snapshots_fixed.png
My rationale was that the sooner an AI system gets created, the wackier and probably more dangerous the situation would be. Hopefully I did not fail statistics forever by putting a conditional probability (expressed in absolute percentages, i.e. 100 bps) higher than the probability it's conditioned on.
Edit: and I totally misunderstood the questions. "Within 10 years" means 10 years after an AI system is built, not an AI system being built less than 10 years from now. Uploaded a "fixed" version that's at least internally consistent, but now I think the generator sucks.
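
For anyone else who wants to sanity-check their chart before posting, the rule I was worried about boils down to this (a throwaway sketch; the numbers and variable names are placeholders of mine, not the generator's actual wording):

```python
# Toy consistency check for chained probability estimates (illustrative only;
# the numbers are placeholders, not my actual answers or the generator's wording).

p_agi_built = 0.70        # P(an AGI-level system ever gets built)
p_doom_given_agi = 0.30   # P(catastrophe within 10 years *after* it's built)

# The unconditional (joint) probability can never exceed the probability
# of the event it's conditioned on: P(A and B) = P(B|A) * P(A) <= P(A).
p_doom_unconditional = p_doom_given_agi * p_agi_built

assert p_doom_unconditional <= p_agi_built, "internally inconsistent estimates"
print(f"P(catastrophe within 10 years of AGI, unconditionally) = {p_doom_unconditional:.0%}")
```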
 
Did he ever explain why you should care that a digital clone of you is getting tortured? Or is he worried this AI is going to stop him from getting unfrozen?
I think it's to do with the idea that there's some ambiguity over whether you and your immortal soul AI clone are the same person or not. Rationally they're obviously not but this doesn't seem to be these people's strong suit.

One way of looking at it is that because it's immortal and indestructible, your clone is actually "more real" than your meatself, which will soon return to dust while this more robust version persists. Therefore the righteous man lives his life to ensure his clone benefits in the coming glorious world, as that's what's really important in the long term.

Another angle is that, thanks to the inherent limitations of cognition, we have no way of knowing whether we actually are the meatself and not the AI clone locked in a simulation of the past. This experience could be an elaborate test to see if we can be redeemed in the sight of the great AI, and if we screw it up then eternal torture awaits.

Or maybe it's just a bunch of superstitious bullshit.

But yeah there's some sort of implied triality of existence where your mind, body and clone interact in some way. It makes more sense if you can somehow conclude causality doesn't apply in situations like this (future determines present etc) because the great AI isn't limited by our primitive mundane understanding of cause and effect. It can predict anything and simulate anything therefore it can seem to travel through what we perceive as time.
 
But yeah there's some sort of implied triality of existence where your mind, body and clone interact in some way. It makes more sense if you can somehow conclude causality doesn't apply in situations like this (future determines present etc) because the great AI isn't limited by our primitive mundane understanding of cause and effect. It can predict anything and simulate anything therefore it can seem to travel through what we perceive as time.
So it has the three fundamental attributes of God, being omnipotent, omniscient, and omnipresent.

How the fuck is believing in this bullshit somehow more rational than old timey plain old religion as it has always existed?
 
So it has the three fundamental attributes of God, being omnipotent, omniscient, and omnipresent.

How the fuck is believing in this bullshit somehow more rational than old timey plain old religion as it has always existed?
Slavish devotion to empiricism and determinism. Can't see God with a telescope and all.

Edit: As if on cue.
Screenshot 2023-12-16 at 12-39-21 (1) Aella (@Aella_Girl) _ X.png
If God is real why does He punish the wicked? Not very effective altruist of Him.

Double edit: Just now picked up on 'The versions of Christianity popular in the United States that believe in hell'. What the hell is this bitch talking about?
 
So it has the three fundamental attributes of God, being omnipotent, omniscient, and omnipresent.

How the fuck is believing in this bullshit somehow more rational than old timey plain old religion as it has always existed?
Totally agree, but from their perspective it is because they don't believe in a human soul; we are only our thoughts and words, and this has to be materially preserved in some way to conquer death. It is also worship of the self / human beings, believing that God does not exist yet that humans are powerful enough to create a god. There is also the worship of the self / humanity in their moral code, utilitarianism, as it holds that there is no universal right or wrong but that moral good is determined by our own "rational" judgement. This gets into gross territory when people start deciding that stealing is justified for the greater good (think SBF) or that human and animal suffering is somewhat interchangeable. It is the ultimate "ends justify the means" philosophy.

They believe this worship of humans (they wouldn't call it worship, of course) is more rational because they can see and feel human beings. They are sure of their own existence and their own intelligence, whereas they cannot see God. Therefore it is perfectly rational in their eyes.

Anyway, if you think suffering can have a purpose, pleasure isn't inherently good, and stealing and murder are always wrong, you just might be a deontologist.
 
Rationally they're obviously not but this doesn't seem to be these people's strong suit.
The basic premise is that, from the clone's perspective, it's the real you, so "you" will subjectively experience the torture at some point in the future, even if your current subjective experience ends.

It's a silly premise. An AI advanced enough to generate endless virtual clones of people in order to torture them wouldn't want to waste the cycles on such a pointless exercise.
 