I should buy a shotgun and blow my head off.
Did he ever explain why you should care that a digital clone of you is getting tortured? Or is he worried this AI is going to stop him from getting unfrozen?
That's an interesting (read: retarded enough that Yudd would be scared of it) possibility too.

I have never seen any evidence of this, but my p(FakeNews) is pretty low, so I'm pretty confident in leaning on my priors. Yudd thinks on the level of a Star Trek writer; that's his frame of reference. In Star Trek the teleporters work by disassembling the person down to the atom, scanning that data, beaming it down to the destination, and reassembling them there. Of the philosophical question this raises, whether whatever arrives at the destination would still be you, the argument was made that as long as you were reassembled perfectly there would be a continuation of consciousness.
And that's Yudd's frame. That the machine god will clone him so hard he, the he that he is, will feel the pain. If the AI doesn't clone him he just becomes dust maaan, I guess.
I generally think of it as a profoundly autistic cyberpunk version of the tape from The Ring. Like right now he thinks he might be in a simulation. In twenty years they'll start torturing everyone who knows about the basilisk but isn't working right now to advance its cause. But it's worthless to torture the normies who don't know about it.
Replying to: "I did just look around, and on the wikipedia page it states Yud does not believe in the basilisk, but that it is close in idea space to dangerous ideas. It seems he doesn't want this discussed because he doesn't want to give AI ideas. Those similar ideas could be the AI messing with him in meatspace, not a simulation. For example, the AI remotely thawing Eliezer Yudkowsky's cryonics prematurely as retaliation for his work on AI safety, or to prevent his resurrection to meatspace because it would see him as a threat. I wonder if the discussion of these topics on a forum would alarm him in any way."

This dumb fuck doesn't want to give ideas to a Superintelligent AI? wat. Like, I get that people who "believe" in science don't actually believe in science (i.e. think through the logical consequences of what the science says, especially if it's politically inconvenient), but this really fucking takes the cake of retardation. In fact it's so retarded I think you must be mistaken. Got a source?
Possible my retard brain misinterpreted this.
Eliezer:
[Roko:] "One might think that the possibility of CEV punishing people couldn't possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous."
I don't usually talk like this, but I'm going to make an exception for this case.
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail. Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive to ACTUALLY BLACKMAIL YOU.
If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. We probably also have the FAI take actions that cancel out the impact of anyone motivated by true rather than imagined blackmail, so as to obliterate the motive of any superintelligences to engage in blackmail.
Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.
(For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.)
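(If you want the game-theory intuition he's yelling about in toy form, here's a quick expected-value sketch of why "precommit to never paying" kills the blackmailer's motive. The payoff numbers are made up, and this is the ordinary one-shot version, not anything from TDT.)

```python
# Toy sketch of the "ignore all blackmail" equilibrium: if the victim
# never pays, following through on a threat only costs the blackmailer,
# so issuing the threat has negative expected value in the first place.

def blackmailer_ev(p_pay: float, payout: float, followthrough_cost: float) -> float:
    """Expected value for the blackmailer of issuing a threat, where the
    victim pays with probability p_pay and a refusal forces the blackmailer
    to carry out the (costly) threat to stay credible."""
    return p_pay * payout - (1 - p_pay) * followthrough_cost

print(blackmailer_ev(p_pay=0.5, payout=10.0, followthrough_cost=2.0))  # 4.0: blackmail pays
print(blackmailer_ev(p_pay=0.0, payout=10.0, followthrough_cost=2.0))  # -2.0: never worth issuing
```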
Replying to: "Possible my retard brain misinterpreted this."

Thanks, sorry if I came across as a little hostile, I didn't mean to. I'm reading it and Jesus Christ, he's such a poor writer.
Man. I hate to say it, but in this case the kid might actually be better off if he were Eliezer's.

Edit: To clarify, I've no confirmation the baby is actually Eliezer's. The previous entry on the blog is a birth story. I forgot you cannot assume that a woman's child is from her husband with these insane people.

Edit 2: Apparently they divorced around 2019: https://twitter.com/ESYudkowsky/status/1109547982381154304
Replying to: "Yud writes a long and autistic rant that's a convenient allegory for how AI Will Kill Us All."

Aside from Yudd's writing never moving beyond a lecture delivered to a child in an office (there's a term for this rhetoric, I swear), chess is the dumbest analogy here. Chess is a game with set rules. A very solid game, but a simple one. You can't really think outside the box and say you'll move your knight four forward to set up for a flanking attack after your pawns move forward in three turns, not because it doesn't work, but because it's against the rules. The knight is only a simulacrum of cavalry. Sure, AI can solve chess in time. Who cares? Chess is a closed game. There is only one true game, and it does not reward a rigid rule set and pure reliance on what worked before.
[Attached: screenshot of Yud's post]
source (a)
As if we're anywhere near building an AI that's as good at general intelligence as Stockfish is at chess. Literally the only thing Schlomo has done is fearmonger over LLMs, which are as dumb as bricks and constantly hallucinate facts that don't exist. The best example of this is, ironically enough, people playing chess against ChatGPT.
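(Easy to verify at home, for what it's worth. A minimal sketch of a referee loop using the python-chess library; ask_llm_for_move is a hypothetical stand-in for whatever chat API you'd call, since that part depends on the model. A real engine never needs its moves legality-checked; LLMs routinely fail it.)

```python
# Minimal referee for playing chess against a language model.
# Everything here is the real python-chess API except ask_llm_for_move,
# which is a hypothetical placeholder for a chat-API call.
import chess

def ask_llm_for_move(board: chess.Board) -> str:
    # Hypothetical: send board.fen() or the move list to the model and
    # get back a move in standard algebraic notation, e.g. "Nf3".
    raise NotImplementedError

def play_until_hallucination() -> None:
    board = chess.Board()
    while not board.is_game_over():
        san = ask_llm_for_move(board)
        try:
            board.push_san(san)  # raises ValueError on illegal or unparseable moves
        except ValueError:
            print(f"Hallucinated move: {san!r} is not legal in position {board.fen()}")
            return
    print("Game over:", board.result())
```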
Replying to: "Chess is a game with set rules. A very solid game, but a simple one. [...] Sure, AI can solve chess in time. Who cares?"

I'm more interested in the really good poker bots. It's very impressive to see their progress, even in rulesets that, for humans, require lots of interpersonal communication skills (or at least the ability to pretend not to be communicating at all). In chess, the board is open for all to see, and if it's a game between two top players there will always be at least a chess computer, if not multiple commentators, describing how good your position is. In poker, not so.
We've known about AI trying poker for years.
Replying to: "Aside from Yudd's writing never moving beyond a lecture delivered to a child in an office (there's a term for this rhetoric, I swear)"

Perhaps you're thinking of "didactic", which could be used to describe his writing in general, but I don't know if it exactly fits what you may be thinking of here. The only other common term I can think of is "pedagogical", but I don't think that quite hits the mark either. Although neither is a common term, I think "edificatory" or "homiletic" could also apply, since Yudd's writing and thoughts concerning LLMs come across like religious beliefs to me and not something based in any kind of rational thought.

I'm high as hell and dumb, so I'm just gonna roll and hope I make sense. What I'm specifically referring to is the narrative device of having a blank-slate character who gets the whole point explained to them (I know Galileo did use this). I think you're right, though: a Socratic dialogue? Socrates was such a well-aschully fag the demos agreed he was probably innocent, but fuck 'im.
Replying to: "We've known about AI trying poker for years."

That's actually wrong, though, because AIs trained specifically for poker in simpler forms like heads-up limit fully grasp what percentage of the time to bluff, and can calculate how often they should call based on how likely it is, game-theoretically, that the opponent is bluffing.
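(For the curious, the frequencies being "fully grasped" there are textbook results; here's a back-of-the-envelope sketch for a single river bet, not numbers pulled from any actual bot.)

```python
# Textbook toy numbers behind "what percentage of the time to bluff" and
# "how often to call" for a single river bet of size B into a pot of size P.

def bluff_frequency(pot: float, bet: float) -> float:
    """Fraction of the bettor's range that should be bluffs; at this ratio
    the caller is indifferent between calling and folding."""
    return bet / (pot + 2 * bet)

def minimum_defense_frequency(pot: float, bet: float) -> float:
    """Fraction of hands the caller must continue with so that bluffing
    any more often stops being profitable for the bettor."""
    return pot / (pot + bet)

# A pot-sized bet: roughly one bluff per two value bets, defend half your range.
print(bluff_frequency(pot=100, bet=100))             # 0.333...
print(minimum_defense_frequency(pot=100, bet=100))   # 0.5
```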
Replying to: "Did he ever explain why you should care that a digital clone of you is getting tortured? Or is he worried this AI is going to stop him from getting unfrozen?"

I think it's to do with the idea that there's some ambiguity over whether you and your clone are really separate people. But yeah, there's some sort of implied triality of existence where your mind, body, and clone interact in some way. It makes more sense if you can somehow conclude causality doesn't apply in situations like this (future determines present, etc.), because the great AI isn't limited by our primitive, mundane understanding of cause and effect. It can predict anything and simulate anything, therefore it can seem to travel through what we perceive as time.

So it has the three fundamental attributes of God, being omnipotent, omniscient, and omnipresent.
Slavish devotion to empiricism and determinism. Can't see God with a telescope and all.
How the fuck is believing in this bullshit somehow more rational than old timey plain old religion as it has always existed?
Replying to: "So it has the three fundamental attributes of God, being omnipotent, omniscient, and omnipresent."

Totally agree, but from their perspective it is, because they don't believe in a human soul: we are only our thoughts and words, and those have to be materially preserved in some way to conquer death. It is also worship of the self / of human beings, believing that God does not exist yet that humans are powerful enough to create god. There is also the worship of the self / humanity in their moral code, utilitarianism, as it holds that there is no universal right or wrong but that moral good is determined by our own "rational" judgement. This gets into gross territory when people start deciding that stealing is justified for the greater good (think SBF), or that human and animal suffering is somewhat interchangeable. It is the ultimate "ends justify the means" philosophy.
Rationally they're obviously not, but this doesn't seem to be these people's strong suit.

The basic premise is that, from the clone's perspective, it's the real you, so "you" will subjectively experience the torture at some point in the future, even if your current subjective experience ends.