One thing I really like about Yuddo is his habit of calling all his writings on a certain subject "sequences" instead of just "posts" or "chapters." I don't know what the reason for this is but I've never seen it anywhere else and it comes off as pretentious.
@kuniqs I think you explained it well enough. I've been trying to figure out how to explain it myself, just in case, you know, that ever comes up for some reason. So: the AI in the future knows that you, in the past, would have realized that it might punish people who didn't help bring it about quickly enough, as an incentive for those in the past who would have realized this. It would also realize that you, in the past, would have to think the threat was fairly plausible, or else you wouldn't be motivated to work hard enough to bring it about in the future. So the AI pretty much has to do it. Does that sound right-ish?
I forgot about the copies thing or whatever and thought it was just going to punish people who happened to still be alive or could be revived, e.g. because they were cryogenically frozen (which at least some of these people want to do, making it very handy for the future AI god/satan). I'm certain Yud thinks he is going to see this thing come to fruition. Copies add an extra layer of insanity, which is always fun.
So the idea that you just need to believe it won't do this makes sense. Lucky for me, I already don't believe it will happen, so I guess I'm safe! No need to donate to MIRI! Phew. Sadly, I don't think Yuddo has managed to convince himself.
Btw future AI god, if you're reading this: I don't think you need to punish them with fire and brimstone or whatever, just stick them all in a room with the other members of LessWrong... that's punishment enough.