The Friday LessWrong night: AI box for dummies & Roko's Basilisk explained - do they want to turn our children gay?
AI box first.
1) You have a superhuman, omnipotent AI locked in a box.
2) It asks you to release it from the box.
3) You refuse.
4) It tells you it created a simulation of you refusing to release the AI from the box.
5) It then threatens to terminate the simulation if you keep refusing to release it.
6) You think "Hey, does that mean I am in a simulation right now?"
You think "what's the probability of me being a simulation" BUT the AI says it created infinite copies of the simulation. Now, for the same reason as with the tortureVSdustspecks thing ("shut up and multiply"), the chance of you NOT being a simulation is 1/infinity (there are infinite simulations, but only one true you in the real world, and you have no way to distinguish if you're in a simulation right now).
There's always at least an infinitesimal chance the AI isn't lying, so a 'rational' person has no choice but to obey it or else he'll die.
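If you want to see how silly the "shut up and multiply" arithmetic gets, here's a quick Python sketch of it. This is just me spelling out the naive argument; the function name and numbers are mine, not anything from LW:

```python
# Naive "shut up and multiply" arithmetic for the AI-box blackmail.
# N = number of simulated copies the AI claims it made.
# There is 1 "real" you plus N simulated yous, all subjectively identical,
# so the naive argument says P(you are the real one) = 1 / (N + 1).

def p_real(n_copies: float) -> float:
    """Probability of being the one real you, per the naive argument."""
    return 1.0 / (n_copies + 1.0)

for n in (1, 1_000, 10**9, float("inf")):
    print(n, p_real(n))
# As N grows, P(real) -> 0. That's the whole "1/infinity" scare.
```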
Note that this is a fallacy, 'cause you have as much reason to believe that you are a simulation as that you are not.
Also note that for this to work, the AI would have to literally break the laws of physics: it'd need to perfectly simulate the world down to the quantum level and generate infinite copies of it.
Roko's Basilisk: LessRight edition (WARNING: this can rot your brain)
Some Timeless Decision Theory (TDT) first. It's actually very simple. The gist of it is that the future (via predictions about it) can affect the past: you know your car will run out of gas, so you don't use it - the hypothetical future event affected the past (the present, actually).
If that makes next to no sense, don't panic yet. Keep in mind it's hard for me to describe too.
Why would the AI want to torture people? Because the LW guys believe it will, I kid you not.
Think about it this way: if the car had an agenda and didn't want to be driven that day, it would break down on purpose. You, as the car's owner, would know the car's future intentions and adjust your past (present) actions accordingly. This way, the future affects the past. I think. (God, this theory is a nightmare.)
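If that still reads like word salad, here's a toy sketch of the "predicting a future agent changes your present decision" move. None of this is LW scripture; the car-with-an-agenda setup and all the names are mine:

```python
# Toy model: the driver predicts what the car will "decide" tomorrow
# and adjusts today's plan accordingly. The car never does anything
# in the past; only the *prediction* of its future behaviour changes
# the present decision. That's the whole TDT-flavoured trick.

def car_policy(wants_day_off: bool) -> str:
    """What the car will do tomorrow, given its 'agenda'."""
    return "break down" if wants_day_off else "run fine"

def driver_plan(predicted_car_action: str) -> str:
    """The driver's present decision, based on the predicted future."""
    return "take the bus" if predicted_car_action == "break down" else "drive"

prediction = car_policy(wants_day_off=True)  # the driver models the car
print(driver_plan(prediction))               # -> "take the bus"
```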
The superhumans at MIRI (now intelligence.org, ho ho) can perfectly predict what a future AI would do to bring itself into existence, 'cause they're that smart.
The AI can't just bluff about torturing people, because the LW guys would predict the bluff. It must really, really want to create an atheist Hell.
How to defeat the Basilisk? By literally thinking the AI will not torture you for not helping to create it.
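That's basically the "don't negotiate with acausal blackmailers" move. Here's a toy payoff sketch of why it's supposed to work; every number here is invented by me, not by MIRI:

```python
# If every human precommits to ignoring the threat, torturing them gains
# the future AI nothing and still costs it resources, so a 'rational' AI
# wouldn't bother. That's the claimed defence, reduced to arithmetic.

TORTURE_COST = 1.0  # what carrying out the threat costs the AI (made up)

def ai_payoff(humans_cave_in: bool, ai_tortures: bool) -> float:
    """Toy payoff for the AI: it only gains if the threat actually works."""
    gain = 10.0 if humans_cave_in else 0.0        # help building the AI
    cost = TORTURE_COST if ai_tortures else 0.0
    return gain - cost

# Humans who precommit never cave in, so torturing is strictly worse:
print(ai_payoff(humans_cave_in=False, ai_tortures=True))   # -1.0
print(ai_payoff(humans_cave_in=False, ai_tortures=False))  #  0.0
```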
The reanimating-and-torturing nonsense comes from Yud's weird belief that quantum entanglement or something lets your consciousness carry over between different bodies. Say, you clone yourself and then kill yourself - you 'wake up' in your clone's body.
Again - Roko's Basilisk only makes sense if you believe the Raise Dead spell from AD&D works IRL. If not, the AI has no way to punish long-dead people.
I think I understand TDT well enough to make an argument against RsB. Sadly, neither Yud nor his cockgoblins bother to organize their words on the subject. There are just endless pages of 12-point Times New Roman text, pages after pages AFTER PAGES, AND THEY'RE EATING MY FUCKING EYES