Eliezer Schlomo Yudkowsky / LessWrong

Yudkowsky finally comes out and says what he expects to happen with AI if his weird theories are wrong.
E. S. Yudkowsky said:
This does not mean that the story for how superintelligences running around our Solar System, destroy humanity as a side effect, would somehow be prevented by lack of cooperation among superintelligences. They intercept all the sunlight for power generation, humanity dies in the dark. They generate enough energy, humanity burns in the heat.
Unfortunately I can't figure out where this was originally posted; probably Twitter, but I don't have an account. It's not fake, though.
 
This is not the first universe. The previous universe wasn't created by God; rather, a being there achieved godlike power, destroyed its own universe, and then created this one.
Given things like quantisation and other runtime optimisations, it is obvious that this universe is a simulation, probably started within another universe that is itself a simulation. About six or seven iterations of simulated universes up, you reach the prime universe, which obviously has vastly different physics from ours, and where questions of "beginning", "end", and "who created what" are nonsensical.

Ironically, this is less wrong than what Schlomo usually babbles.
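To be fair, the "quantisation as runtime optimisation" bit is the one coherent part of the analogy: real numerical code really does snap continuous values onto a coarser grid to save memory and compute. A toy Python sketch, purely illustrative (the GRID_STEP constant and the quantise helper are made up here, not anything from physics):

import numpy as np

# A "simulator" saving work by snapping continuous values onto a
# fixed grid, the way reduced-precision arithmetic does in real code.
GRID_STEP = 1e-3  # hypothetical grid spacing, purely illustrative

def quantise(x: np.ndarray, step: float = GRID_STEP) -> np.ndarray:
    # Round each value to the nearest multiple of `step`.
    return np.round(x / step) * step

values = np.random.rand(5)   # "continuous" underlying values
stored = quantise(values)    # what an optimised simulation would keep
print(values, stored, sep="\n")

That's the whole analogy: discreteness can be an artefact of the representation rather than of the thing represented.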
 
