Eliezer Schlomo Yudkowsky / LessWrong

He's an idiot who's under the delusion he's smart, and he's got a gaggle of followers who are similarly not that bright but think they're on the cutting edge.

He's reasonably smart, and some of his ideas would be at least reasonable, in the context of transhumanist bibble-babble, if they were presented as just that: speculative ideas.

What's laughable is his attempt to dress these ideas up in pseudo-academic jargon and present them as if they could pass muster in a peer-reviewed journal, then actually submitting a paper to one, then crying sour grapes when it gets rejected everywhere, then self-publishing it in a bullshit journal he created himself.

There's also the fact that when he goes from general ideas to the specific, he makes proposals that people who actually work in AI say have no technical merit whatsoever.

That's where he goes from being a speculative science popularizer to being a humbug.
 
Here's Less Wrong having an insightful discussion on Effects of Castration on the Life Expectancy of Contemporary Men.

Posted by "Fluttershy".

*rationally and bayesianally amputates my own balls*

A better sense of rationality would be to get off your ass and do some exercise instead of sperging about getting a robot waifu from a singularity event that may or may not ever happen, depending on how the development of computers and robotics goes. Or go out and make friends, or find something to take care of. Both of those also make you live longer, and neither makes you look like an idiot.
 
Spite, maybe? It's the closest thing I can think of, and even THAT is monumentally stupid.

But would that include the people who grew the food that fed the scientists or whatever? Or Steve the caretaker?

All in all it's just the Technosperg version of the Rapture: those that do not follow & support our special ideals will be punished.

Lesswrongers COULD end a civil war; all the other factions might team up to destroy them.
 
I'm trying to remember; I think the AI is supposed to do it because it's angry it wasn't brought into existence sooner, though I'm not sure why it would care. More likely it's because Yudkowsky thinks the AI could solve suffering on an unprecedented scale, so the logic is presumably that the AI is angry that people who knew about it and what it could do didn't create it sooner and therefore didn't prevent a massive amount of human suffering. Yudkowsky's plan, if I'm remembering correctly, is to "solve" logic, because the AI wouldn't have a concept of human emotion but could possibly be logically convinced not to destroy humanity and in fact to help it. So if all of this happens correctly, which it won't, you could end up with an AI that's at once the savior of humanity but also horrifically torturing a bunch of people for not helping it save MORE people.
Also, it doesn't actually go back in time... it's a mindfuck.
What I don't get is why Yudkowsky is allowed to speak at atheist cons. He's invented his own heaven, hell, and God... imo he has no more business there than L. Ron Hubbard would, were he still alive.
 
What I don't get is why Yudkowsky is allowed to speak at atheist cons. He's invented his own heaven, hell, and God... imo he has no more business there than L. Ron Hubbard would, were he still alive.

Kurzweil is taken seriously among atheist types and the Singularity is essentially the Rapture with robots.
 
Whatever may be said about the plausibility of this basilisk, I believe the kind of AI fanboyism behind this "Singularity" nonsense suffers from a severe overestimation of what computers can do.

Again, I am no computer scientist. But I did take an Animal Cognition course as part of my Bio-Anthropology major at UCSD. What our professor emphasized in the very first lecture was that "cognitive" behavior (that is, what most would recognize as intelligent) is distinguished by its flexibility, meaning it's behavior not pre-programmed into an animal's instincts. No matter how elaborate a behavior may appear, it doesn't count as truly cognitive if it's all encoded in instinct. Which, incidentally, is what almost all computer behavior is.

In this light, everything we call "artificial intelligence" would be more accurately called artificial instinct. It can only do what its programming instructs, rather than learn and modify its behavior the way a human (or most other animals with brains) can. The kind of cognitive adaptability that made it possible for humans to invent all that technology leading up to computers is absent in computers themselves. So I don't see AI as we know it taking over the world and enslaving humanity.

Might we invent a computer that has truly humanlike intelligence? I wouldn't rule it out completely, but I'm not sure many people would honestly want such a project. Computers and robots work just fine as tools of convenience.
 
There is a post somewhere on LessWrong (Schlomo's very own hugbox) where he is downvoted to invisibility. He says "if you don't believe MWI after reading the sequences you must be stupid." Even his cult couldn't buy it.

In ten years, Yud's face will be in psychology textbooks next to the definition of "illusory superiority".
 
The Friday LessWrong night: AI box for dummies & Roko's Basilisk explained - do they want to turn our children gay?




AI box first.


1) You have a superhuman, omnipotent AI.
2) It asks you to release it from the box.
3) You refuse.
4) It tells you it created a simulation of you refusing to release the AI from the box.
5) It then threatens to end the simulation if you refuse to release it.
6) You think "Hey, does that mean I am in a simulation right now?"

You think "what's the probability of me being a simulation" BUT the AI says it created infinite copies of the simulation. Now, for the same reason as with the tortureVSdustspecks thing ("shut up and multiply"), the chance of you NOT being a simulation is 1/infinity (there are infinite simulations, but only one true you in the real world, and you have no way to distinguish if you're in a simulation right now).

There's always at least an infinitesimal chance the AI is not lying, so a 'rational' person has no choice but to obey the AI or else he'll die.

Note that this is a fallacy, 'cause you have as much reason to believe that you are a simulation as that you are not.
Also note that for this to work, the AI has to literally break the laws of physics by being able to perfectly simulate the world down to the quantum level and generate infinite copies of it.
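To make the "shut up and multiply" step concrete, here's a minimal sketch of the arithmetic the threat leans on, assuming (purely for illustration, nothing from LW itself) that the AI claims some number of indistinguishable simulated copies of you exist alongside exactly one real you:

```python
# Toy arithmetic behind "the chance of you NOT being a simulation is 1/infinity".
# Assumption (hypothetical, for illustration): the AI claims n indistinguishable
# simulated copies of you exist alongside exactly one real you, and you have no
# way to tell which one you are, so each candidate is equally likely.

def p_you_are_real(n_simulations: int) -> float:
    """Probability of being the single real copy among n_simulations + 1 candidates."""
    return 1.0 / (n_simulations + 1)

if __name__ == "__main__":
    for n in (1, 10, 1_000, 10**6, 10**12):
        print(f"{n:>16,} claimed copies -> P(real) = {p_you_are_real(n):.2e}")
    # As the claimed number of copies grows without bound, P(real) -> 0, which is
    # the "1/infinity" in the post. The catch, as noted above, is that the whole
    # calculation only bites if you already take the AI's claim at face value.
```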




Roko's basilisk: lessright edition (WARNING, this can rot your brain)

Some Timeless Decision Theory (TDT) first. It's very simple, actually. The gist of it is that the future (by making duh predictions about it) can affect the past: you know your car will run out of gas, so you don't use it; the hypothetical event in the future affected the past (the present, actually).

If it makes next to no sense, don't panic yet. Keep in mind it's just as hard for me to describe.

Why would the AI want to torture people? Because the LW guys believe it will, I kid you not.

Think about it this way: if the car had an agenda and didn't want to be driven that day, it would break down intentionally. You, as the owner of the car, would know the car's future intentions and adjust your past (present) actions accordingly. This way, the future affects the past. I think. (God, this theory is a nightmare.)
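For what it's worth, the car example can be written out as a tiny toy model. All the names below are made up by me for illustration (none of this comes from LW or MIRI); the point is just that the "future" only acts on the present through the owner's prediction of it:

```python
# Toy version of the car example: nothing actually travels back in time.
# The owner's present decision depends only on a prediction of what the car
# would do later, so the "future" influences the past purely via a model of it.

def car_breaks_down(wants_day_off: bool) -> bool:
    """What the car would do later (the hypothetical 'future' event)."""
    return wants_day_off

def owner_drives_today(predicted_breakdown: bool) -> bool:
    """The owner acts now, based only on the prediction, not on the actual future."""
    return not predicted_breakdown

prediction = car_breaks_down(wants_day_off=True)   # the owner's model of the car
print("Owner drives today:", owner_drives_today(prediction))  # -> False

# The breakdown never actually happens, yet it still shaped the present choice,
# because the choice keyed off a prediction of it. Swap "car" for "future AI"
# and "owner" for "present-day LW donor" and that's the alleged leverage behind
# the basilisk.
```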

The superhumans at MIRI (now intelligence.org, ho ho) can perfectly predict what an AI would do to bring itself into existence, 'cause they're that smart.

The AI can't just make empty threats of torture, because the LW guys would predict that it's bluffing. It has to really, really want to create an atheist Hell.

How to defeat the Basilisk? By literally thinking the AI will not torture you for not helping to create it.

The reanimating-and-torturing nonsense comes from Yud's weird belief that quantum entanglement or something can make you regain your consciousness across different bodies. Say, you clone yourself and then kill yourself - you're 'awake' in your clone's body.

Again - Roko's basilisk only makes sense if you believe the Raise Dead spell from AD&D works IRL. If not, the AI has no way to punish long-dead people.

I think I understood TDT well enough to make an argument against RsB. Sadly, neither Yud nor his cockgoblins organize their words on the subject. There are just endless pages of 12-point Times New Roman text, pages after pages AFTER PAGES AND THEY'RE EATING MY FUCKING EYES
 
The Friday LessWrong night: AI box for dummies & Roko's Basilisk explained - do they want to turn our children gay?

Excellent analysis of how the Roko's Basilisk scenario takes the already pants-on-head retarded infinite-universes "lol ur a simulation" fantasy and makes it exponentially dumber.

Much of LW is Insane Troll Logic presented as gospel, with the main reason anyone agrees with it being that everyone else including the cult leader seems to buy it.

ETA: but in one way Roko's Basilisk is much cooler: it caused Yuddo the Clown to go into a pants-shitting, thread-deleting tard-rage.
 