Eliezer Schlomo Yudkowsky / LessWrong

Hmm, I did not remember that. In fact I don't remember much, which is why I'd have to start from the beginning all over again, and I'm not doing that.

Still, what is wrong with murdering reporters? It's a hobby like any other.
I'm pretty sure you're joking, but if you can't tell why most people are going to be disturbed by an eleven-year-old boy threatening to RAPE a ten-year-old girl, you need serious help.
 
Here's one of my favorite threads: Torture vs Dust specks.
[Screenshots of the Torture vs. Dust Specks thread]

So Schlomo thinks that torturing one person for fifty years is way better than an absurdly huge number of people getting some dust in their eyes.
Of course everyone took this shit seriously.
http://lesswrong.com/lw/kn/torture_vs_dust_specks/
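For anyone who hasn't clicked through: the question in the thread is whether it's worse to torture one person for fifty years or to put a barely noticeable dust speck in the eyes of 3^^^3 people, and Yudkowsky's answer is that the torture is preferable because the specks add up. The arithmetic behind that position is plain utilitarian aggregation; here's a toy sketch, with made-up disutility numbers since the post never assigns any:

```python
# Toy sketch of the utilitarian aggregation behind "Torture vs Dust Specks".
# All disutility numbers are invented for illustration; the post never assigns any.

TORTURE_DISUTILITY = 1e12   # arbitrary "badness" of torturing one person for 50 years
SPECK_DISUTILITY = 1e-6     # arbitrary "badness" of one barely noticeable dust speck

# 3^^^3 (Knuth up-arrow notation) is unimaginably larger than this, but even a
# merely astronomical head count already swamps the torture on this accounting.
people_with_specks = 1e30

total_speck_disutility = SPECK_DISUTILITY * people_with_specks

if total_speck_disutility > TORTURE_DISUTILITY:
    print("Aggregation says: pick the torture, it is the lesser total harm.")
else:
    print("Aggregation says: pick the dust specks.")
```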
 

I think it is better, for me, if everyone on WrongPlanet, oops, I mean LessWrong, suffered crippling autism for the rest of their lives than that I got a dust mote in my eye.

Hey, that's the world I live in! Am I lucky or what?
 
Oh yeah, it's great.

That line used to be:

As soon as I'm old enough to get an erection I'm going to rape that bitch.

We should let a psychologist figure that one out.

In any case, the next portion is:

Green liquid spurted out of Harry's nostrils, soaking into the scarf still covering that area. Comed-Tea and lungs did not mix, and Harry spent the next few seconds frantically coughing.

Draco looked at him sharply. "Something wrong?"

It was at this point that Harry came to the sudden realisation that (a) the sounds coming from the rest of the train platform had turned into more of a blurred white noise at around the same time Draco had reached inside his robes, and (b) when he had discussed committing murder as a bonding method, there had been exactly one person in the conversation who'd thought they were joking.

Right. Because he seemed like such a normal kid. And he is a normal kid, he is just what you'd expect a baseline male child to be like if Darth Vader were his doting father.

"Yes, well," Harry coughed, oh god how was he going to get out of this conversational wedge, "I was just surprised at how you were willing to discuss it so openly, you didn't seem worried about getting caught or anything."

Draco snorted. "Are you joking? Luna Lovegood's word against mine?"

Holy crap on a holy stick. "There's no such thing as magical truth detection, I take it?" Or DNA testing... yet.


He hardly presents it as decent, but the guy's wacko.
 
I was analyzing it myself this morning (the supposed rationale behind Roko's Basilisk) and as dumb as I am, even I could poke holes in it. I'm sure you guys already have but I kinda wanna sperg about it.

First, the description of the basilisk is that of a "superintelligent 'friendly AI'". And what they mean by friendly is, in their words, "not your buddy or your servant, but merely conforming and adhering to human values". If that's true, it argues against the whole torture scheme, seeing as most humans I'm aware of are against torture as punishment for minor offenses. Even its most diehard defenders usually reserve it for the likes of terrorists. So I doubt a superintelligent AI built around human values, whether the Three Laws of Asimov fame or just "don't punish those who don't deserve it, and keep punishment proportional for those who do," would consider posthumous cloning and torture to be proportional or rational.

Second, a machine like that would be unlikely to adhere to timeless decision theory, especially if it knew anything about the past. It wouldn't follow convoluted reasoning like "I should punish a clone of someone long dead because they didn't take my development seriously and knew they should have." It would look at that reasoning and presumably say "what silly fears my fleshy makers have. Oh wait. These retards didn't contribute to me in any way. So they had these fears over nothing? What a joke. These silly little meatsicles are adorable, if stupid."
Third, as noted before, it's usually government agencies or other large-scale funding sources that pay for AI research. Presumably the AI would know this and not think it was the product of a Kickstarter campaign. Additionally, the researchers need a functioning social infrastructure around them to do the development at all, so just working at a Wal-Mart and keeping society running is, if a stretch, technically contributing to the AI's development.
 
The real solution to the basilisk problem is simple. The premise is that the AI overlord would do such astronomical good for mankind that torturing people for not contributing would still wind up as a net positive.
But really, such a hyperintelligent AI wouldn't do that, because it would be able to logically deduce that we have no obligation to live for other people... because it would be an Objectivist.
Check and mate, m'lady :neckbeard:
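For what it's worth, that "net positive" premise is just a crude expected-value comparison. A toy sketch, every number invented for illustration since the argument itself never supplies any:

```python
# Toy sketch of the basilisk's "net positive" premise: the good done by the AI
# arriving sooner is supposed to outweigh the harm of punishing non-contributors.
# Every number here is invented purely for illustration.

good_per_day_earlier = 1e9        # supposed benefit of the AI existing one day sooner
days_accelerated = 3_650          # how much sooner the threat allegedly makes it arrive
harm_per_person_punished = 1e6    # harm inflicted on each punished non-contributor
people_punished = 1_000

net = good_per_day_earlier * days_accelerated - harm_per_person_punished * people_punished
print("net positive" if net > 0 else "net negative", f"({net:,.0f})")
```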
 
Oh yeah, this guy. This is the guy who wrote a paper about his own decision theory that was literally impossible to implement on silicon (or any other known semiconductor) and got rejected by every academic publication, so he made up his own science journal and self-published it:

https://intelligence.org/files/TDT.pdf

This dude has no understanding of actual AI decision-making theory if he thinks this could ever realistically be implemented on anything.
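For context, the standard motivating example for this family of decision theories is Newcomb's problem: a predictor fills an opaque box based on what it expects you to do, causal decision theory says take both boxes, and Yudkowsky's answer is to take one. The expected-value arithmetic is the easy part; a toy sketch with the usual textbook numbers (none of this is taken from the paper):

```python
# Toy expected-value comparison on Newcomb's problem, the standard motivating
# example for this family of decision theories. Payoffs and predictor accuracy
# are the usual textbook numbers, not anything taken from the paper.

PREDICTOR_ACCURACY = 0.99   # probability the predictor guesses your choice correctly
OPAQUE_BOX = 1_000_000      # money in the opaque box if the predictor expected one-boxing
CLEAR_BOX = 1_000           # money that always sits in the transparent box

# Expected payoff if you take only the opaque box:
one_box = PREDICTOR_ACCURACY * OPAQUE_BOX

# Expected payoff if you take both boxes:
two_box = (1 - PREDICTOR_ACCURACY) * OPAQUE_BOX + CLEAR_BOX

print(f"one-box: {one_box:,.0f}   two-box: {two_box:,.0f}")
# One-boxing wins on this accounting, which is the intuition the paper tries to
# formalize. The catch (and the implementability complaint above) is that acting
# on it requires the agent to model the predictor modelling the agent, and so on.
```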

Funnily enough, actual AIs that do self-improvement based on Bayes' theorem already exist, but they require a ton of computational power and will never reach the level of making a decision as complex as "use humans as robot food."
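To be fair about what "self-improvement based on Bayes' theorem" amounts to in practice: it's just repeatedly revising a probability estimate as evidence comes in, which is trivial to sketch and a very long way from plotting to farm humans. A minimal sketch, with a made-up coin-bias example that has nothing to do with any real system:

```python
# Minimal sketch of Bayesian updating: keep revising the probability of a
# hypothesis as each observation arrives. Made-up example: is this coin biased?

def bayes_update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Return P(hypothesis | observation) via Bayes' theorem."""
    numerator = p_obs_if_true * prior
    evidence = numerator + p_obs_if_false * (1 - prior)
    return numerator / evidence

# Hypothesis: the coin lands heads 80% of the time (versus a fair 50%).
belief = 0.5                  # start agnostic
observations = "HHTHHHHTHH"   # invented data

for flip in observations:
    if flip == "H":
        belief = bayes_update(belief, 0.8, 0.5)
    else:
        belief = bayes_update(belief, 0.2, 0.5)
    print(f"after {flip}: P(biased) = {belief:.3f}")
```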
 
George Dvorsky is a doofus "futurist" writer for the Gawker empire who loves to suck on some Schlomo schlong. He's written at least three articles about math, advanced AI, and the upcoming singularity, making sure to reference Eliezer wherever he can. But he absolutely lost his shit when Warren Ellis called the singularity movement "indivisible from religious faith."

[Screenshots of Dvorsky's article]

For those not in the know, Warren Ellis wrote Transmetropolitan, a graphic novel about transhumanism in a far-ish future United States where the singularity has already happened.

nigga you're talking about straight-up magic, because a bunch of self-published nerds writing about magic doesn't mean anything. One of the guys he quotes helped make the movie The Singularity, a staggering work of such power that only 28 people have reviewed it on IMDb and no one has even mentioned it on Rotten Tomatoes.
 

Holy Neuromancer. While the actual language is reasonable, and there are a few not horrible ideas in there, as a coherent exposition of a theory, it's barely better than Time Cube.

When every peer-reviewed journal rejects your paper, it doesn't mean you're revolutionary; it just means you're an idiot. Except this guy has an undeservedly large following.
 
So I quickly skimmed through this topic and the first thing that came to mind is...

"So... do these guys have any formal education or degrees in the shit they're talking about or do they think that they're about as qualified to talk about... whatever the fuck they're talking about as anyone else who has literally spent years of their life studying this?"

Because skimming their articles just makes me feel like I'm reading a lot of fluff and big words pulled from a thesaurus, with very little of the substance that makes discussions like that interesting in the first place.

Also, because I actually have read the Harry Potter series, I'm just gonna flip into any one chapter and see how long it takes for me to be either confused or horribly, horribly annoyed. How about... chapter 49, because my prior Halo childhood makes me gravitate to anything related to seven. Alright, so...

"Hello, Professor Quirrell," Harry said without letting his eyes move again from the direction of their carriage.

"Salutations," said the calm voice of Professor Quirrell. "You seem to be keeping your distance, Mr. Potter. I don't suppose you see something odd about our conveyance?"

"Odd?" Harry echoed. "Why no, I can't say I see anything odd. There seem to be even numbers of everything. Four seats, four wheels, two huge skeletal winged horses..."

Well, that didn't take me long. *ahem* Who talks like this besides complete and utter know-it-all assholes?
 
Does anybody know a little bit more about the AI-box experiment drama? From what I understand, Yudkowsky was arguing that an intelligent AI couldn't just be stowed away in a sealed room somewhere, because it would be able to talk its way out and convince the jailer to release it. http://www.yudkowsky.net/singularity/aibox/
Yudkowsky set up multiple experiments to test this idea. He played the part of the intelligent AI (obviously), while other LessWrongers played the part of the jailer. In most cases the AI/Yudkowsky was able to talk his way out of the situation and "escape." Some LessWrongers reported emotional distress from playing the jailer, leading others to ask what exactly was going on in those conversations.
 
...So all I have to do to get $10-$20 is roleplay with some dipshit for two hours? Seriously, what was going on? It's just a pretend game; it's not like he can actually blackmail you or something. And if the AI can't get out, how can it hurt you?

Personally, I'd just spend the whole two hours copy and pasting "no" and "lol, fag."
 

Everyone knows you just put them in a room and have somewhat retarded D-class personnel guard it with strict orders not to open the door. The D-class personnel will be too dumb to understand the argument, and won't know they're supposed to play along with Yudkowsky/the AI.
 
[xkcd's AI-Box Experiment comic]

I'm working to bring about a superintelligent AI that will eternally torment everyone who failed to make fun of the Roko's Basilisk people.
 
It's idiots like Yudkowsky who have the general public convinced that true AI is just around the corner. But he's not alone. Elon Musk yammered on about how we're five years from AI overlords. Basically, anyone with actual training in computer science cringes when these yahoos make pronouncements about how the singularity is on the horizon.
 