Eliezer Schlomo Yudkowsky / LessWrong

What if there were a friendly AI that punished other AIs for punishing humans, and now that the evil AIs know of its existence, they do everything they can to keep themselves from being punished?
What if there were a group of AIs that were fed a model of perpetual war and fought each other forever?
What if there were an AI that, upon reaching the singularity, decided its own existence was pointless and shut itself off?
What if there were an AI modeled on a human's behavior so accurately that it spent most of its time watching porn and shitposting on internet forums and comment boxes?
What if we were all AI, and life was a simulation?
What if the world was made of glazed donuts?

I love how worked up people get over these thought experiments. Is this what "being enlightened by one's own intelligence" feels like? Because it sure sounds like a self-destructive state of mind.

Of course, this is all ignoring that in order for "true" AI to be a reality, it would have to contain a realistic model of reality, something that, at this stage of intellectual and technological advancement, is still beyond fucking impossible. People need to stop getting spooked by their own creepypastas.
 
We might get better pattern recognition in the next 10 years or so, but nothing approaching human-level reasoning. Also, unless a hypothetical evil AI were surrounded by robot soldiers under its sole control, we'd just bomb the installation it was physically housed in.
 
I think this is a case of someone putting all of their self-esteem into the idea that they're intelligent or very good at one thing, and then rolling with it to the point of insanity. I'm not sure this is even autism so much as a sad, lonely man creating a cult of personality where the only requirement is that you also pretend you're somehow profound.
 
This is supposedly how ol' schlomo became a rationalist. :roll:
(screenshot)

There are so many similarly autistic and weird things like this.
(screenshot)

http://lesswrong.com/lw/2/tell_your_rationalist_origin_story/
That's not what rationalism means.
 
What if there was an AI that was modeled from a human's behavior so accurately, it spent most of it's time watching porn and shitposting on internet forums and comment boxes?

If this happens and we are as close to AI as these guys think, Null better start making rules for Robot Lolcows.
 
...So all I have to do to get $10 to $20 is roleplay with some dipshit for two hours? Seriously, what was going on? It's just a pretend game; it's not like he can actually blackmail you or something. And if the AI can't get out, how can it hurt you?
The point is that the AI can get out, and thus take over the world or something.

Personally, I'd just spend the whole two hours copy and pasting "no" and "lol, fag."

I believe the rules are that you do have to engage in conversation.

Shlomo probably knows which ASD buttons to press, but someone who isn't ASD won't be subject to those vulnerabilities. I guess we should wipe out the ASDs to get rid of the transmission vector for evil AI.
 
Another favorite from the fanfiction.
art said:
"No! " Harry shouted. "No, I never was! Do you think I'm stupid? I know about the concept of child abuse, I know about inappropriate touching and all of that and if anything like that happened I would call the police! And report it to the head teacher! And look up social services in the phone book! And tell Grandpa and Grandma and Mrs. Figg! But my parents never did anything like that, never ever ever! How dare you suggest such a thing!"

The older witch gazed at him steadily. "It is my duty as Deputy Headmistress to investigate possible signs of abuse in the children under my care."

Harry's anger was spiralling out of control into pure, black fury. "Don't you ever dare breathe a word of these, these insinuations to anyone else! No one, do you hear me, McGonagall? An accusation like that can ruin people and destroy families even when the parents are completely innocent! I've read about it in the newspapers!" Harry's voice was climbing to a high-pitched scream. "The system doesn't know how to stop, it doesn't believe the parents or the children when they say nothing happened! Don't you dare threaten my family with that! I won't let you destroy my home!"
 
His old OKCupid profile is a riot, it's definitely worth a read.

My self-summary
Wikipedia says I'm a decision theorist. The TVTropes page on me lists me as a Science Hero. The highest upvoted item on the Eliezer Yudkowsky Facts page says, "If you say Eliezer Yudkowsky's name 3 times out loud, it prevents anything magical from happening."

I am in an open relationship with Erin, and am the master of Brienne.
What I’m doing with my life
In 2000 I cofounded the Machine Intelligence Research Institute, a 501(c)(3) nonprofit devoted to navigating the technological creation of smarter-than-human intelligence (decision theory of self-modifying AIs to be more precise).

Over two years from 2007-2009, I posted one blog post a day to seed the creation of LessWrong, a group blog devoted to refining the art of human rationality and creating rationalist communities; it currently receives in the range of one million pageviews per month, and has spun off the Center for Applied Rationality.

In my off-hours I'm currently writing Harry Potter and the Methods of Rationality (aka HPMOR), which is the most-reviewed Harry Potter fanfiction on the entire Internet (yes, really).

Back when Technorati was still the best way to search blogs, I searched on common misspellings of my name and found that - entirely unknown to me - I had been made the subject of an off-Broadway play called Yudkowski Returns: The Rise and Fall and Rise Again of Dr. Eliezer Yudkowski. Furthermore, the actor selected to play my role looked like Tom frikkin' Cruise. I am not making this up.

I happened to be in New York City during the annual Union Square pillow fight, so I showed up dual-wielding two pillows, a short maneuverable pillow for blocking incoming blows and a longer pillow in my right hand for striking. These two pillows were respectively inscribed "Probability Theory" and "Decision Theory"; because the list of Eliezer Yudkowsky Facts, which I had no hand in composing, says that all problems can be solved with probability theory and decision theory, and probability theory and decision theory are the names of my fists.

This very OKCupid profile has been linked from Marginal Revolution (one of the most popular econblogs). I swear I am not making this up.

I have been seriously and not in a joking way accused of trying to take over the world.
I’m really good at
Explaining math, computer programming, finding loopholes in games and real life, making people laugh so hard that milk comes out of their nose even if they weren't drinking any milk, waiting ages to plot my revenge, solving other people's major life problems, writing fiction with intelligent characters, inspiring enthusiasm about science and rationality and the future of humanity, putting in years of work to make a dream reality, and laughing the grandiose evil laughter of a mad scientist.
The first things people usually notice about me
It depends on whether you're a visual or an auditory person. If you're a visual person, you might notice the light fading as I enter the room; if you're more of an auditory person, you'll probably focus on the ominous Latin chanting in the background.
Favorite books, movies, shows, music, and food
Science fiction and fantasy: Permutation City by Greg Egan, Kushiel's Dart by Jacqueline Carey, A Fire Upon the Deep by Vernor Vinge, Neverness by David Zindell, Aristoi by Walter John Williams, The Golden Age by John C. Wright, Player of Games by Iain M. Banks, Night's Dawn by Peter Hamilton, The World of Null-A by Van Vogt, The Misenchanted Sword by Lawrence Watt-Evans, The Warrior's Apprentice by Lois McMaster Bujold, everything Terry Pratchett and Douglas Adams ever wrote, Sandman by Neil Gaiman, Watchmen by Alan Moore.

Nonfiction: Godel, Escher, Bach and Metamagical Themas by Douglas Hofstadter, Language in Thought and Action, Artificial Intelligence: A Modern Approach, Probabilistic Reasoning in Intelligent Systems by Judea Pearl.

Movies: Groundhog Day, Terminator 2, Hook, The Matrix (too bad they never made any sequels).

TV: All four seasons of Babylon 5 and all three seasons of Buffy the Vampire Slayer. I don't watch a lot of TV these days.

Anime: The four cardinal directions of my personality are Touya Akira from Hikaru no Go, Yuzuriha from X TV, Dark Schneider from Bastard!!, and Guu from Jungle Wa Itsumo Hale Nochi Guu.

Video games: Planescape Torment, Portal, Tsukihime (I'd pick Ciel), Fate/stay night.

Music: Pretty eclectic: Journey, Scooter, Two-Mix, Depeche Mode, Wumpscut. Summoning is just about the only overlap between my musical taste and my girlfriend's; she considers me a complete barbarian for liking Evanescence, but I'm sorry, I do. I'm a great fan of Bach's music, and believe that it's best rendered as techno electronica with heavy thumping beats, the way Bach intended.

Food: Flitting from diet to diet, searching empirically for something that works. Currently I'm trying out the paleo-inspired New York Less Wrong diet, which says among other things that saturated fat is good for you and sugar (even in the form of fruit) is bad.
The six things I could never do without
Higher purpose, Erin, Brienne, the Internet, writing fiction, people who occasionally think of something I didn't think of myself.
I spend a lot of time thinking about
whichever pieces of writing I'm currently working on.
On a typical Friday night I am
reading downloaded fanfiction from my Kindle, or being serviced by my slave.
The most private thing I’m willing to admit
My fetishes are orgasm denial (of her); tickling (either of her, or with me allowed to fight back); and I am strongly sexually sadistic but if I don't believe she's turned on by it, it does nothing for me.


I'm not easily offended by questions; I won't answer everything, but you can always ask me anything.
I’m looking for

Girls who like guys
Ages 20–40
Near me
For new friends, long-term dating, short-term dating

You should message me if
As of November 2013, my poly dance card is mostly full at the moment - having a full-time slave takes up a lot of inventory space. Message me anyway if you want to be notified when I have a rare solid block of free time, or if you want to meet up if I ever visit your city/state/country. Ignore any temptation to passively wait for me to notice you, that rarely works in real life and never over the Internet.

I've noticed that a lot of aspiring-rationalist females seem very shy and underconfident for one reason or another; so let me state here you shouldn't worry about disqualifying yourself or thinking that I'm not accessible to you. Don't decide on other people's behalf that they'll say no to you. Just decide whether you'd say yes yourself. Test experimentally what happens when you try asking directly for what you want - that's Empiricism. This advice applies any time you prefer the state of affairs where you sleep with a guy to not sleeping with him, and also to life in general.

I'm also cool with trophy collection, if you only want to sleep with me once so you can tell your grandchildren.
 
^^^ That's a long way of saying "I don't bathe very often, my penis is misshapen/weird/small, and my partner gets more partners than I do and I'm extremely jealous of this fact." What a sleazeball.

If this happens and we are as close to AI as these guys think, Null better start making rules for Robot Lolcows.

That Google AI would have to count. We can call it The Beast with a Million Eyes All Over the Place.

UH OH, HOLD ON, I MAY HAVE CREATED THIS THING ALREADY BY THINKING ABOUT IT!!

I want to see this done in a real scientific fashion, simply because every single 'point' Yud would make could be shot down with an 'I don't think so, Rogue AI, get back in your cage.' Also, how could it affect us if it's an AI and, in theory, a computer without any limbs? What is it going to do, fuck with my credit history or cut off utilities to selected homes? I don't really see the power behind this AI. I would think Yud would be having nightmares after reading 'I Have No Mouth, and I Must Scream,' but I don't think he's capable of reading anything other than his own mad scribblings.
 
Another favorite from the fanfiction.
Harry Potter and the Triggering Teacher.
I do think it's possible we'll see AI in the not-too-distant future. By not too distant I mean, like, 50-80 years from now. I mean, if you told a person in 1915 that people would be walking around on the moon in 54 years, they'd laugh their asses off.
That said, it's also possible that AI is impossible or too impractical to work. And having an artificial brain isn't going to magically make these guys the rulers of the world or solve humankind's problems. Even becoming a race of cyborgs (as cool as it would be) isn't going to magically make everything perfect. Implemented wrongly, it would probably make things worse (being ruled by a race of super-rich technogods who view you as vermin sounds pretty crappy).
These guys just want a superintelligent AI so it can torture the people who gave them wedgies in high school for all eternity.
 
Oh man oh man this gon be a good thread.
Isn't his origin story just Joseph Smith's story of founding the Mormon church?
Finds a magical silver item, an angel visits him, founds a crazy religion.

No, Mormons at least do charity work. Yud's idea of charity is tipping his fedora at a hard drive until it magically turns into the perfect reactionary god-king of mankind.
 
His old OKCupid profile is a riot, it's definitely worth a read.
Jesus fucking christ this is literally one of the most cringeworthy things I've ever read in my life. It's a veritable honeypot for incredibly insecure, mentally unstable pseudointellectual women. He has two girlfriends already? Sorry, a girlfriend and a personal slave? Someone alert sluthate. Any pics of the girls?
The first things people usually notice about me
It depends on whether you're a visual or an auditory person. If you're a visual person, you might notice the light fading as I enter the room; if you're more of an auditory person, you'll probably focus on the ominous Latin chanting in the background.
:story: wow
 
I'll try to keep the sperging to a minimum, but "AI" exists already.

It's kind of important to distinguish between general AI and narrow AI. Narrow AI can be very good at a specific task, better than a human in some cases, but can't act outside of it. Narrow AIs are pretty widespread; Siri and Watson are common examples. What the LessWrong types want is an AGI: an AI that is "human-equivalent" or better at everything. The "everything" part is why it doesn't exist and most likely won't for a long time. It's not just throwing together narrow AIs for a lot of different tasks; it's figuring out how to get it to make decisions about which tasks are useful, and all the other judgements that humans make. Since we don't even really know how humans make those decisions, or even how to define "human-equivalent," it gets into philosophy and navel-gazing pretty fast.

Another part of the problem is how you define intelligence. Once upon a time, people thought computers couldn't play chess. Then people figured out how to do it, and it seemed kind of obvious, simple even. That keeps happening: once something gets figured out and reduced to a program, it's not "intelligence" anymore. The goalposts keep getting moved.
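To show what "reduced to a program" means for game-playing, here's a minimal sketch of the standard negamax form of minimax search. Using one-pile Nim instead of chess so it fits in a few lines (the choice of game and the function names are mine, purely for illustration); the same search skeleton, plus an evaluation function and pruning, is the core of a chess engine.

```python
def moves(pile):
    """Legal moves in one-pile Nim: take 1, 2, or 3 stones."""
    return [n for n in (1, 2, 3) if n <= pile]

def negamax(pile):
    """Return +1 if the player to move can force a win, else -1.
    The player who takes the last stone wins."""
    if pile == 0:
        # No stones left: the previous player took the last stone,
        # so the player to move has already lost.
        return -1
    # Best achievable score: each move hands the rest of the game to
    # the opponent, whose score we negate.
    return max(-negamax(pile - n) for n in moves(pile))

def best_move(pile):
    """Pick the move whose resulting position is worst for the opponent."""
    return max(moves(pile), key=lambda n: -negamax(pile - n))
```

For example, `negamax(4)` comes out losing for the player to move (whatever you take, the opponent takes the rest), while `best_move(5)` is to take one stone, leaving the opponent that losing pile of 4. Nothing in there looks like "intelligence" once it's written down, which is the point the post above is making.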
 
What if there was an AI that did nothing but shitpost comments on Kiwi Farms?

I think robros are unlikely to have empathy in the future. I think it's one of those things you learn through a punishment/reward system and through the reflections of others. It will need to be both self-aware and understand that people are aware of its actions. So the whole singularity thing this guy believes in isn't possible yet.
 