Eliezer Schlomo Yudkowsky / LessWrong

According to TV Tropes (as it turns out, Schlomo's a big tropefag), Eliezer Yudkowsky has written two other fanfics: Trust in God, or, The Riddle of Kyon, a Haruhi Suzumiya fic, and The Finale of the Ultimate Meta Mega Crossover, which is... a crossover, I guess? I've never so much as looked at either of them, but they both sound terrible from the synopses that TV Tropes has proffered.


I think Eliezer is completely incapable of writing a story without shoving his antitheist rhetoric into it. Regardless of anything else, if the fanfic rec designed to shill your story admits it has "sizable plotholes", you're doing fanfic recs wrong.



If a fanfiction is described by TVT as "troperiffic" or "meta", chances are it's very bad and tries way too hard, so I'm not even going to try with this one.
 
I think Eliezer is completely incapable of writing a story without shoving his antitheist rhetoric into it.

LessWrong culties feel compelled to proselytize in any venue where you find them. When they write fanfics, it's all for the purpose of spreading the gospel. They're as bad as Scientologists.
 
I'll try to keep the sperging to a minimum, but "AI" exists already.

It's kind of important to distinguish between general AI and narrow AI. Narrow AI can be very good at a specific task, better than a human in some cases, but can't act outside of it. Narrow AIs are pretty widespread; Siri and Watson are common examples. What the LessWrong types want is an AGI, an AI that is "human equivalent" or better at everything. The "everything" part is why it doesn't exist and most likely won't for a long time. It's not just throwing together narrow AIs for a lot of different tasks; it's figuring out how to get the thing to decide which tasks are useful, plus all the other judgments that humans make. Since we don't really know how humans make those decisions, or even how to define "human-equivalent," it gets into philosophy and navel-gazing pretty fast.

Another part of the problem is how you define intelligence. Once upon a time, people thought computers couldn't play chess. Once people figured out how to do it, it seemed kind of obvious, simple even. That keeps happening: once something gets figured out and reduced to a program, it's no longer "intelligence." The goalposts keep getting moved.

That's why I mentioned the "model of reality" bit from earlier. Yes, it's possible to write an AI that can "learn" on its own, but it's still constrained to the tasks it was built for. Siri/Cortana wouldn't start crashing the stock market just because it found out that was possible, because its own programming is limited in its functions. Perhaps it could tell you an ideal method of crashing the stock market, and a person could decide to go through with it, but ultimately the AI is incapable of suddenly rewriting its own intelligence to give itself the ability to perform outside of its functions.
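To make the "constrained to its task" point concrete, here's a toy sketch in plain Python (everything in it is made up for illustration, it's not from any real assistant): a perceptron that "learns" Boolean AND from examples. It can tune its weights all day, but its inputs, outputs, and update rule are hard-coded by the programmer; it will only ever emit 0 or 1, no matter what it "learns."

```python
# Toy "narrow AI": a perceptron that learns the AND function from examples.
# It adjusts its weights ("learning"), but its input format, its output set
# {0, 1}, and its update rule are all fixed by whoever wrote it. Nothing in
# here can grow a new capability, because the action space is hard-coded.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Classic perceptron update rule on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    # The entire repertoire of this "AI": map two numbers to 0 or 1.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND_EXAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_EXAMPLES)
# After training it "knows" AND -- and still can't do anything else.
```

The point of the sketch: the "learning" happens entirely inside a box whose shape the programmer chose. Scaling the box up doesn't change the fact that someone had to define its inputs, outputs, and goal in advance, which is exactly the part AGI would need to do for itself.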

 
I'm going to be Devil's Advocate here, mostly because rational decision theory is something I've been interested in.

I've actually read a bit of "Harry Potter and the Methods of Rationality." It's the only fanfic I've ever liked, though it's not a fanfic in the traditional sense; it mostly uses Harry Potter as a vehicle for the guy's ideas. It's like how Sartre wrote traditional treatises on existentialism but also wrote a bunch of novels to convey his ideas to a more popular audience. It's got some fun scenes, like Harry trying to destroy the wizarding economy, and a part about how Quidditch is a horrible game because it's really just about catching the Snitch, and the whole business of shooting the Quaffle through the goals is drudgery.

I'm sometimes amazed that people have all sorts of biases and never stop to think about them. I certainly have my own biases, and I'd like to try to overcome them to make better decisions. In my life, I've found that when I let my emotions make decisions, they usually end up being bad ones. It's a bit more nuanced than being like Mr. Spock all the time; they specifically dislike Mr. Spock as a model of rationality and explain that emotions can serve a valuable function.

Posts like this are why I was initially very skeptical about having a lolcow board, as it can ultimately turn into "I don't like this person, therefore they're a lolcow." If you want to disagree with him, alright, but I don't think he's a lolcow. Someone doesn't have to be a manchild who obsesses over Sonic or anime, but just because you disagree with their ideas doesn't make them one either. I never really got far into it; I've read that some of the hardcore people end up treating it like a cult, and it does have features of a cult, like a very specialized jargon known only to "members," which is probably the opposite of what was intended. A splinter group that uses this stuff to justify racial superiority probably could be considered a lolcow, at least in my opinion.
 
So, to anyone who thinks that intelligent AI is literally only a few years away, I invite you to play Ride to Hell: Retribution.

If that's intelligent AI, then we really set the bar low.


Wow! SO much hilarity, so much that could have been prevented by washing your own fucking jeans. Five people in this house and they're all lazy assholes.
I find it slightly hilarious that he wrote up a huge sperging about, of all things, fucking jeans.

I think he's supposed to be taking a trivial thing and turning it into this huge intelligent discourse, like the whole story of Newton getting bonked by an apple and it leading to the law of gravity (isn't that a myth, btw?), but it just comes off as rambling for the sake of rambling.

@champthom

I'm sure the fic must have something going for it, as it seems to have a large cult following, but the issue I personally have is that, as a narrative, it's kinda terrible. If he wants to write intellectual discourses, then more power to him, but his attempts to weave them into a story just show how absolutely terrible he is at writing one.
 
Also, if Schlomo doesn't like his nerd cult website being associated with fascist neckbeards, then why doesn't he ban them, or at least denounce them in a non-pussy way?

They have their own website for sperging about how they're the master race despite having no job skills, education, or pants sizes below 50.

Yud has nothing to gain by keeping these fuckers around and a lot to lose (Good luck not being lumped in with Stormfront, Yud boy.)

That, and even RationalWiki has taken his dick out of its mouth long enough to dig up some unsavory shit about him.

So there are two possibilities:

1. Schlomo is too goddamn stupid to understand why the Aspergerschutzstaffel are a liability to his site and his goals.

2. Schlomo doesn't want to boot the fascist neckbeards because he's one of them.

"I'm sure the fic must have something going for it, as it seems to have a large cult following"

You could say the same thing for Amway.
 
I would think Yud would be having a nightmare after reading 'I Have No Mouth and I Must Scream,' but I don't think he's capable of reading anything other than his own mad scribblings.
For these dorks it's "I Have No Ass and I Must Fart".

He has two girlfriends already? Sorry, a girlfriend and a personal slave? Someone alert sluthate. Any pics of the girls?
I'm expecting something like this:
Ultimate skank.jpg
 
I agree that this guy isn't a lolcow. He's just a deluded narcissist who is running a cult.
His biggest accomplishments are scamming money from people, running a cult of personality for neckbeards, and writing a fanfic.
I tried, but I can't find anything he's actually contributed to. No scientific writings in peer-reviewed journals. No code. Nothing.

He doesn't chimp out. He doesn't post lol stuff. Maybe his followers do. But he just seems like an autistic Jim Jones to me.
 
That's why I mentioned the "model of reality" bit from earlier. Yes, it's possible to write an AI that can "learn" on its own, but it's still constrained to the tasks it was built for. Siri/Cortana wouldn't start crashing the stock market just because it found out that was possible, because its own programming is limited in its functions. Perhaps it could tell you an ideal method of crashing the stock market, and a person could decide to go through with it, but ultimately the AI is incapable of suddenly rewriting its own intelligence to give itself the ability to perform outside of its functions.

True. I hate it when people pretend that the "singularity" and supercomputers with human intelligence are only a few years away. Most people don't seem to realize that AI is only effective in incredibly specific areas, and even then it takes years to make it effective. The idea of a singularity, of a system that can be proficient at literally everything, is simply absurd, even if it were able to teach itself certain things. Plus, like @Wally Balljacker said, a system can't just give itself new functions out of the blue.
 
So, skimming through half of what this guy says, it really sounds like he's a fan of being esoteric and obtuse, since his writing is clunkier and filled with more jargon than the densest journals you can find. What that means is that he's intentionally talking over people's heads, both to sound smarter and to muddy what he says. This is also a major tool used by cult leaders: by talking like that, he can convince his listeners he's in the right, using words they may not understand to reinforce his own status as enlightened and to hide the fact that he's speaking bunkum.

He's applying what he knows, melting it together with an outward appearance of confidence and words that fizzle in the brains of those listening, like any other good snake oil salesman.

As an aside, his "convincing immobile AI" thought experiment can be solved as easily as the Gordian Knot: apply a sufficient amount of dakka and it won't be an AI no more. Or you just send @cat to mock it until the transport's complete. His demand for a conversation beyond those options is because the experiment is limited to his ability to convince, which, considering his base audience is made up of his fans, ain't that hard. The Basilisk notion is also utter dreck, because it runs into the problem of contradicting not just the notion of a humane, friendly AI, but the notion of your clone being YOU. As in the Teleportation Paradox, that clone might act and look like you, but it ain't you; you died, and a copy takes your place. Unless they don't buy that one because it's a shitty thought experiment they didn't cook up themselves.

And I've read some of his fanfiction; it's poor in the literary sense, with an unpleasant tone set by an arrogant kid who is just a ham-fisted author insert desperate to show off what he knows. He probably also wrote his own recommendation on TV Tropes, with how obsessed he is with the website.

I'm interested in watching him and his dumb website, cow or no.
 
I agree that this guy isn't a lolcow. He's just a deluded narcissist who is running a cult.
His biggest accomplishments are scamming money from people, running a cult of personality for neckbeards, and writing a fanfic.
I tried, but I can't find anything he's actually contributed to. No scientific writings in peer-reviewed journals. No code. Nothing.

He doesn't chimp out. He doesn't post lol stuff. Maybe his followers do. But he just seems like an autistic Jim Jones to me.

I think it's one thing to disagree with the guy, or maybe to find particular people who take this way too seriously and talk about them in a "huh, these people are sorta kooky" kind of way. I don't think that's necessarily the same thing as being a lolcow. I realize Null broadened the board's definition of lolcow, and I personally have a narrower definition, but I also concede that a lolcow isn't just some overweight neckbeard who has a Mary Sue Sonic recolor. Maybe if OP had focused more on the splinter group, or on finding some particularly crazy people on the LessWrong wiki, rather than just "this guy thinks everyone should be rational," it'd be a better fit.

From what I've heard, he's sorta taken a lot of work that has been done by actual psychologists and economists who study this sort of thing and more or less molded it into a specific worldview, with an attempt to make it more widespread and practical for people.

I think this might be a good post for "Deep Thoughts" if it were reworded more neutrally. Maybe there should be a subforum for just weird people on the Internet, not necessarily strictly lolcows. Or maybe I do have a really narrow view of what a lolcow is. I just know I've seen that, oftentimes, people will treat people they don't personally agree with as lolcows, and instead of a discussion of the merits of their ideas, it becomes personal attacks on the person involved.

Of course, I'm no longer an admin so my opinions mean jack shit.
 
I'm going to be Devil's Advocate here, mostly because rational decision theory is something I've been interested in.

I've actually read a bit of "Harry Potter and the Methods of Rationality." It's the only fanfic I've ever liked, though it's not a fanfic in the traditional sense; it mostly uses Harry Potter as a vehicle for the guy's ideas. It's like how Sartre wrote traditional treatises on existentialism but also wrote a bunch of novels to convey his ideas to a more popular audience. It's got some fun scenes, like Harry trying to destroy the wizarding economy, and a part about how Quidditch is a horrible game because it's really just about catching the Snitch, and the whole business of shooting the Quaffle through the goals is drudgery.

I'm sometimes amazed that people have all sorts of biases and never stop to think about them. I certainly have my own biases, and I'd like to try to overcome them to make better decisions. In my life, I've found that when I let my emotions make decisions, they usually end up being bad ones. It's a bit more nuanced than being like Mr. Spock all the time; they specifically dislike Mr. Spock as a model of rationality and explain that emotions can serve a valuable function.

Posts like this are why I was initially very skeptical about having a lolcow board, as it can ultimately turn into "I don't like this person, therefore they're a lolcow." If you want to disagree with him, alright, but I don't think he's a lolcow. Someone doesn't have to be a manchild who obsesses over Sonic or anime, but just because you disagree with their ideas doesn't make them one either. I never really got far into it; I've read that some of the hardcore people end up treating it like a cult, and it does have features of a cult, like a very specialized jargon known only to "members," which is probably the opposite of what was intended. A splinter group that uses this stuff to justify racial superiority probably could be considered a lolcow, at least in my opinion.
I have an interest in Yudkowsky himself because, in my opinion, the man's grandiose sense of self-worth and terrible Harry Potter fanfic make me laugh. I find comedy in people who pretend they're incredibly smart while circlejerking around creating concepts to fit their little evil robot ideas. You are correct that there should be more focus on the users of the website because, as @melty has found, they will straight up defend zoophilia. Now, I don't think every user is a lolcow; it's not as bad as sluthate or something like that, but its users still make me laugh, and that's what I consider being a lolcow: weird and strange enough to make me laugh.
 
The last thing worth bringing up about his stupid story: he has never actually read any official Harry Potter work.

He only used fanfiction and Wikipedia as source material.

That's kind of terrible for a so-called scientist.
 
I think this might be a good post for "Deep Thoughts" if it were reworded more neutrally. Maybe there should be a subforum for just weird people on the Internet, not necessarily strictly lolcows. Or maybe I do have a really narrow view of what a lolcow is. I just know I've seen that, oftentimes, people will treat people they don't personally agree with as lolcows, and instead of a discussion of the merits of their ideas, it becomes personal attacks on the person involved.
I think that, depending on how you go about defining who or what a "lolcow" is, you could somehow say that anyone's a lolcow. A lot of the people who got their start here did so for a variety of reasons: ineptitude and smugness (Dobson), pathological lying and a cult of personality (Wu, almost everyone on Tumblr), incredibly shitty and unoriginal artwork and childishness (KirbyKrisis), fucking weird and creepy fetishes (Deeker), pseudoscientific jargon (this guy); and in some cases a cow becomes popular because of the audience he attracts rather than because of himself. John Enter is probably the biggest example of a cow whose fanbase ended up being more of a discussion point than the man himself (although when we made contact with said fanbase, most of us wanted to drink ourselves into a coma). People are into a lot of weird shit and have done a lot of lulzy things, and you could theoretically turn anyone into a lolcow if you dig up enough dirt and are clever enough in talking about it. These threads tend to live and die by the discussion they generate, whether it's about the cow himself, the audience he has, or some other issue regarding the cow (and that third one usually involves a lot of shitposting and cringing).

I do think that having an expanded definition of a lolcow is nice, because gems such as the Loveshys and Len might have been overlooked otherwise, but I also agree that said definition can lead to people getting very defensive or too involved when the discussion begins to bleed into their hobbies, interests, etc. The Lolita thread that came up recently is probably the greatest example of this, since it turns out a lot of people powerleveled and showed their interest in Lolita to a distressing degree, and it all went downhill faster than "DOX ME LEN-SENPAI SEND ME PIZZAS."

Also, champthom, you have more veterancy on these matters than a hell of a lot of us, so your opinions still matter. :D

But I'm off-topic sperging at this rate, sooooooooooo...

*ahem*

I just wanna say, in case I haven't already: if you wanna try to make some sort of philosophical or scientific point, you generally don't make it look any good by associating it with Harry Potter or ponies.
 