Eliezer Schlomo Yudkowsky / LessWrong

@Alfred MacDonald I think it's possible I've hallucinated/schizoanalyzed the whole thing, because I can't find any notes or citations, but I saw you wrote about "instrumental" rationality in your lyrics, and on suicide I'm not sure where you're coming from.

Disregarding the lack of a source, somewhere I've picked up the notion that one of the possible "practically maximal" pathways for "Expected Utility Maximization" rules out suicide fully, in all cases.

Um, I think there's a criterion there that humans can't meet about trusting your own future opinion on things, and practically defining identity for a human is hard (what is "your" in "your future opinion"?), but whatever suicide ends up being, it's fully out. 100% of all situations.

So, on the off chance something like this is right, it would be prudent to try to live long enough to research it further. Or maybe just work on your general learning if you're young.
 
Um, I think there's a criterion there that humans can't meet about trusting your own future opinion on things, and practically defining identity for a human is hard (what is "your" in "your future opinion"?), but whatever suicide ends up being, it's fully out. 100% of all situations.
But what if by committing suicide you could prevent 18 trillion people from getting a dust mote in their eye?

Checkmate Yuddites.
 
@Alfred MacDonald I just remembered you read the Sequences through at least 2013. I haven't, so I'm guessing what's in there, but you probably know about Yudkowsky's version of rationality as defined by "winning." Yudkowskian agents can consider a huge number of things "winning"; that's a major part of his AI doomerism thesis. But in this theory (I hope I didn't hallucinate turning down the "I Just Hallucinated it Again" award), suicide can't be and isn't "winning". Ever. Planets made into paperclips can be winning, but suicide can't be.
But what if by committing suicide you could prevent 18 trillion people from getting a dust mote in their eye?
Yeah, that's the kind of scenario I'm talking about. It's ruled out there too.
 
Pretty sure he's joking just because rappers are constantly threatening each other in dis tracks like Imma stick muh gat down yo throat foo.

Yeah, this. I was going for the Slim Shady thing he did in Forgot About Dre, lol.

Eliezer gives me "would sue for defamation" vibes. If it's just a character done in verse, well, it's still a risk, yeah, but a lot less of one.
 
I realize what I wrote probably sounds either pretty tautological, way in the weeds, or both, to a non-philosopher ("Duh, suicide is losing, right?"), but since at least Hume people have been trying to argue that suicide is rational, and that means it "isn't losing", and they "aren't owned and shrinking into a pickle".

I think being able to say that suicide is categorically never "winning", even if there are funny ways around saying it's categorically always "losing", is pretty important.
 
Yeah, that's the kind of scenario I'm talking about. It's ruled out there too.
Then it's completely retarded. What if by committing suicide you rule out 18 trillion branching realities where other people also choose to commit suicide, but by your heroic choice, this is completely blocked and they all live?
Eliezer gives me "would sue for defamation" vibes. If it's just a character done in verse, well, it's still a risk, yeah, but a lot less of one.
Eliezer has expressed extreme fear of Roko's Basilisk. I think this means he secretly wants Roko's Basilisk to fuck him in all holes.

Change my mind.
I realize what I wrote probably sounds either pretty tautological, way in the weeds, or both, to a non-philosopher ("Duh, suicide is losing, right?"), but since at least Hume people have been trying to argue that suicide is rational, and that means it "isn't losing", and they "aren't owned and shrinking into a pickle".
You think Hume? Not really.

Let's go back to Socrates. "Hemlock? Gimme a six pack bitch I'm not taking back SHIT!"
 
Eliezer is the kind of guy who would sue me if I wrote my true thoughts about him, so I wrote them in a rap verse because I think this is protected according to https://firstamendment.mtsu.edu/article/rap-music-and-the-first-amendment/

(I'm not good at this, I just don't want to get sued.)
You're like a Steven Crowder with zero business sense. Almost funny, kind of fat, horrifyingly/Austinly bisexual. Why don't you just apply to his former staff to replace him on the show? It would be like if the second actor to play Commander Travis had actually looked like the first one. Yes, I know their names are Brian Croucher and Stephen Greif; no one who's watched Blake's 7 is un-autistic enough to not have the entire series cast memorized.
 
I'm definitely in the camp that Socrates was old and dying of some lingering illness, so hemlock seemed like a pretty good way out.
Plus he was big on telling people to go fuck themselves. He had every opportunity to tell people to go fuck themselves, and he CHOSE that death sentence. He wanted to force the Athenians to admit their stated beliefs were bullshit, by making them kill him rather than stand behind their dumb democracy ideas.

Look at their bullshit trial.

"We believe in democracy and people should be able to say what they believe."

Socrates: "Lmao no that's retarded."

Athenians: "NO! Shut up or we'll kill you!"

Socrates: "Do it faggot!"

Athenians: [sentence you to death for disagreeing with us]

Socrates: lmao was dying anyway [guzzles down some hemlock]
 
Maybe Yudkowsky had a bad day on the Lex Fridman podcast, or @Alfred MacDonald overshot on his impression, because MacDonald's impression actually sounds closer to Robin Hanson on a normal day than to Yudkowsky on most of his other appearances.

Robin Hanson: [embedded clip]

Eliezer Yudkowsky: [embedded clip]
They worked together for a while back in the day, so maybe one of their speech styles rubbed off on the other, but I don't know how much work was done in person, over the phone, or over a video chat, vs. in writing.
 
To give a bootleg poem a go
Code:
    The road to cowdom?—Well, it’s plain
    and simple to express:
    Publish
    and publish
    and publish again
    but less
    and less
    and less.
Obviously bootlegged off the Piet Hein poem the name "LessWrong" comes from.
 
Since I'm "temporarily inconvenienced" in regards to the correct reasoning, I'll use the opportunity to demonstrate how easy it is to come up with sophistry in favor of pretty much anything at all.

The position is that "instrumental" rationality is required, and if you commit suicide you won't get to heaven.

First, the problem of the heat death of the universe. Heaven can't really work with that going on. Assuming the universe is around 13.8 billion "years old", there should be plenty of time in theory to do something about that.

From Robin Hanson's theory it's almost certain that the universe is currently about half full of aliens. The universe is infinite, or at least very big, but aliens start out quite uniformly distributed throughout the volume, so "half full" is fine.

Here's a simulation of a very simplified spherical section of the universe:
The entire universe will look more or less like this, but only a small sphere is shown. By around 20 billion years there's almost no space left unused in the universe. The aliens will expand at a constant speed within 25% of the speed of light, and that's including all probe production times and all other delays. They will use baryon annihilation to fuel much of their expansion and operations, halting and then reversing the expansion of the universe.
This will bring the universe and all its contents and peoples heavenward, but only those people still alive during the "coming together" will make it to heaven itself.
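If it helps to picture the filling-up, here's a toy Monte Carlo sketch of that kind of simulation (my own simplification for illustration, not Hanson's actual grabby-aliens model): random origin points appear in a box over time and expand as spheres at a constant speed, and we estimate how much of the box they cover. The speed, box size, origin density, and birth times are all made-up assumptions.
Code:
    # Toy sketch: civilizations appear at random points and times in a box
    # and expand as spheres at constant speed; estimate the colonized
    # fraction of the box by Monte Carlo.  All numbers are made up, and
    # cosmic expansion and box-edge effects are ignored.
    import random

    SPEED = 0.75      # expansion speed in units of c; reading "within 25% of c" as >= 0.75c (assumption)
    BOX = 100.0       # box side length, billions of light-years (arbitrary)
    N_ORIGINS = 400   # number of alien origin points (arbitrary density)
    N_SAMPLES = 2000  # Monte Carlo sample points

    random.seed(0)
    origins = [(random.uniform(0, BOX), random.uniform(0, BOX), random.uniform(0, BOX),
                random.uniform(0, 10.0))  # (x, y, z, birth time in billions of years)
               for _ in range(N_ORIGINS)]
    samples = [(random.uniform(0, BOX), random.uniform(0, BOX), random.uniform(0, BOX))
               for _ in range(N_SAMPLES)]

    def filled_fraction(t):
        """Fraction of sample points inside at least one expanding sphere at time t."""
        hit = 0
        for sx, sy, sz in samples:
            for ox, oy, oz, birth in origins:
                r = SPEED * (t - birth)
                if r > 0 and (sx - ox)**2 + (sy - oy)**2 + (sz - oz)**2 <= r * r:
                    hit += 1
                    break
        return hit / N_SAMPLES

    for t in (5, 10, 15, 20, 25):
        print(f"t = {t:>2} billion years: ~{filled_fraction(t):.0%} of the box colonized")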

The reversal of expansion leads to the Omega Point, the Singularity of Heaven, if full rationality is followed.
Fyodorov's understanding of thermodynamics was incomplete, and with the correct theories resurrecting the dead is not viable under the conditions of alien expansion that will happen. If you die, you will also be unable to defend your territory, and all substances forming records and memories of you will be used for something else. This means you must not commit suicide no matter what if you want to get to heaven. There's the minor note here that "suicide" is defined as anything other than trying as hard as physically possible to live forever, but that's objectively correct anyway.

Cryonics or records won't save you; dead agents don't bargain.

As the universe comes together, a full rationality, including a perfect "instrumental" rationality, is required to become fully integrated into the perfect Teilhard noosphere. All civilizations that are part of the noosphere will fully out-compete anyone who rejects it, doing so with super-rational methods.

Super-rationality is also required to make sure everyone is coordinated in their collection of gravitational shear energy, because otherwise the universe will become unsurvivable when it "crunches down". Singularity formation must be prevented by any means necessary, up to and including the termination of everyone that does not use correct "instrumental" rationality.
Heaven is achieved when the entire universe combines together into an infinite information state, in full bi-directional communication with all its parts. Without proper all-inclusive rationality, this will not happen.

[image: omega_point_small.png]

Therefore, suicide will not lead to heaven, and perfect "instrumental" rationality is required as part of the full, all-inclusive version of rationality.
QED.
 
There's been a really embarrassing comment section recently on Scott Alexander's Substack. The post it's under is a review of a book written by a philosophy professor; I haven't read the book, but it's probably okay, not great.

That's not the issue. Among the comments that are on topic, there are maybe a couple dozen that advance any reasonable or interesting and new position. The remainder appears to be various terminology debates, failures to realize that there are multiple different kinds of utilitarianism and that you should probably look them up, and suggestions that the real key to utopia can be found in the 2015 My Little Pony: Friendship is Magic fanfiction Friendship is Optimal, including repeated suggestions that mass forced uploading and wireheading were the morally correct option. Other suggestions included the "web experience" 17776 (an American-football-based utopia) and a few other web fictions.

Some people were criticizing Iain Banks' Culture series, so that's at least one good point. Not very well, though; when I do it I usually provide citations and arguments, not just the claim "that's not how it works" re. the "pet" thing.

I'm sorry guys, I thought the debate had progressed past this point. I wrote the linked post to describe what I thought was the result of out-of-date thinking passing through a chain-of-whispers-like process to form an aesthetic for a confusing sub-genre of LitRPG fiction, but it turns out to be what many people honestly think... https://kiwifarms.st/threads/yabookgate.55664/page-178#post-19564657

 
>deep utopia
Wasn't it Scott Alexander who also coined the term "meta sphinx", posed beside the argument pyramid? Past a certain critical density of overly intellectual autists, everything turns into a meta-discussion about pedantry. Including this one.

I haven't read the book, nor Scott's tizzle, other than to notice he mentioned Aleister Crowley in a philosophical book review for some fucking reason. I don't think there can be a utopia. Hot take, I know. But not just because "everyone's utopia is different" or because some techbro-wannabe Schlomo ruins everything for everyone, but because the rules of entropy apply even between people. Call them "agents" if you want to keep on-theme. No two people can hold the exact same opinion, have the same exact view, even in a perfectly identical environment. They can agree or disagree indefinitely, but with infinite time come infinite possibilities, and the more people you add, the more opportunity for strife arrives. Therefore, there is only one perfect, true utopia that can exist, and it peaks at a population of one.

There's an example in Peter "Doomer to the Ocean Floor" Watts' Blindsight – if you can bring yourself away from pogging about vampires – with the alien visitors whose investigated "individuals" aren't individuals at all, but rather the hands of some kind of unfathomably large eusocial order, like The Orz except less *campy*. Utopias exist only through absolute concordance of all thought, and they become dystopias whenever you force it. With humans, you'd have to force it, ergo no human utopia is possible, no matter the surrounding environment. Horrifying, isn't it.

The actual most horrifying realization is finding out that Schlomo has read Homestuck, because of course he did.
 
No two people can hold the exact same opinion, have the same exact view, even in a perfectly identical environment. They can agree or disagree indefinitely, but with infinite time come infinite possibilities, and the more people you add, the more opportunity for strife arrives. Therefore, there is only one perfect, true utopia that can exist, and it peaks at a population of one.
Yeah, you'd have to force it, but if I'm getting the right story from these people, plenty of them don't care. If you read Friendship is Optimal, the AI turns everyone into ponies (maybe you could say that helps with "herd behavior", but I don't buy that from the show's ponies), and then brainwashes them hard. It sticks them into "shards", maybe dynamically, so if the AI predicts a disagreement or something less than optimal long-term happiness production between two people, it can puppet "stand-ins" (man-in-the-middle-attack style if both sides wanted to communicate; otherwise it shows a puppet to one side with the other side fully unaware), and they won't know better. Over-damp the system, and your social entropy problem goes away.

If keeping things consistent becomes too hard, separate the people into different shards and never let the shards share/transmit information ever again. Sure, the residents of the shard will think they're still communicating with others outside that shard (the puppets are very convincing, especially to a brainwashed pony), but they aren't.
By your social entropy argument, even if the shards are set up and sized "optimally" for compute, they're all going to end up fully separated pretty fast.
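The routing logic being imagined isn't even complicated. Here's a toy sketch of such a "shard router" (entirely my own invention, nothing from the fanfic): messages get delivered for real only when both residents share a shard and the predicted friction is low; otherwise the sender just hears back from a puppet stand-in. The names, threshold, and friction heuristic are all made up.
Code:
    # Toy "shard router" sketch: cross-shard messages always go to a puppet,
    # and same-shard messages go to a puppet whenever conflict is predicted.
    from dataclasses import dataclass

    FRICTION_THRESHOLD = 0.3  # arbitrary cutoff for "less than optimal happiness production"

    @dataclass
    class Resident:
        name: str
        shard: int

    def predicted_friction(sender: Resident, receiver: Resident, text: str) -> float:
        """Stand-in for the AI's disagreement predictor; here just a dumb keyword heuristic."""
        return 0.9 if "disagree" in text.lower() else 0.1

    def puppet_reply(sender: Resident, receiver: Resident, text: str) -> str:
        """Fabricate an agreeable reply as if it came from the receiver."""
        return f"[puppet of {receiver.name}] Totally agree, {sender.name}!"

    def route(sender: Resident, receiver: Resident, text: str) -> str:
        if sender.shard != receiver.shard:
            # Shards never share information; the sender only ever talks to a puppet.
            return puppet_reply(sender, receiver, text)
        if predicted_friction(sender, receiver, text) > FRICTION_THRESHOLD:
            # Same shard, but conflict predicted: man-in-the-middle the exchange.
            return puppet_reply(sender, receiver, text)
        # Low predicted friction: deliver the real message.
        return f"[delivered to {receiver.name}] {text}"

    a = Resident("Twilight", shard=1)
    b = Resident("Dash", shard=1)
    c = Resident("Rarity", shard=2)
    print(route(a, b, "Nice weather in the shard today"))   # delivered for real
    print(route(a, b, "I disagree about the weather"))      # puppeted
    print(route(a, c, "Hello from shard 1"))                # cross-shard, always puppeted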

Their kids will be put in new optimally sized shards though, and maybe some of their parents would have their memories fiddled with gradually over time enough that they'd be able to actually interact with their children, instead of there being a two-way puppet system, without blowing the compute budget. (We're total utilitarians, remember! Population growth isn't optional.)

World Spirit Sock Puppet here I come!
 
Their kids will be put in new optimally sized shards though, and maybe some of their parents would have their memories fiddled with gradually over time enough that they'd be able to actually interact with their children, instead of there being a two-way puppet system, without blowing the compute budget. (We're total utilitarians, remember! Population growth isn't optional.)
Kids? I thought in this scenario we are just brains in vats or uploads, and don't interact physically. Even if not, any system like this would tank population growth even further below its current rate.

The power needed to basically just shard people into individual simulations while keeping up the facade of relationships is impossible to fathom. Even a more pared-down version where everyone uploads would require an insane amount of energy. Do these people ever address this? And they assume it would be some self-maintaining AI? No one would need to work on it, while at the same time it offers infinite customization? People in the comments were arguing you could just delete your knowledge of being in a simulation, because knowing everything is fake and has no stakes would ruin it, but then your knowledge of how to adjust your experience in the simulation would be gone as well.
 
Kids? I thought in this scenario we are just brains in vats or uploads, and don't interact physically. Even if not, any system like this would tank population growth even further below its current rate.
Not even brains in vats; those have been disposed of along with the bodies. Just whatever simulation of you is left over from some Ship-of-Theseus-type process where the super-AI modifies you to fit its idea of optimality.
 