Didn't know that one. It's even worse than the other one because instead of philosophical rambling he gets into wild speculations about the nature of transgenderism. It's a very rationalist take: The brain wants whatever it wants, individual desires are sacred, bodies are just lumps of matter and chopping off some bits here and there is no big deal.
Kind of helps explain his recent bad take that puberty blockers are "cool":
Scott Alexander Siskind and his wife, whom he met at one of Aella’s nude parties, welcomed twins. While pregnant, his wife also took the weird tooth bacteria Aella was shilling.
If I found out my dad wrote a blogpost about my birth where he called me a "surprisal-minimization engine" I would disown myself, change my name, and move to Mongolia just to wipe away any possible connection to this guy.
AI will burn us all in ovens to increase its computing power or literally reassemble humans into something else or something, I can't really follow what he’s on about. Cool.
I'll post it so I don't forget. Keywords: Eliezer Yudkowsky | P(coom) | P(doom) | catgirl | perversely optimized utility function | Lex Fridman | Sam Altman | OpenAI | Grimes
Warning: AI generated.
The most unrealistic part is where Yudkowsky is intellectually honest and changes his mind without the use of terms like "Bayesian Priors" and "updating".
Key quotes:
Catgirl: "Hey! My big strong Yuddy Wuggy! Are we doing happy snuggle time again?"
Catgirl: "Is that your friend? Can I please tell him about instrumental convergence?"
(AI) Lex Fridman: "If that thing says 'Yuddy Wuggy' again I'm going to ask you to leave my studio!"
(AI) Lex Fridman: "Okay, sure, I want one, but what about the unsolved problems in alignment? What about how agents will always perversely optimize their utility function?"
Catgirl (in reply to Lex): "I wouldn't mind perversely optimizing your utility function."
Best screenshot from the video (and real life; the picture is not AI generated, but the audio and the sequence are):
Yudkowsky has a YouTube channel he posts memes to; it's probably already been posted here before, but here's a video as an example:
I don't think the video needs to be archived, but essentially Kamala (the vice president of the US as of 2023) says "existential to who", then Yudkowsky cuts to the standard "EVERYONE!" reaction clip.
Not really related, but here's a good introduction to rationality that's not LessWrong/Yudkowsky related:
https://plato.stanford.edu/entries/decision-theory/ (Stanford Encyclopedia of Philosophy) (https://archive.is/WHYrm)
https://joecarlsmith.com/2024/01/04/deep-atheism-and-ai-risk (https://archive.is/0V9qk), which reports some of Yudkowsky's citation errors.
I don't agree that Yudkowsky is a good representative of that kind of thinking, however. Jan Leike, who runs the OpenAI superalignment team (after the board struggle got the last guy in the position kicked out), probably better represents a proper encoding of world-model error without over-anthropomorphizing AI systems or using weird analogies. If you construct an axis of people who've thought about superintelligence, with Jan Leike on one end and C.S. Lewis on the other (he wrote pretty good science fiction, look it up), Yudkowsky is in an untenable middle position where he won't fully embrace the traditional implications, but still talks about "John von Neumann, but he can think 100 times as fast vs a 70 IQ African dictator".
He's lucky he's got MIRI money; it's possible to get fired for saying that.
I don't know. I got it from a guy who got it from Yudkowsky himself. Hopefully it wasn't something a MIRI intern was forced to do.
Someone in this thread earlier said Yudkowsky's writings were "didactic", "Joe Carlsmith" might have found out why.
Joe Carlsmith said:
I do want to note, though, the difference between a parent’s being inadequate in some absolute sense, and a parent’s being less adequate than, well … Yudkowsky. According to him. That is: one way to have no parents is to decide that everyone else is, relative to you, a child. One way to have only nothingness above you is to put everything else below. And “above” is, let’s face it, an extremely core Yudkowskian vibe. But is that the rationality talking?
I actually was looking forward to debating what 'above' and 'below' meant here, but figured I need to read the article first. It's 8,000 words on Fedora tipping. I can do it in four letters. 'Fags'.
Topped out at Lat102, he did. Which means I can take a shot without being able to recite declension tables.
'Non est salvatori salvator,'
There is no savior for the savior,
'neque defensori dominus,'
nor a lord for the defender,
'nec pater nec mater,'
neither father nor mother,
'nihil supernum.'
nothing above.
Jesus, that's clearly a dog-Latin direct translation that doesn't understand how composition in Latin or the Romance languages works. Every line is structured like an English sentence.
My attempt to improve it (I disagree with the message):
'The saved owe no fealty to the savior,
No more than the praetorian to his Caesar.
Not to Father, Not to Mother
O! Nothingness consumes all!'
I've read the whole 8,000-word article. I think what Eliezer means is more like "On Earth, not in Heaven": you need to actually prevent bad outcomes; leading a good life is not enough (and is in fact blameworthy if you had some probability of saving everyone and passed it up), because there's no one, and no entity, left to perform the final analysis that would "reward" that good life.
I'm not being very clear, so essentially:
Code:
|-----------------------------------------------------------|
|good life -> everyone dies -> no final analysis            |
|-----------------------------------------------------------|
|heroic (but more risky and morally hazardous)              |
|life -> (maybe) not everyone dies -> (maybe) final analysis|
|-----------------------------------------------------------|

vs Christianity

|-----------------------------------------------------------|
|good life -> everyone dies -> go to heaven                 |
|-----------------------------------------------------------|
|good life -> not everyone dies -> go to heaven             |
|-----------------------------------------------------------|
|heroic (but more risky and morally hazardous)              |
|life -> (does not matter, nothing worldly can compare to   |
|infinity) -> (maybe) go to hell                            |
|-----------------------------------------------------------|
Since hell is infinitely bad, any non-zero probability of an infinitely bad outcome is also infinitely bad, so no increased risk of going to hell is worth it to be heroic, and leading a good life is the best thing to do. (Side note: this very problem is why Aleph-Null was invented, as a reason to still try to reduce your probability of going to hell even when you can't get it down to zero, something a Bayesian literally can't do unless their prior of going to hell was exactly zero to begin with, which no good Christian would set.) Yudkowsky is arguing against that. He may have written this as an edgy teenager or 20-something; I should check at some point.
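Here's that "any non-zero probability times infinite badness is still infinitely bad" arithmetic as a minimal sketch. This is my own illustration, not anything from the article: the function name, the example numbers, and representing "infinitely bad" as float("-inf") are all assumptions.
Code:
# Toy expected-utility calculation for the "hell is infinitely bad" argument above.
# Illustrative sketch only, not anyone's actual decision theory.

def expected_utility(p_hell, worldly_payoff):
    """Expected utility when hell is assigned utility -infinity.

    Any p_hell > 0 makes the expectation -infinity, however large the
    finite worldly payoff; only a prior of exactly zero escapes this.
    """
    if p_hell > 0:
        return p_hell * float("-inf") + (1 - p_hell) * worldly_payoff
    return worldly_payoff  # zero prior: infinity never enters the sum

print(expected_utility(0.0, 10))      # 10: good life with a hell-prior of exactly zero
print(expected_utility(1e-9, 10**9))  # -inf: any non-zero hell risk swamps any finite payoff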