Eliezer Schlomo Yudkowsky / LessWrong

Schlomo spergs about AI killing us all so you shouldn't even try to use it to cure death because reasons. Promotes cryonics as if it's a silver bullet.

source (a)

Even the biggest supporters of cryonics admit it assumes people in the future can figure out how to fix whatever killed you. Without that knowledge, cryonics is just as bad as death. That's already pure "trust me bro" thinking, and it gets even worse when you ban AI because some tranny-simping Jewish doomer said so, so you can't even use AI to revive cryonics patients.
I thought cryonics was generally seen as a last ditch, hail Mary pass style option. I don't think it's generally prohibited by religion (except maybe as a "don't be selfish" injunction, but if you're a researcher or engineer or poet or something who could help in the future if you weren't dead, that's not a problem), so it's not going to send you to hell if that's real. It's safer than Pascalian Medicine (i.e. taking random stuff in case it works, similar to Pascal's Wager), because by the time you go in the tank you're either dead or most of the way there, so you aren't risking weird interactions or side effects that would shorten your life.

I suspect Yudkowsky doesn't think cryonics is all that reliable (power goes out for a bit and there goes your brain); he's probably trying to come up with an argument that won't get him called a "deathist" or whatever the transhumanists are calling them now. That's not a problem your standard Twitter weirdo or philosophy expert has: they've got decades of arguments that Yudkowsky can't use, because he threw them out in the 90s.
 
I thought cryonics was generally seen as a last ditch, hail Mary pass style option.
Exactly. For him to tell people "just sign up for cryonics lol", as if you never have to worry about death again and don't need AI, is disingenuous.
(Not least because the sort of people who sign up will always worry about death. Banning a way to prevent death, I would imagine, doesn't reassure them.)
 
The other Scott A. got involved too, and I think he missed the whole point.
Scott Aaronson said:
It seems pretty simple: 'I’m sorry you feel that way' is 100% fine from someone who’s right in the underlying dispute, and callous from someone who’s wrong in the underlying dispute! But as in countless other cases, when the underlying dispute is too hard, people feel compelled to retreat to arguments about language.
Mr. "Scott"/Scott Alexander/Scott Siskind/Scott Alexander Siskind/Slate Star Codex/SSC/Astral Codex Ten/ACX/squid314/GSquid314/squid314@livejournal.com/"Sophia"?(maybe not, probably one of his characters, but wait a few years...)/"Jackdaws love my big sphinx of quartz"/scott@shireroth.org/ https://x.com/slatestarcodex / https://lorienpsych.com /Lorien Psychiatry (can't settle on one name?) at least read one of the objections (though not the most obvious/common one), but Mr. Biggest Brain in all of Physics and Philosophy "Other Scott" has apparently forgotten even the existence of the euphemism treadmill.
 
Schlomo was in the hospital for cryogenics because he heart was bad
as he taken to tube, he ask Siri if she'd be there when he woke up in 200 years with a new heart
'of course I will'
'We'll see about that' He roared
Schlomo looked up from the table in the future with his new heart
'Dr, where is AI?' he asked the surgeon 'Did we win?'
'Schlomo, where do you think we got your new heart'
 
Scott Aaronson, who I talked about earlier, might be getting his text-watermarking scheme deployed. OpenAI has been sitting on it for a long time, but if California bill AB 3211 passes they'll probably activate it (unless it's already secretly active under a rotated key to avoid future public disclosure).
It's funny that he's this enormous brain guy who's an expert computer programmer, top theoretical and experimental computer scientist, quantum physics professor, quantum computing researcher, quantum simulation theorist, AI researcher/semi-doomer, book author, and long-term blogger, but he can't figure out why people don't like platitudes.
Get ready for mandatory AI watermarking, guys! (Aww, man...)
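
For anyone wondering what that scheme even looks like: going off Aaronson's public descriptions, the sampler is nudged by a pseudorandom function keyed with a secret key, so the text's distribution is basically unchanged but whoever holds the key can score a passage and tell it came from the model. The sketch below is my own illustration of that idea; the function names and details are made up, not OpenAI's actual code.

Code:
import hashlib, math

def prf(key: bytes, context: tuple, token: int) -> float:
    # Keyed pseudorandom value in (0, 1) for a (recent-context, candidate-token) pair.
    h = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2 ** 64 + 2)

def sample_watermarked(probs: dict, context: tuple, key: bytes) -> int:
    # Gumbel-style trick: pick the token maximising r ** (1 / p), where context
    # is the last few generated tokens. On average the output distribution is
    # still the model's, but the chosen tokens have suspiciously high r values.
    return max(probs, key=lambda t: prf(key, context, t) ** (1.0 / probs[t]))

def detect_score(tokens: list, key: bytes, k: int = 4) -> float:
    # Average of -log(1 - r) over the text: about 1 for ordinary text, clearly
    # higher for watermarked text, and only computable if you have the key.
    total = 0.0
    for i, t in enumerate(tokens):
        r = prf(key, tuple(tokens[max(0, i - k):i]), t)
        total += -math.log(1.0 - r)
    return total / max(len(tokens), 1)

Rotating the key, as mentioned above, just means detection only works for whoever holds the current key, which is exactly why "secretly active" is hard to rule out.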
 
Yassine Meskhout has written a blog post saying (among other things) he cut ties with The Motte because it has "suffered a decline" into "right-wing crankery" (for disagreeing with him about the 2020 election).

Oh, also apparently the entire Republican Party is the "bastion" of "delusional conspiracy theorists", according to him. :story:

Archive

He's obviously giving left-wingers a huge pass by failing to mention their delusional conspiracy theories, like Project 2025 or the trans genocide or basically anything to do with Kiwi Farms, &c.
 
Yassine Meskhout has written a blog post saying (among other things) he cut ties with The Motte because it has "suffered a decline" into "right-wing crankery" (for disagreeing with him about the 2020 election).
The podcast episode he links to is really funny though, the police abolitionist (who wants to abolish the police because they keep arresting people for shooting blacks) successfully talks the guest down from TND to PND (partial nigger death) after an incident where the guest starts yelling about 13/52 and goes on a Joe Biden style rant about taking them out. The guest is a pro-police authoritarian who wants to chop black people's hands off and hates defense attorneys (the host is a defense attorney).
https://thebaileypodcast.substack.com/p/e034-an-unhinged-conversation-on#details (The Bailey Podcast E034)
 
I just thought of something, and it's embarrassing I didn't think of it before. Harry Potter and the Methods of Rationality (HPMOR)'s primary content is the chapter titles, the secondary content is some key words and phrases sprinkled throughout, and everything else is filler. That's why the story is so awful, there's not a whole lot of science, and the science done is usually incorrect or follows bad methods.

I'm not kidding. I think the intended reading method is that you Google search every chapter title, as well as key phrases and words you don't understand. That's why some sections read even more stilted than usual: young teenagers need to be able to figure out which term is supposed to be Google searched.

You have to do a bit of fiddling with some of the titles, but here are ones that go right through with either nothing or obvious cuts:

(A Day of) Very Low Probability {reasoning about very low probability events}
Everything I Believe is False
Comparing Reality to Its Alternatives
The Efficient Market Hypothesis
The Fundamental Attribution Error
The Planning Fallacy
Reciprocation
Positive Bias (real name "optimism bias", but it still goes through)
Self Awareness
Impulse Control
Asking the Wrong Questions
The Unknown and the Unknowable
Conscientiousness
Lateral Thinking
Locating the Hypothesis
Dominance Hierarchies
Delayed Gratification
Bayes's Theorem (Bayes' Theorem)
Rationalization
The Scientific Method
Belief in Belief
Machiavellian Intelligence Hypothesis
Hold Off on Proposing Solutions
Noticing Confusion
Empathy
Reductionism
Egocentric Bias
Working in Groups
Personal Financial Management
Coordination Problems
Status Differentials
Pretending to be Wise
Courage
Humanism
Personhood Theory
Utilitarian Priorities
Prior Information
Self Centeredness
The Stanford Prison Experiment
Constrained Optimization
Constrained Cognition (bounded rationality)
Curiosity
Secrecy and Openness
Contagious Lies
Self Actualization
Plausible Deniability
The Sacred and the Mundane (taboo tradeoffs, value of the human life, QALYs, read the start of Pinker's Rationality)
Responsibility
Sunk Costs
Surface Appearances
Cheating
Taboo Tradeoffs (as above, but it's also a chapter)
Multiple Hypothesis Testing
Hedonic Awareness
Time Pressure
Roles
Precautionary Measures
Caring (by Yudkowsky's particular definition)
Shut Up and Do The Impossible (LessWrong term)
Something to Protect (LessWrong term)

If you want me to make a fully digested HPMOR with the entire story stripped out I could have a go, but the thing is way too long to be fun to read.
 
Funny that the guy's a string theorist though. Dan Luu's just hosting the page; the actual author at least claimed to teach string theory. He's mostly right about Yudkowsky's problems, but his own aren't insignificant either. This comes up when he complains about students arguing that string theory is wrong (it probably is wrong).
 
Lmao string theory is bullshit, or if not, useless and untestable.
String theory is basically just fanfiction for theoretical physics. Every single thing that pop-science magazine writers want to believe in can be found in some part of string theory.

Extra compactified dimensions? Check. Multiple universes? Check. Fundamental particles being made of smaller more-fundamental particles? Check. Particles that go faster than the speed of light? Check.
 
Do people still read Dan Luu?
It's not written by Dan Luu, it's written by "su3su2u1", it's only hosted on danluu.com. The name refers to the Standard Model gauge group SU(3)×SU(2)×U(1). He claims to teach string theory in the HPMOR review.

If you believe Yudkowsky on other subjects you probably shouldn't expect cryonics to work. If you had a non-AI-doomer probability of cryonics working of 5% (including the fact that someone has to care enough to scan your brain when the tech becomes available, and not allow any temperature excursions), and an AI doom probability of 99%, you can calculate your survival probability with the formula
Code:
(5 / 100) * (1 - (99 / 100))
.00050000000000000000
Yudkowsky is not writing his posts like that's the number he's thinking of (0.05% probability of survival by the way).

I'm sure he thinks that if everyone followed his advice, cryonics research would improve and the AI doom probability would go down, but he doesn't generally talk about that. Maybe 1% survival probability in that case (say cryonics odds rise to 10% and doom falls to 90%: 0.10 × 0.10 = 1%)?

If you thought the AI doom probability was 80%, that half the time there wasn't doom there also wouldn't be AI in time to save you, and that in half of the cases where the AI does exist it wouldn't save you or wouldn't be allowed to, you get
Code:
(1 - (80 / 100)) * 0.5 * 0.5
.05000000000000000000

A 5% survival rate! I don't think Yudkowsky is making the argument he thinks he is.
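
Same arithmetic as the Code: blocks above, just with the probabilities pulled out as parameters so you can plug in your own numbers (the function and variable names are mine, not anything Yudkowsky wrote down):

Code:
# Survival under the toy model in this post: you only come back if there's no
# AI doom, an AI arrives in time, it's willing/allowed to revive you, and the
# cryopreservation itself actually worked.
def survival_probability(p_doom, p_ai_in_time, p_revival_allowed, p_cryonics_works=1.0):
    return (1 - p_doom) * p_ai_in_time * p_revival_allowed * p_cryonics_works

# The two cases worked above:
print(survival_probability(p_doom=0.99, p_ai_in_time=1.0,
                           p_revival_allowed=1.0, p_cryonics_works=0.05))  # ~0.0005 (0.05%)
print(survival_probability(p_doom=0.80, p_ai_in_time=0.5,
                           p_revival_allowed=0.5))                         # ~0.05 (5%)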
 