Eliezer Schlomo Yudkowsky / LessWrong

It's hard to do because Yudd clearly just ran his poem through Google Translate, so it only carries the literal meanings of the words.
"Writing" in Latin when you don't know Latin is just fucking embarrassing. It is the ultimate in midwit shit, just as Yuddo is the NIST sample of what a midwit is. To someone who can't even read Latin, all it means is "look how smart I am for knowing Latin" and to anyone who does know Latin, all it means is "hurr durr look what a retard I am pretending to know Latin."
 
all it means is "hurr durr look what a retard I am pretending to know Latin."
I'm convinced he did it just to give it religious vibes. There are a lot of words you can use for 'rescue', and 'salvo' isn't the first translation on Google or really what anyone would use devoid of context. He picked that on purpose. Not to give Yuddo too much credit, because I know he doesn't understand Latin sentence structure, but he probably worked in 'nec mater nec pater' to sound prayer-like. This is Deep Atheism after all.

Maybe. Ask @Vecr. I didn't read all that; I just skimmed the article, read his posts, and intuited what was meant while drinking, as Nick Land would want.

I fully support dog Latin in the circumstances of a Chorus busting in during a climactic battle tho. I don't care what medium. Vidya, anime, power metal, whatever. Books should have it. You inject that shit directly into my dopamine receptors.
 
all it means is "hurr durr look what a retard I am pretending to know Latin."
I have a feeling he wrote it quite a long time ago, and yeah I think it was supposed to sound religious as well. Maybe he was on a meme theory kick back then.
 
  • Like
Reactions: Markass the Worst
No, stalker child, it is you who stands in the way of the Glorious Transhumanist Future.
 
Unlikely. He's taking a page from the Culture series, where "aligned AIs" called Minds basically run a Fully Automated Luxury Gay Space Communism society. The novels are good and I recommend them.
The novels are fine if you are into that kind of thing, but the actual society of The Culture is horrible. Obviously the author of the books is a good enough writer to hide it, but if you read carefully:
  1. What the books call "humans" clearly aren't.
  2. The language the "humans" speak was artificially created by the Minds to prevent them from thinking in ways the Minds don't like.
  3. The "humans" are genetically engineered (and presumably implanted with micromachines) to be able to experience much more pleasure than a real Human or an unmodified "human". See Nozick's utility monster. In addition, all the time wasted on pleasure almost certainly makes the "humans" less likely to think critically about their situation, or to think much about anything serious at all if bad outcomes are involved, since thinking about bad things reduces pleasure.
  4. The "humans" clearly have no ability or desire to audit their simulated worlds for S-risk.
  5. The "humans" don't appear to care about S-risk at all, despite that being a major justification for the Minds expanding their sphere of influence (expanding The Culture), likely making the AI systems not actually aligned (but only pretending convincingly enough for genetically- and language-controlled "humans").
  6. The "humans" of the special forces division ("Special Circumstances"), the ones who should in theory be the most intelligent and capable, put no effort into limiting the simulation detail of the Minds' predictions, even in situations of war and mass genocide (in the billions, apparently, not that I've read the book). For any rational Human (I mean a real human this time, from Earth), even one with a very high prior on the trustworthiness of the AIs, that should imply a very high expected-suffering value in their world model, given the sheer scale of suffering that necessarily has to be simulated for the AIs to make high-quality decisions.
More things to read on this subject: "Against the Culture" https://www.gleech.org/culture ( https://archive.is/gv0lG ), "Why the Culture Wins: An Appreciation of Iain M. Banks" ( https://archive.is/XEvNk ), and "The Matrix as Metaphysics" http://consc.net/papers/matrix.html ( https://archive.is/Wk6LK ), the last one being something the author could have read to pretty easily figure out that what he wrote was bad.
"An easy formula is that you wouldn’t want to live in anyone else’s utopia.", yeah, tell that to the TESCREALs who keep writing the same one over and over again, and expect everyone else to want it too, to the point I have to tell people to stop reading various things because I already know and they already know what happens and that 500 pages is not worth it.
Edit: to be clear, when I say "human", the "humans" in the Culture novels are not even from Earth. They're just similar enough to stand in if you imagine the books as "translated" from the AI-constructed "human" language.
 
Tagging in @Vecr because the quote function is broken again.

Basically, the Minds are the high-tech version of Eywa and the same copypasta about stagnation applies.
 
  • Thunk-Provoking
Reactions: Vecr
I'm just saying it sounds very objectivist-y. It wouldn't surprise me if Yudkowsky was on the John Galt train.
I remember reading a LessWrong article, "the problem with Ayn Rand", around 2010; sadly I can't find it now. The impression I got was that he feels himself to be too good for Objectivism.

Re: Pascal's wager.
You can prove that infinitesimal probabilities are equal to 0 with the "0.99999...9 = 1" proof.
 
You can prove that infinitesimal probabilities are equal to 0 with the "0.99999...9 = 1" proof.
I have a feeling that's not how that works, but maybe someone who knows the correct math can chime in. In the original Pascal's wager you are multiplying by infinity, right?
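Sketching my rough understanding here (not a mathematician, so treat this as a guess at the standard answer, not gospel): the 0.999... proof lives in the real numbers, and the reals don't have nonzero infinitesimals in the first place, so there's nothing for it to prove. The original wager breaks on the infinity instead. In LaTeX form:

% The 0.999... = 1 proof is just a geometric series, evaluated in the reals:
0.\overline{9} = 9\sum_{k=1}^{\infty} 10^{-k}
             = 9 \cdot \frac{10^{-1}}{1 - 10^{-1}}
             = 9 \cdot \frac{1}{9} = 1
% The reals are Archimedean, i.e. they contain no nonzero infinitesimals,
% so an "infinitesimal real probability" is already 0 by definition and the
% 0.999... argument adds nothing.
% Pascal's wager instead multiplies a small but *finite* p > 0 by an
% infinite payoff:
E[\text{wager}] = p \cdot \infty = \infty \quad \text{for every real } p > 0
% so the expected value diverges no matter how small p gets. The problem is
% the infinity, not infinitesimals.

If you actually want infinitesimal probabilities you'd need a non-Archimedean number system like the hyperreals, and at that point the 0.999... trick definitely doesn't carry over.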
The impression I got was that he feels himself to be too good for Objectivism.
Probably. Also, Objectivism has the wrong utility function in his view, and as far as I can tell, so does EA (Effective Altruism).
 
I remember the time this guy was actually acknowledged by Rowling. There were news articles about it (yes (:_(). Of course, this was during a time when HP was still at the height of its popularity (even after 15 fucking, goddamn, Jesus Tittyfucking Christ years), and everyone was still in college (hell). She was apparently so impressed by his "work" that it gave him a massive ego boost. She said that he "introduced ideas she had no idea were possible" (maybe because she's a malignantly ignorant bitch). To think this guy had a thread here.
 
I remember the time this guy was actually acknowledged by Rowling. There were news articles about it (yes (:_(). Of course, this was during a time when HP was still at the height of its popularity (even after 15 fucking, goddamn, Jesus Tittyfucking Christ years), and everyone was still in college (hell). She was apparently so impressed by his "work" that it gave him a massive ego boost. She said that he "introduced ideas she had no idea were possible" (maybe because she's a malignantly ignorant bitch). To think this guy had a thread here.
~None of the ideas in the book are original (Yudkowsky's work on Functional Decision Theory (FDT) is essentially unused), but it's not like Rowling read E. T. Jaynes before she wrote her books. Maybe they would have withstood common criticisms better if she had, but mostly, since he was not very well known outside of physics in the 1990s, it would have been strong evidence for her intelligence. Realistically though, the CIA World Factbook is what someone should have given her.
 
Caught this and thought of you guys.
View attachment 5842328
(https://www.youtube.com/watch?v=vv3RFz8SecQ in case the embed stops working again)
Oh no, I've been found out. Actually though, I think there is a connection there: many philosophers who ended up working on cognitive science and thinking got their start in AI and/or robotics. Probably the most obvious example is the cluster around Douglas Hofstadter and his old artificial intelligence research group at Indiana University, including David Chalmers and Daniel Dennett.
 
Double post, but Bostrom's research facility shut down yesterday.
petrov_seminar_room.png
F720jsaWAAQSOmE_stanislav_petrov_chud_meme_we_re_picking_up_incoming_missiles_no_nothing_ever_...png
Obviously featuring the Petrov Meeting Room as shown here, but also the Arkhipov Meeting Room.
arkhipov_meeting_room.png

Some of Bostrom's employees were influential LessWrong posters, though I don't think he was one.
This organization was the origin of the Overcoming Bias blog.

Obligatory AI generated song:
View attachment 5916519
The Fooming Shoggoths, "I'm a Good Bing", FHI at Oxford
(Where FHI = Future of Humanity Institute = Bostrom's organization until it got shut down. Not to be confused with Tegmark's FLI, the Future of Life Institute)
eric_drexler.png
Eric Drexler, "father of nanotechnology", features in Yudkowsky's fan fiction.
fhi_bingo.png
A partially filled-in bingo card.
doomers_dooming.png
Doomers dooming.
fhi_keep_calm_and_reduce_x_risk.png
Keep calm and reduce X risk. (extinction/existential risk)
fhi_halloween_2014.png
Halloween party. Is this the wrong way up?
 