Eliezer Schlomo Yudkowsky / LessWrong

Something something dead internet theory?
Sure, and some social network has already done a version of this (I forget if it was Reddit or Twitter or something): they would shadowban an undesirable user, but on that user's version of the site, puppets would keep liking their posts. But in this scenario we are talking about full simulations that mimic reality. Sounds pretty difficult.
 
Yeah you'd have to force it, but if I'm getting the right story from these people, plenty of them don't care.
The LessWrong et al. community revealed to be made up entirely of tyrants and demagogues, who would have thought. Horseshoe theory, etc.

The thing with simulation theory is, everyone assumes that simulating a whole brainload of brains is more efficient than actually having those brains walking about. Has anyone ever theoretically confirmed that? In Feynman terms, the brain runs on ~10 W while the current-technology server farm required to simulate it in real time would take on the order of 1 kW-100 kW, judging by current deep AI training regimes. We are nowhere near the limits of information technology, ergo space-faring civilizations capable of terraforming whole solar systems must be able to simulate brains, people, and their environments more efficiently, right? But could it be five orders of magnitude more efficient?
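Back-of-envelope version of that gap, in case anyone wants to check the arithmetic (the wattages are my own guesses pulled straight from the paragraph above, nothing authoritative):

[CODE]
# Rough sanity check of the orders-of-magnitude gap claimed above. The wattage
# figures are guesses, not measurements of any real system.
import math

brain_watts = 10            # commonly cited ballpark for the human brain
farm_watts_low = 1_000      # 1 kW, optimistic end of the server-farm guess
farm_watts_high = 100_000   # 100 kW, pessimistic end

for farm_watts in (farm_watts_low, farm_watts_high):
    gap = math.log10(farm_watts / brain_watts)
    print(f"{farm_watts:>7} W farm vs {brain_watts} W brain: ~{gap:.0f} orders of magnitude")
[/CODE]

That prints ~2 and ~4, so asking for five orders of magnitude of improvement is asking hardware to close that gap and then beat biology outright.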

I mean, what if all of that is bullshit and, beyond the technological nightmare that is mind uploading, it actually takes inordinate amounts of resources to simulate any shard to the fidelity required to make sentience happy, much less one populated by other people? More resources than it would take to keep them alive? That the theoretical limits of computation are not even remotely achievable and our "crude" electronics are actually pretty good as far as computation goes? Optics, non-silicon-based transistors, and who knows what else have been in the pop-sci circlejerking crib for decades, so what proof is there that computation can actually be done more efficiently?
 
No two people can hold the exact same opinion, have the same exact view, even in a perfectly identical environment.
Jim Goad said something to the effect that if twins were to crash on a deserted island, they'd find a way to get into an argument with each other. You could interpret the story of Cain and Abel the same way: human nature makes coexistence a pipe dream.
I mean, what if all of that is bullshit and, beyond the technological nightmare that is mind uploading, it actually takes inordinate amounts of resources to simulate any shard to the fidelity required to make sentience happy, much less one populated by other people?
We live in a simulation right now, only it's a distributed system. Your brain is the hardware doing the simulation. As long as there's at least one brain out there chugging along, the simulation will continue.
 
  • Agree
Reactions: Cnidarian
I thought in this scenario we are just brains in vats or uploads, and don't interact physically.
Not physically, all interactions happen in the simulation. Everyone left alive (except the AI, if you count that as a person) is a "pure upload"/Em and has no physical body.
Total utilitarianism doesn't make population growth optional in this situation. See the repugnant conclusion / mere addition paradox: https://en.wikipedia.org/wiki/Mere_addition_paradox
"Obviously" the children never exist outside a computer simulation. "In theory" simulating their development shouldn't be any harder than running a "pure upload" on a computer.
In this case the uploads don't get less happy, they just get more brainwashed and their environments would maybe also get lower fidelity.
Not even brains in vats, those have been disposed of along with the bodies. Just whatever simulation of you is left over from some Ship of Theseus type process where the super-AI modifies you to fit its idea of optimality.
Exactly right.
A self-maintaining AI? One that no one would need to work on, while at the same time offering infinite customization?
Yeah, it's an ASI/Artificial Super Intelligence. The plans used to involve something called a "friendly AI", but that's mostly gone now. No friendly AI for you.
The power needed to basically just shard people into individual simulations while keeping up the facade of relationships is impossible to fathom.
Eh, you might be overestimating how discerning a brainwashed pony is.
 
  • Like
Reactions: melty
Total utilitarianism doesn't make population growth optional in this situation. See the repugnant conclusion / mere addition paradox: https://en.wikipedia.org/wiki/Mere_addition_paradox
It seems like a subjective judgement that more people being less happy is better than fewer people being more happy. This framework also assumes happiness is quantifiable by some objective measure.
 
It seems like a subjective judgement that more people being less happy is better than fewer people being more happy. This framework also assumes happiness is quantifiable by some objective measure.
I'm responding too much to this thread, but making "happiness" a number is "simply" the result of evaluating the semi-personalized CEV-result function (semi-personalized because if people can be grouped into preference categories, they can be put in shards together). You'd then take the simple sum of that function evaluated on everyone, each evaluation returning a single number. That gets you the "mere addition paradox". They won't really be less happy, though; you just need to increase the brainwashing so they're happy with less fidelity in their environment and interpersonal interactions.
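Toy Python version of what I mean, with the "semi-personalized CEV-result function" collapsed to a single utility number per person (my own illustration, not anything from the actual CEV writeups):

[CODE]
# Total utilitarianism as a plain sum of per-person utilities. Adding lots of
# barely-happy people always raises the total, which is the mere addition paradox.

def total_welfare(utilities):
    """Population welfare = sum of each member's utility."""
    return sum(utilities)

population_a = [100.0] * 10                        # few people, very happy
population_a_plus = [100.0] * 10 + [1.0] * 1000    # same people plus many barely-happy ones

print(total_welfare(population_a))       # 1000.0
print(total_welfare(population_a_plus))  # 2000.0 -- "better", even though the average craters
[/CODE]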

Edit: Nick Land (the most doomer of all the AI doomers, even if he thinks AI doom is a good thing) calls out E/ACC for being delusional.

"Nothing human makes it out of the near future" vs "Star Trek hopium"

Yudkowsky might be partially referencing Nick Land in an interview with Nate Silver around the end of 2023. He says his doom probability is 98%, but also says "Will we die? My model says yes. Could I be wrong? I most certainly am. Am I wrong in a way that makes life easier for us rather than harder? This has not been the direction that my previous mistakes have gone."
 
Last edited:
Peter "Doomer to the Ocean Floor" Watts' Blindsight
I remember the whole thing about blindsight awhile back, but I missed the ocean floor thing.

@Vecr
If keeping things consistent becomes too hard, separate the people into different shards and never let the shards share/transmit information ever again. Sure, the residents of the shard will think they're still communicating with others outside that shard (the puppets are very convincing, especially to a brainwashed pony), but they aren't.

By your social entropy, even if the shards are set up/sized "optimally" for compute, they're all going to become separated out pretty fast.

If you've read C.S. Lewis' The Great Divorce, this is pretty much his depiction of Hell, only with an AI running it instead of it being self-inflicted.
 
Last edited:
I remember the whole thing about blindsight awhile back, but I missed the ocean floor thing.

If you've read C.S. Lewis' The Great Divorce, this is pretty much his depiction of Hell, only with an AI running it instead of it being self-inflicted.
It's mostly about him being a marine biologist. They're all a bit fucked in the head. But he's capable of thinking, unlike the Rats, who would inflict an I Have No Mouth, and I Must Scream on themselves with open arms. Blindsight is also the best depiction of superintelligence I can think of, all without involving AI.
 
  • Like
Reactions: demicolon and Vecr
I remember the whole thing about blindsight awhile back, but I missed the ocean floor thing.

@Vecr


If you've read C.S. Lewis' The Great Divorce, this is pretty much his depiction of Hell, only with an AI running it instead of it being self-inflicted.
I have. Now that you mention it, it's probably been in the back of my mind as one of the reasons I'm so disappointed in the state of the comment section's proposed "utopias".

On the other hand, I'm pretty sure I accidentally upvoted Yudkowsky on HN. https://news.ycombinator.com/item?id=41885559 "lol, like the government doesn't have 3 more Mersennes they keep secret so they can verify potential First Contact situations" -- Eliezer

I was going to reply there, but then I checked the username. I've run Mersenne code on big systems before, but never for very long (stress testing and plotting throughput graphs). Sure, the government probably runs Mersenne searches as demos, but I don't know if they'd be patient enough to find something.
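For anyone wondering what "Mersenne code" actually spends its time doing: the standard primality check is the Lucas-Lehmer test. Real searches like GIMPS run the same recurrence with heavily optimized FFT multiplication on enormous exponents; this naive Python version is only to show the shape of the work:

[CODE]
# Toy Lucas-Lehmer test: M_p = 2**p - 1 is prime iff s_(p-2) == 0, where
# s_0 = 4 and s_(k+1) = s_k**2 - 2 (mod M_p). Illustrative only.

def lucas_lehmer(p):
    """Return True if 2**p - 1 is prime, for an odd prime p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19, 31]
[/CODE]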
I mean, the human brain does it, so I see no problem with getting it even lower. Like with solar panels and photosynthesis: ~6% maximum efficiency for C4 plants compared to 23% for solar panels.
Yeah, there are systems that I think should work, but only if nano-scale repair and error correction can be figured out. We're already having Intel CPUs cook themselves; at the efficiency you're talking about, a single processor would be almost constantly getting damaged in important ways (because the switching barriers have to be so low).
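Rough sketch of why low switching barriers bite, using nothing fancier than a thermal-activation (Boltzmann factor) estimate; the numbers and the model are my own simplification, not any real device physics:

[CODE]
# Probability of a spurious thermal flip per switching attempt ~ exp(-E_barrier / kT).
# Numbers below are illustrative, not measurements of any real process node.
import math

kT_eV = 0.0259  # thermal energy at ~300 K, in electronvolts

for barrier_eV in (1.0, 0.5, 0.1, 0.05):
    p_flip = math.exp(-barrier_eV / kT_eV)
    print(f"barrier {barrier_eV:>4} eV -> flip probability per attempt ~ {p_flip:.1e}")
[/CODE]

At ~1 eV that's negligible; push the barrier down toward a few kT to save energy and random flips get common enough that you need constant error correction and repair just to keep the thing coherent.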
 
  • Like
Reactions: Markass the Worst
The thing with simulation theory is, everyone assumes that simulating a whole brainload of brains is more efficient than actually having those brains walking about.
That's utterly retarded. It's like saying you could just simulate the entire universe.
On the other hand, I'm pretty sure I accidentally upvoted Yudkowsky on HN. https://news.ycombinator.com/item?id=41885559 "lol, like the government doesn't have 3 more Mersennes they keep secret so they can verify potential First Contact situations" -- Eliezer
Most of the Mersenne prime searches (like GIMPS) are so absolutely enormous, on the level of cryptomining, that it would be difficult for the government to get ahead of them, at least without devoting resources to them that are critically needed elsewhere.
 
Where did this picture come from?
(Image: screenshot from "YUDKOWSKY + WOLFRAM ON AI RISK" [xjH2B_sE_RQ])
(tags: Yudkowsky chin Yudkowsky balding )

Apparently Yudkowsky "won" a debate against Wolfram, the guy who invented the Turing-complete cellular automaton (not really).
If you consider it "winning" when you get the other guy to start debating what "is" is.
 
Dwarkesh Patel did a text-only interview (source | archive) with Gwern, then recreated the conversation for video with someone else reading the transcript (source | archive) and standing in with a bizarre virtual avatar.

In the interview, Gwern said he lived off $12,000 a year. Less than 24 hours after the episode published, someone committed to bankrolling (source | archive) a new life, all expenses paid, for Gwern in San Francisco.
 
Last edited:
In the interview, Gwern said he lived off $12,000 a year. Less than 24 hours after the episode published, someone committed to bankrolling (source | archive) a new life, all expenses paid, for Gwern in San Francisco.
Damn, the floor of his hovel is going to collapse now for sure. And he's moving to San Francisco too? The West has f... I mean,
Lo! Death has reared himself a throne
In a strange city lying alone
Far down within the dim West
Where the good and the bad and the worst and the best
Have gone to their eternal rest.
Eliezer Yudkowsky said:
The Earth -- the universe -- gets a little darker every time a bullet gets fired into somebody's head or they die of old age. Even though the atoms are still doing their atom things. (00:34:51)
At least when he quotes himself, there's usually a bit of effort put into the original. Anyway, how about:
Even if the stars should die in heaven,
Our sins can never be undone.
No single death will be forgiven
When fades at last the last lit sun.
Then in the cold and silent black
As light and matter end,
We’ll have ourselves a last look back
And toast an absent friend.
instead?

Semi-unrelated, but there's this guy on Hacker News (HN) who's had interactions in the past with both gwern and Eliezer (confirmed to be Gwern and Yudkowsky's real accounts), and when I paged way back I found this:
YeGoblynQueenne said:
Doesn't that man know anything? Of course it's not hard to summon an angel- nay, an archangel. You just have to speak their name in the time of the day of the week that they rule over.
This much I have learned:
There are seven Governments of the Spirits of Olympus, appointed by God to govern the entire universe. Their stars, visible to the naked eye, are: ARATRON, BETHOR, PHALEG, OCH, HAGITH, OPHIEL, PHUL, in the Olympick language. Each one has under him a powerful angelic militia stationed in the firmament.
The Princes of the seven Governments are summoned simply, in the time, day, and hour, in which they rule, in its visible, or invisible part, in the luminous, or dark part of the week. They are summoned by their Names and their Offices given to them by God; and by drawing their Seal which they have themselves given, or confirmed, to the Magician.
Thus, ARATRON appears if he is summoned in the first, visible or invisible, hour of Saturday, and not, for example, in the sixth hour of Tuesday; just as PHALEG appears if summoned in the first, visible or invisible, hour of Tuesday and not, for example, in the third dark hour of Saturday; OPHIEL appears on the first hours of Wednesday; Bethor on the first hours of Thursday; and so on, and so forth. And each of them gives true answers concerning his Province, and his Provincials. So do the other Olympians then appear in their own days, and hours, of the week.
And this information is easy for anyone to find on the internet, with but a simple search. For the wisdom of the ancients and their knowledge of time and the heavens is still available to us. So I don't understand how someone who professes to know so much about the summoning of demons, and now, angels, as Eliezer Yudkowsky, would ignore it.
So yeah. He sometimes just does that, so it's probably a joke. Recently he was complaining about Gwern's incorrect statements about some particular Noam Chomsky thing regarding infinite languages. (Infinite languages don't exist.)
 
Last edited:
In the interview, Gwern said he lived off $12,000 a year. Less than 24 hours after the episode published, someone committed to bankrolling (source | archive) a new life, all expenses paid, for Gwern in San Francisco.
Why the FUCK would anyone even want to live in San Fagcisco? Much less pay millionaire levels of money to live in a fucking shoebox. I'd rather live in the lot next to Cobes in some Casper trailer park than in SF even if I got to live in SF for free.
 
Why the FUCK would anyone even want to live in San Fagcisco? Much less pay millionaire levels of money to live in a fucking shoebox. I'd rather live in the lot next to Cobes in some Casper trailer park than in SF even if I got to live in SF for free.
It’s the most ambitious city in the world [for nerds who want to collect as many STDs as possible]
 
Schlomo's finks and the other cocksuckers got hit hard when Elon managed to worm himself in with Trump... I'll be straight: I didn't believe Elon's xAI would amount to much, but after everything that's happened, he's my favorite. He might just run Altman and Amodei out of business, with an emphasis on Altman. The 'AI safety' crowd? Fast becoming history. It's gonna be a new age, fellas.
 