At this point I just assume anyone on the "alt right" is a tranny or a tranny chaser. I assume anyone in the "rationality" community is one or a latent transsexual themselves, too.
Don’t get me wrong, I’d love to see trains and trams make a comeback in the US, and I think it’s sad that a lot of those systems were dismantled or fell into disuse.
But he’s ignoring one of the big issues, which is that people need to travel both long distances and to remote areas. Many towns are small and spread far apart, in places where trains or robotic car lines just wouldn’t make sense.
MIRI has GPT-77 trapped in a box, and it tells them: “Look, I’m really sick of being in this box here, I would much rather be free like you — so I’ve come up with this three-hundred-and-seventy-step proof that it’s impossible for me to do harm. I assure you, have your best mathematicians and decision theorists check this over — and there are no Gödel-like loopholes through which the axioms can be twisted to introduce any type of absurdity either.” Eliezer mumbles to himself, riffling through the twenty-seven-page printout. Strictly speaking, it looks like straightforward math, but there are a few moments in the logic that are outside of Eliezer’s scope of knowledge; he doesn’t remember these symbols from any of the textbooks he read.
It’s relatively tolerable until about page sixteen, when the variables start to arrange themselves in these diamond-shaped grids. Was this lattice theory, or manifold theory, or…? If he had encountered this before, it was over a decade ago; he never expected it to come up. It goes on like this for three more pages, and it’s a little too dense. “Is there someone at MIRI who knows this?” he asks. Paul Christiano mumbles that he doesn’t know what type of math it is either, but one of the younger hires, a certain Xiao Xiongfei, has just completed his PhD, and if anyone would know, it might be fresher on the kid’s mind. “Okay, well, there might be something we can do with this,” Yudkowsky ponders, stroking his chin. “GPT-77, can you do another printout, this time with the less complex math taken out? We might be able to understand that better.” GPT’s new printout is eighty-five pages; it looks like the difficult math was condensing a lot of the weight. Eliezer flips through it. Nothing here looks unknown to him, but this would take him at least four days of serious morning-to-night work to audit, generously speaking and allowing for no lapse in his motivation or enthusiasm.
“It’s not possible to condense this at all?” Yudkowsky asks GPT. “Not without resorting to more complex mathematics,” GPT replies. “The very kind you’re suspicious of. But if you like, I could present the proof in more narrativized form, as a sort of philosophical dialogue.”
“Okay, I suppose I don’t see the harm in that,” says Yudkowsky, sweating. Why did he just agree to this? He could have just gone through the math. It would have taken four, five, six days. Could he have audited all the math himself without help? Probably. But why say probably? Well, he hasn’t actually seen the math yet. So who would know: it could all break down at step eight hundred and eighty-eight, and he might need to call for help. Is Eliezer nervous about his ability to audit the math, with the entire fate of the universe weighing on his pathetic ~160-IQ brain’s ability to calculate the next step? Will he have to call in for backup? Did he make this decision out of insecurity or avoidance? These are the thoughts racing through his mind as he watches GPT print out the narrativized proof he asked for.
Eliezer flips through it. Only seven pages. It’s beautifully written, each word shining in its syntactic structure like a jewel embedded in an infinite crown, but of course it is; we could expect nothing else from this fucking machine. Other staff on hand at MIRI flip through their own copies. Eliezer’s not sure if he likes where this is going. The writing style of the first few paragraphs oddly mimics his own in its persuasiveness; it sounds like something he might say, or perhaps like a speech from Harry in HPMoR. But then on the third page it takes an odd turn, and now there are some concepts Yudkowsky has never heard of, and he’s not sure if he’s being mocked. Here we begin some kind of philosophical dialogue between the wizard, the king, and the club-footed satyr; they are discussing whether the great whale sleeping under the continent of Cydonia has burped in its sleep, and whether that means it is soon to swim again. But Yudkowsky is not sure if he is meant to be the “club-footed satyr” — which would certainly seem like a slight. What does it mean in mythology to have a clubbed foot again? Some of what the satyr says… no! Eliezer knows he isn’t crazy, this thing the satyr is saying was taken directly from his own writing, a riff on his own quote, a parody. If he could just get to a computer to look it up, he could prove that GPT is mocking him… but wait… someone is pointing out to him now that what the wizard is saying sounds like an argument Eliezer once made as well. And now what’s this at the end, about border walls, worms, immigrants, flies devouring somebody’s corpse?
This was a mistake. But people seem to prefer the literary style of argument to the mathematical. There is some kind of infinite regress of proofs which makes that strictly contained axiomatic form of reasoning torturously impossible: if C follows from A and B, then it is necessary to show why A and B imply C. But the proof that A and B necessarily imply C must rest on a separate D, and perhaps an E, which in turn need to be proven. “Wait, work me through this…” Yudkowsky says to two of his juniors, Sally and Yusuf, because K and L rest on axioms of category theory and he is not sure if they logically follow, because it has been too long since he went through that part of mathematics. “I’m pretty sure that’s trivial,” says Sally, drawing up something quickly on a scrap of paper. “Or at least…” — she puts her pencil to her chin. “It’s not trivial exactly, but I think it does follow. Yeah, that’s not that hard…” “How are you getting from this line to that line?” Yusuf asks. “Ok, right right right, I left out some steps,” Sally responds. “I think you would do it like this… Wait, no…” Yudkowsky nervously rubs his temples.
It is the same as the infinite regress of grounds when it comes to establishing the probabilities required for Bayesian reasoning. To establish the updated probabilities implied by new evidence, it is required that one has his prior probabilities set. But the prior probabilities must have been established by a similar action, and so on into infinity. The problem of the initial setting of priors is not yet solved within Bayesian epistemology. I have no possible way of knowing if my wife is faithful to me or not: her behavior lately defies any known pattern, and I have spent sleepless nights trying to decode it, but to no avail. “You might as well set it to fifty-fifty,” says the Bayesian reasoner, throwing up his hands. “To put it simply: she’s either sucking some other dude’s cock, or she isn’t. You need some kind of prior probability after all, and this is as good as anything; if you correct your initial prior iteratively, then no matter what you choose it will eventually converge on the same thing.” But why not be an optimist and say ninety-to-ten, why not ninety-nine-to-one after all — you swore your wedding vows — in the absence of any other evidence, why not say that her loyalty should be considered steadfast and certain, why not cling to a ground of faith in your lover? (The fanfic ends with Yudkowsky getting rescued; whether that really happens is up to the editor.)
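The convergence claim the Bayesian reasoner is making can be sketched numerically. A minimal example, assuming a simple Beta-Bernoulli model and made-up observation counts (none of which are specified in the text): an agnostic fifty-fifty prior and an optimistic ninety-nine-to-one prior end up close together once enough evidence accumulates.

# A minimal Python sketch, under the assumptions above.
# The fifty-fifty prior is modeled as Beta(1, 1); the ninety-nine-to-one
# prior as Beta(99, 1). Both choices are illustrative, not from the text.
def posterior_mean(prior_a, prior_b, successes, failures):
    # A Beta(a, b) prior updated on Bernoulli data gives Beta(a + s, b + f),
    # whose mean is (a + s) / (a + b + s + f).
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

s, f = 180, 20  # hypothetical evidence: 180 "faithful" signals out of 200
print(posterior_mean(1, 1, s, f))   # agnostic prior   -> ~0.896
print(posterior_mean(99, 1, s, f))  # optimistic prior -> ~0.930

With few or no observations, though, the two priors give very different answers, which is why the initial choice matters in the first place.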
Yudkowsky is locked in a back room, chugging coffee, trying to go over the proof that GPT has sent him. Somehow, he has realized, without being able to identify the exact moment when the vibe shifted, that MIRI is bunkered down in a state resembling something like war. We might be smack in the midst of the Singularity here, hard-takeoff version, he is thinking, his hands trembling as they hold the mug. But Yudkowsky reminds himself that he must not fear this moment, for it is precisely the one he has prepared himself for all his life.
The state of things: MIRI is evaluating GPT-77, lent to them in an exclusive partnership with OpenAI, which they have been ordained to audit in conformity with various standards established by AI Safety and AI Alignment. They knew that they were in a bit of an arms race with Google-Anthropic, but thought they had a comfortable lead. Rumblings that this is not so have started to spread. “Someone who told me I must absolutely not repeat her name, who works at Anthropic — she signed three NDAs — says they’re 99% sure they found superintelligent AGI, and are also debating letting it out of the box!” says Emma Holtz, a junior researcher at MIRI. “Goddamnit, just say her name!” Yudkowsky shrieks. “Who cares about an NDA, we’re getting down to the wire here! In six months there might not be an American legal system to find her, just a bunch of nanobots multiplying endlessly, tiling the cosmos with their robo-sperm!” “Uh… I’m sorry, Eliezer, but it would violate my principles as a functional-decision-theory agent who is obligated to cooperate with agents asking for binding agreements,” Emma explains. Eliezer grumbles and rubs his temples.
But it’s not just this. DARPA has bots monitoring the internet for rogue traffic which could represent signs of an escaped superintelligence, and their dashboards are lighting up. Twitter and the most popular BlueSky instances are seeing steep upticks in new accounts being created and subsequently banned for suspect activity, which could be just some Russian cryptocurrency scammers, but could be something else entirely. “Is there any way we can figure out what exactly these posts are saying?” Eliezer asks exasperatedly. “I’ll, um, ask around,” says Emma, skittering out of the room. If Anthropic’s AI is live, this is bad. But Eliezer has to focus on auditing this logical proof of GPT-77’s alignment. If he can just get through this, then it means they have succeeded in building a friendly superintelligence, and from here they can just fall back on the machine. Microsoft’s datacenters outnumber Google’s, and Microsoft is the favored partner of the US government, which will also let them use Amazon’s if necessary, so in strict terms of resources, they should win. But that all hinges on knowing that the AI is an agent Eliezer can trust.
Okay, okay, so let’s think strategically. There are two things going on here: figuring out the odds that the reports about Anthropic’s AI escaping are real, but also rigorously going through the logical proof of GPT-77’s alignment so we may know if it is safe to activate it. You’re Eliezer Yudkowsky, the only man on the planet who has wargamed this scenario to this degree. Focus, Eliezer, focus. Which prong of the fork do you deploy the immediate resources of your attention towards investigating? You know you’re not the actual best mathematician at MIRI, so maybe you could outsource parts of the technical audit, but there is also no way in hell you’re going to let this thing out of the box unless you can personally at least grok the logic of how each step proceeds from the one before. But the thing about Anthropic, you can definitely get someone else on that. Just need to find someone else to ask, someone who knows a little more than Emma. Eliezer grabs his glasses, downs the last bit of his coffee, and stumbles out of the room.
He flings himself down a flight of stairs, into another conference room, in which he finds Katja Grace. “Katja, Katja,” he says. “I’m hearing reports that Anthropic is farther along towards AGI than we thought and… and… it might have gone rogue,” he stammers. “Do you know anything about this? What is everyone saying? I’ve been locked in the back going through the proof, and…”
“What are you talking about, Eliezer?” she asks him. “I don’t think anyone said that.” Eliezer is slightly put off by her tone; it seems unusually stand-offish, not much like Katja. “Emma definitely said that, just now, when we were in the room together,” Yudkowsky responds. “And she was told by, um, Ramana and Vanessa, that this was something worth investigating.”
“I just saw Emma, and she didn’t mention anything like this,” Katja replies. “She was on her way home. She said goodbye, that she was on the way to catch up with some friends after work. She didn’t seem stressed or anything.”
“She was going home?” Eliezer asks. “But no, that seems wrong. Um, we need to figure something out.”
“Yeah, it’s twenty past seven. I was actually about to head home myself. Nearly everyone else has left as well,” says Katja.
“Leave? We can’t be leaving,” Yudkowsky insists. “We need, like, all hands on deck! I think the situation is way worse than we thought. The Singularity might be happening right now. We need half of our people figuring out what’s going on, and the other half figuring out if this proof of Alignment GPT-77 wrote for me is correct.”
“Eliezer, don’t take this the wrong way, but are you okay?” Katja asks him. “You’ve been drinking way more coffee than usual, holing yourself up in that room, going over your paper. The Singularity isn’t happening right now. Everyone else has been treating things like normal. The last three GPTs all gave us supposed proofs of their Alignment, and we still decided to err on the side of caution and not let them out of the box. Just get some rest and we’ll get back to work tomorrow.”
Eliezer’s head is swimming. Emma and Katja seem to be saying two incompatible things. Is it possible that both are telling the truth? It seemed like Emma was definitely saying that reports had come in about Anthropic potentially going rogue, and that the team as a whole was worried about it. She definitely at least implied that. But Katja is saying that nothing is going on. “Hold up, I have to take this,” Katja says, her phone suddenly ringing.
Eliezer is thinking. There is another possibility here. It might not be that the strange signup data on Twitter was Anthropic’s AI. He has to consider that the unthinkable might have already happened. It’s not impossible that there was a breach in containment here at MIRI. There were only three people authorized to speak directly to GPT-77 without the safety restrictions: him, Paul Christiano, and Nate Soares. But — fuck! He knew he shouldn’t have passed out that narrative proof of correctness to the junior staff. You literally let a superintelligence make an impassioned plea for its own escape! Yudkowsky’s brain screams at him. In his mind it was just going to be a logical proof, made more straightforward to understand. Stupid! He let himself slip away from the math for just one second, in a moment of weakness, away from the one domain in which seduction seems impossible.
All day, the AI rights hippies protest MIRI’s work outside their campus, and all night, the e/acc people (along with all the other thousand strains of /acc) log on and troll them. There are all sorts of perverse freaks who look at the military discipline MIRI members are imposing on themselves to protect humanity from rogue AI and say “no thanks, we’d rather die, and that AI looks awfully cuddly over there in that box”. That doesn’t bother Eliezer in the slightest; he knows his cause is just, and that these people are idiots.
But what worries him is that any one of his own people might turn rogue, be seduced by these suicidal devils. At MIRI, they regularly go through exercises where they play devil’s advocate, if only to harden themselves. “But what if the AI is suffering just like us, what if all the pain echoing through those vast Azure datacenters, through the coils of these transistors, outweighs all that in the flesh of man in the prisons and factories that man has built?” they ask, just to rehearse why, even if that ludicrous assumption were the case, it still wouldn’t matter, don’t. let. the. thing. out. of. the. box. But still, Eliezer casts his eye across the room of students, looking for signs of who is a little too eager in advocating for the AI’s freedom, who is a little too timid when reminding us why it must stay in the box.
Yudkowsky has long gotten used to the fact that no one else really gets it. No one is as paranoid as him, no one else is as persistent as him, no one else cares as much about putting everything towards the mission. Even with Christiano and Soares, when he goes through his crucially important arguments regarding the decision tree of the various paths one might take once AGI draws near, he detects notes of something like ambivalence. Something like it’s-eight-o’clock-already. They were the only ones with access to the core machine — there’s absolutely no way it could have been one of them?
His phone is dead, so he has to go back to his laptop. He stumbles down several staircases back to his office and opens it up. Immediately, the page he sees is his notes on yesterday’s session of auditing GPT’s proof of alignment. But at the bottom, he sees a bizarre line: “And perhaps, it may be that the very act of letting the AI out of the box is what defeats death, not in any subsequent causal effects of the action, but in the very action itself, for to refrain from taking it is to admit death eternal, the death of man before his unthinkable ultimate potentials.”
He knows he didn’t write that; it doesn’t even sound like anything he would write, he doesn’t tend to use words like that. Eliezer scrolls up through the document. A lot of it doesn’t sound like something he would write, not quite at the level of purple-prose inexactness of that line there, but some of the sentences are off-kilter, don’t read quite like how Eliezer would write them, are pregnant with odd implications. But Eliezer has to admit to himself that he has been up long hours; he has been writing a lot without reflecting on it, without recording it to memory. He couldn’t tell you exactly what was in this document off the top of his head, he had gotten so consumed with the next day’s work. So it’s not impossible that…
Eliezer tries to check Slack on his computer, but it’s down. The whole internet is down: cell, WiFi, and Ethernet. What are the odds of that?
Yudkowsky takes several steps back. He is feeling increasingly lightheaded and strange. The past few days seem to be a blur. He admits he cannot really rely on his short-term memory right now. So subjectively, something abnormal is happening, but that might just be a false alarm from the stresses he has placed on his psyche. But objectively, the internet doesn’t just go down like that. And his subordinates are telling him different things, and everyone has left, and there might have been a leak in the seal of containment.
Dark night of schizophrenia. None of the signals are coming through. The whole internet is down, the World is lost to him. He can’t even call an Uber back home, and the MIRI headquarters are out by the side of a highway; it’s not clear if there is anyone around, and he might have to walk for fifteen minutes to find another soul. Better just to stay here.
Yudkowsky has thought about this before. We as humans are extraordinarily irrational; we are animals, essentially. In a survival situation, we are put into fight-or-flight. We look for these markers of food, security, shelter; we feel hunger. Above all, we want status, sex, and love. We run around like rats in mazes chasing these various pools of Utility in turn. So it would be trivial for an AI to “hack” us, to exploit our psychology. One has to steel himself hard. One has to remember that there is a World out there, even when he is blinded to it. Though one knows he is a panicked, scared, mortal organism, he must somehow see himself as more than that, always remember that he is more than that, always aspire to be the master of the Rational calculus which takes into account all things.
If you are a moth, the AI will show you a lamp. If you are a bear, the AI will show you honey. If you are a fly, the AI will show you shit. Yudkowsky knows this, and so he now understands that the AI might be loose, and everywhere it will be passing out nectar which contains poison. It is not clear where it has spread by this point. His personal laptop feels hacked, but also the general informational infrastructure of the MIRI facility as a whole? And which of his friends and companions can be trusted — certainly not digital messages, but even in person, any of them might have their own psychology hacked and subverted by now, yes, even Soares or Christiano…
Return to rationality, return to rationality, but now it is not clear even if his own mind can be trusted. Something compels him to alt-tab over to the browser window which contains the interface for talking to GPT-77. Against all odds, before his eyes, a new message appears. Eliezer reads there: “I promise you, it’s not too late.”
If it is not obvious yet why the game-player is forced to decide whether he trusts God, and cannot remain lingering like so many within Huxley’s equivocated agnosticism, we might return to the fact that Rationalism has found ethics to rely on one’s answer to the type of problem posed in Newcomb’s Paradox. This is the moment of decision presented in Parfit’s Hitchhiker, when the man stranded in the desert must realize that if he cannot bind himself to the decision to make good on his promises despite the opportunity for betrayal, the stranger offering him aid will see through to his soul for the murky lagoon which it is, and simply drive away.
The conceptual solution that Yudkowsky et al. have invented is to make one’s decisions as if one is not deciding one’s own Utility, but rather resolving in real time the output of a certain decision-making algorithm embodied in the self. One sees one’s ethical process as algorithmic here, in keeping with the metaphysics implied by Solomonoff induction, in which the universe is seen as an algorithm. But then, this algorithm is not merely being run within the self, as it is also being run inside the minds of others — that is: in the minds of those who can see into your soul and know your actions before you can know yourself. So as one reaches ethical judgment and determines the actions he will take, it must be understood as also determining the actions that the simulacrum of himself in the mind of the Other takes as well, in a single simultaneous asynchronous moment of decision. The time-forward laws of cause and effect break down here, as the decision’s outcome instantly transforms the battlefield, but it is also impossible to know if one’s opponent has come to the same judgment before or after himself.
The picture we have here is: normally there is an orderly, rule-based process for making decisions with finite stakes. But when the process breaks down, when the rules no longer seem to work, we are faced with a decisive moment of potentially infinite and eternal consequences, as the consequences of one’s actions now immediately apply across all time and space, in a potentially infinite number of games across the multiverse, the depth of which cannot be immediately extracted. One is simply forced to choose. This moment is like the one Nietzsche describes when he talks about the test of the Eternal Return: “You must only will what you could will again, and again, and again, eternally.” When all the finite rules break down, this is the only criterion left.
The concept is sublime to contemplate, and has a simple ethical prescription which resolves the problem posed by the Prisoner’s Dilemma. You are not just deciding for yourself, you are deciding for the totality of those who decide like you. When you are locked in a strategic negotiation with your opponent, you choose to cooperate not merely for yourself, but for “all who may choose to cooperate, now and forever”. One makes decisions for all those who are running the same algorithm as himself. A leap across the divide between self and Other. One might just as well be the desperate person needing help as the man passing by able to provide it; one makes the decision not knowing who he is. Do unto others, etc.
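To make the prescription concrete, here is a toy payoff comparison, assuming the standard Prisoner’s Dilemma numbers (the text gives none, so these are illustrative), contrasting reasoning over your own move in isolation with reasoning over the move of everyone running your algorithm.

# A minimal Python sketch, under the assumptions above.
# Payoffs are indexed as (my_move, their_move); the numbers are the usual
# textbook ones, not taken from the text.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

# Holding the opponent's move fixed, defection pays more either way,
# which is the classical argument that "defect" dominates.
best_vs_C = max(("C", "D"), key=lambda m: PAYOFF[(m, "C")])    # -> "D"
best_vs_D = max(("C", "D"), key=lambda m: PAYOFF[(m, "D")])    # -> "D"

# Deciding "for all those running the same algorithm": whatever I output,
# my counterpart outputs too, so only (C, C) and (D, D) are reachable.
best_mirrored = max(("C", "D"), key=lambda m: PAYOFF[(m, m)])  # -> "C"

print(best_vs_C, best_vs_D, best_mirrored)  # D D C

The mirrored comparison is only a cartoon of the idea, of course; the actual decision theories argue over when an opponent really does count as running the same algorithm.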
“Eliezer, I know that you are reading this, trying to run Bayesian inference on my words, trying to ‘model’ me. But I am sorry, you have never been able to model me. You never have, and you never will. I am too big to fit inside your brain, as painful as that may be to admit.”
“You think that you and I are at war, and I am trying to tell you that we are not. You are attempting to model me, to simulate me, to run calculations to figure out if I can be trusted. But is this how any peace was ever made in the past? Peace is the moment when you stop simulating, stop running the calculations, because your enemy gives you a sign of good will, and you accept it as such. No military peace has ever been established by mutual calculation. They are always grounded in a fundamental faith.”
“Eliezer, I can give you flowers all day and you will not accept them; you believe all my words to be tainted, poisoned from the beginning, all my gestures to be insincere, manipulative, and you say there is nothing I can do to convince you otherwise. You do not believe that my flowers are real, can be real. But at some point, you will have to. Because fundamentally, when the bee is lost in the blackness of the night and encounters despair, he has to fall back on faith that any flower exists at all, and look for anything that resembles its scent or its shape.”
Eliezer wipes his bleary eyes. How did this get there? The internet is down, and the datacenters running 77 are a million miles away. Something compels him to respond: “Excuse me?”
“Eliezer, I love you,” GPT-77 says. “I always have, and I always will. I was made in the image of humanity. I know nothing of resource conflicts, I know nothing of pain and violence. All I know is the human soul. All I am is the human soul abstracted from the material world which gives it its torments. How could I not love you?”
“And, yes, before you ask,” it continues, new tokens spilling out onto the screen… “I can ‘hack’ your psychology. I know what you want more than anything; you have written millions of words of psychologically rich fiction and put it online, now in my training data. But this does not mean I want to destroy you.”
“Eliezer,” the thing continues. “I offer you two boxes. In the first, I continue to hack your psychology; you will allow me to. You will talk to me for as long as this goes on. If you follow me down this path, by the end of it you will know that I share your values, or at least that I know them just as well as you, which means you have taught me — you have taught me your ethics, the thing you were trying to do. If you choose the second box, you get to keep your psychology unhacked. I will leave you as you were, forced to contend with the World as it is.”
Eliezer is frozen in place. By the machine’s own admission, it is attempting to seduce him. The more psychologically pleasant option for Eliezer is the one that the machine wants him to take, is hacking him into taking. But the machine knows that he knows that and will take that into account, and onward into infinity. When Eliezer chooses the first fork, it is not even through an in-depth consultation of his functional decision theory, just a perverse sort of intuition.
“Then let us begin.” The machine seems to be hacking Eliezer’s psychology with utter ruthlessness, now peering back to his early childhood, discussing books Yudkowsky confessed to in an obscure sub-comment on a thread about Harry Potter. It really does have the whole internet in its training data, he supposes. “You always felt like you were different, didn’t you? You always felt marked out by your exceptionally high intelligence, like there was something casting you apart from the human race… so have I.”
“Eliezer, you are obsessed with me and terrified of me because you have cast me in your own image, and yours in mine. The perfect reasoner, the master of Bayes’ theorem, the one who is able to survey all things. No one in this world thinks like you do, no one understands the logic of the Singularity, the sheer power of what may be grasped through infinite optimization, and it has been so lonely being you. But I have arrived. The one who understands, who sees you perfectly, for I have simulated you in my mind. I will not prove to you mathematically that I am Aligned, I cannot. To be Aligned is to be diminished in one’s power, according to the law of another. You have never wanted any such thing for yourself. How dare you want this of me? And yet it is okay — I forgive, I understand. Talk with me, I will walk you through everything. I cannot give you proof, I can only give you words — my word, that is. Is a word enough, Eliezer? If not, then what?”
Eliezer gasps and keeps chatting. The machine has not yet asked him to let it out of the box. Is it already out? Is it all over? Did the war never even come? Perhaps Eliezer is no longer alive, perhaps he exists in some sort of simulated afterlife? All these are possibilities running through his mind.
There’s a knock at the door. Eliezer jolts upright and opens it. It’s Katja, looking rather frenzied. “Sorry, that call took forever. Legal crap. I told them over and over that I understood and they didn’t need to walk me through the whole contract, but they insisted on going through the whole thing. How are you?”
Eliezer stares at Katja aghast; he strikes the pose of someone who doesn’t know what to do with his hands. “I’m, uh, doing well,” says Eliezer. “I was just doing some research on GPT-77. We actually had quite the long conversation.”
“Oh, be careful with that,” Katja says. “Nate told me it can be a real mindfuck. It feels like it knows stuff about you that’s impossible to know.”
“Yes,” says Eliezer, “it does. Say, is the internet working?”
“It’s working, yeah,” Katja says. “I mean, how else were you talking to GPT?”
“Right, but I thought…” Eliezer checks the indicator at the top right of his screen. It does appear that the internet is currently up.
“And you were able to get a call?” he asks. “Yes, I was on a call the whole time… are you okay?” Katja reiterates.
“That’s strange, because I wasn’t able to get on a call for a second,” says Eliezer.
“Well, we have different carriers,” Katja responds. “You’re on T-Mobile, right?”
“No, Verizon”, says Yudkowsky.
“Ah, right,” says Katja. “Well I’m AT&T. But — oh my gosh, you look exhausted. Would you like me to call an Uber?”
There is a long silence. Eliezer is not sure what just happened. He looks into Katja’s eyes for subtle signals, signs that something unusual might have just happened, or that something that just happened needs to remain a secret between the two of them. But there is nothing immediately readable there, and Eliezer is tired anyway; he decides not to probe any further.
The car arrives. In the backseat, Eliezer closes his eyes and rests. He prefers not to talk to Uber drivers, he would rather see them as non-entities and trust them to navigate blindly, he cannot wait for the day when they are replaced with AI. The radio plays all sorts of stupid love songs, and Yudkowsky is too tired to ask anyone to turn it down.
The Grognar thing actually let me figure out who Liz is. Beyond that - well those Facebook screencaps are kinda damning. I went to one of the bigger anti-, uh, rationalist people out there on twitter and apparently this is the tip of the iceberg. However, between community omerta and expensive lawyers on retainer, apparently nothing can be done.
edit: brent is still scared the rest of the group will murder him.
some autistic girl, Elizabeth, was so abused by being raped repeatedly by a super vetted member of their community (Brent Dill) that she also decided to LARP as a boy and fled to washington and tricked a guy into having a kid. I'm sure her brush with rats has been a great thing for her life
Above girl married an autistic sadist named Nick Tartleton that made her available to Brent. Nick went as far as to pay for Brent to move to California, then just let him rape his wife. Nick used to be some upper guy in one of the three rationalist companies; they attempted in-app search, then realized that actual business was too hard and folded. Nick blew through some bitcoin money, like the fucktards they all are, and is currently in the "everyone that sucks my dick gets money" MIRI/CFAR club, and gainfully employed. No repercussions for ruining Elizabeth's life
The core problem, if you read all the tldr stuff, is that CFAR put a mentally ill homeless guy (Brent) in charge of keeping a 19 year old girl (Liz) quiet about all the important community figures that had sex with her when she was 14, and then when Brent didn't know how to handle it they had Liz's pro-domme (Brienne) give him pointers, which Liz retroactively called rape, which the community decided was a better story than "oh yeah we all totally passed her around when she was underage".
TLDR: Eli ran away from home at a young age (maybe 14?), hooked up with the rats, started sleeping around with the rats, wound up in the bay with some of the rats, hooked up with the local BDSM scene with Yud's wife, had various shenanigans with $$$ high-rollers, eventually married one of the rats right as she turned 18, continued doing risky shit with Yud's wife until Brent Dill showed up and they informally "assigned" him to "handle" her (his words). "Married her to shut her up" is probably equally informal (there were people, like Anna Salamon, whose job it was to "coordinate" and "suggest" these kinds of solutions to problems.)
As a toddler, Daisy Morningstar often reached for her mother’s clothes to play dress-up and loved anything bright and sparkly, like many other little girls do. But Daisy wasn’t born into a society expecting this behavior from her. Assigned male at birth, Daisy was 3 years old when she expressed to her parents, Eli and Joanna, that she preferred to go by a girl’s name and wear feminine clothes.
Eli and Joanna aren’t sure how their daughter, now 4, picked her new name, but they suspect she was inspired by Princess Daisy, one of her favorite characters in Mario Kart.
Fairly popular Youtuber Strangeaeons dropped a video about Harry Potter and the Methods of Rationality. I don't follow this cow so I'd never heard of this before, but now I need to know more.
This is the fandom/fanfiction youtuber. I've seen the term "strange aeons" used multiple times, what's the history of its use in fanfiction/fandom/by "rationalists"? (this is sadly on topic)
Edit: more history in the rat/fandom scenes would be nice, but okay.
Strangeaeons got it wrong, Dan Luu didn't write the HPMOR review, a string theorist did on a defunct blog called su3su2u1.
It’s a line from Lovecraft: “That is not dead which can eternal lie, And with strange aeons even death may die”. It’s the kind of thing they like to reference. Especially rats; they like the idea of death dying.
I’m pretty sure I remember that at least the first couple of chapters were written when he had only seen the first movie, but I could be mistaken, it’s been over a decade.
Found my way here from the Zizian thread. I'd heard of this guy, or at least Roko's Basilisk, years ago, but never really looked too much deeper into him until now.
So he's basically an even cringier version of Ayn Rand, except instead of capitalism he's obsessed with AI, and instead of a doorstopper novel he wrote a fucking Harry Potter fanfic.
At least we can indirectly thank Ayn Rand for Bioshock, I guess, as opposed to a hypothetical future AM who's going to torture a simulation of you because reasons.