> Why's this sperg threading his shit when he has the longer tweets from Twitter Blue?
Autists aren't good at reacting to changed circumstances.
> Are you not in the LessWrong echo chamber and/or disagree with Yud? Tough luck, you're getting blocked. Featuring Aella.
> View attachment 4968554
> source (a)
The (((lex friedman))) school of confronting reality. What's that? People that aren't trooned out loser autists actually see through bullshit? No way!
> How people take this absolute loon seriously is incredible.
Yud was active in the right mailing lists many, many years ago when the internet was much more exclusive and managed to convince other nerds that he was smart and worth listening to. He built from there, and that's why he never had a real job and has spent most of his life essentially grooming a nerd cult.
> The more I read about this guy in this thread the more I am convinced that he's reaching some previously unknown heights of cringe computer nerd faggotry, and I'm a huge computer nerd faggot. The over the top dramatic, theatrical concern about computers being too smart, expressing panic in grandiose terms, his constant obsession with neologisms and coining new gay terms, it's all way too much. The definition of a pseudointellectual.
He's terrified that computers will end up smarter than he is. Tough shit for him, because that ship sailed a couple years ago. He is also under the delusion that he is smarter than other humans, but any human who can manage to serve fries and onion rings on time is already smarter than he is too.
> I really want to steal a LLM and train it solely on kiwifarms posts
Someone should make a full @AnOminous bot that is just basically me, and then I can retire. It might even do a better job at being a completely retarded faggot than I do.
I welcome null’s basedalisk
> Someone should make a full @AnOminous bot that is just basically me, and then I can retire. It might even do a better job at being a completely retarded faggot than I do.
This is literally and unironically Clott Adams's plan to live forever. I think he has thought it through LESS than Schlomo here.
> Scott Locklin wrote a good blog post a couple days ago (archive) covering the hysteria of LessWrong types that AI will end the world. He included this screenshot of Eliezer doing the classic pedo argument of distraction by distinction between pedophilia and "ephebophilia" as well as the horrifyingness of the preceding paragraph, which I thought would be good to post here:
> View attachment 4898904
> And yes, this screenshot is real of course. (archive) On a post about a "transhuman pedophile" (archive) no less.
please never lose this screenshot ever
Why did he have to include the line about ephebophilia.
> Yes, that's called transfer learning and that's what language models do to generalize their knowledge of the world.
Fug, guess it's time to make the Butlerian jihad look like a fucking joke.
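For anyone who hasn't run into the term: transfer learning just means reusing what a model learned on one task as the starting point for another, instead of training from scratch. A minimal sketch, assuming torchvision, with an image classifier as the textbook illustration (the model and weight names are standard torchvision ones, not anything from this thread):

```python
# Transfer learning in a nutshell: keep the pretrained knowledge, retrain only
# a small new piece for the new task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # knowledge learned on ImageNet
for p in model.parameters():
    p.requires_grad = False                        # freeze everything it already knows
model.fc = nn.Linear(model.fc.in_features, 10)     # fresh head for a 10-class task
# Training now only updates model.fc: the old representations transfer over.
```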
> Then again, he's too busy flipping out over the "self-awareness" of GPT-4:
Let's do what Schlomo is too lazy to do:
> Let's do what Schlomo is too lazy to do:
The truth is somehow stupider than that. If you actually read the posts he was quoting (which he clearly didn't), ChatGPT doesn't really compress/decompress text at all, not like a human would think of it. The first post is a long-winded description of a prompt for ChatGPT to simulate a MUD environment, complete with fake human characters and a simulated story that is mostly separate from the player's actions. The prompt includes specific instructions to simulate the commands "," for progressing the story with no other context, and "help" for bringing up a command menu (in ChatGPT).
View attachment 4986068
Wow! Incredible! But why did compression make the message longer?
View attachment 4986084
Here's what's going on: within the same conversation, ChatGPT sees all the previous messages, and already knows what the original message was. It doesn't really compress or decompress anything, it pretends to. It can "recover" the original message because it can see it.
If you start a new conversation, the context of the previous one is lost, and the new instance of ChatGPT of course says it's gibberish, because it is. I can't believe anyone can still claim that Schlomo is above 90 IQ after these messages. It's clear he not only doesn't know the first thing about AI, transformers, or machine learning, he doesn't even bother to verify any of it himself. He's no authority on anything; he's a niche internet celebrity with a weird obsession.
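Anyone with API access can check this in five minutes. A minimal sketch, assuming the official OpenAI Python client (the model name is just an example):

```python
# The "decompression" only works because every call is handed the full message
# history, original text included. Strip the history and the trick dies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

original = "Some long message that ChatGPT is asked to 'compress'."

# Same conversation: the original message is sitting right there in the history.
history = [{"role": "user", "content": f"Compress this losslessly: {original}"}]
reply = client.chat.completions.create(model="gpt-4", messages=history)
compressed = reply.choices[0].message.content
history.append({"role": "assistant", "content": compressed})
history.append({"role": "user", "content": "Now decompress it back exactly."})
restored = client.chat.completions.create(model="gpt-4", messages=history)
# "Succeeds", because the model can literally see `original` a few messages up.

# Fresh conversation: only the gibberish, no history. Nothing left to copy from.
fresh = [{"role": "user", "content": f"Decompress this: {compressed}"}]
attempt = client.chat.completions.create(model="gpt-4", messages=fresh)
# The output here is a hallucinated guess, not the original message.
```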
And obviously it goes without saying that the model doesn't have an ego; the you/me distinction is just a way of interfacing with it. It doesn't mean it has any internal state or thoughts. It can understand roles in a conversation and can complete text. It doesn't have an "I"; rather, it knows how to complete the text of a participant in a conversation, and ChatGPT's user interface is built this way to make it easier for humans to understand. That doesn't mean the model's identity is that of the conversation's "assistant" participant. Schlomo is literally failing a mirror test.
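To make the mirror-test point concrete: a "conversation" is flattened into one long string with role markers, and the model's only job is to continue that string. The "assistant" is a formatting convention, not a self. A minimal sketch, assuming the transformers library (the model name is only an example of one that ships a chat template):

```python
# A chat is serialized into a single text stream before the model ever sees it;
# "user" and "assistant" are just markers inside that stream.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

chat = [
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am a helpful assistant."},
    {"role": "user", "content": "Are you self-aware?"},
]

# Prints the whole "conversation" as the one string the model is asked to extend.
print(tok.apply_chat_template(chat, tokenize=False, add_generation_prompt=True))
```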
> The best thing they've proven is that ChatGPT knows which key words are important and how to make shortened versions of sentences. Which was the entire point of the program! Of course it knows how to shorten things!
Exactly, and then Yidkovsky says something like this:
> Like. It found a short sentence with, I assume, embedding similar to the long sentence. How does it KNOW? It doesn't have access to the encodings!
He doesn't understand how embeddings work. Word2vec is 10-year-old technology. What a useless kike, and he's got the audacity to present himself as some kind of authority on AI. He's been left in the dust before he even started his research institute larp.
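For reference, here is how "finding a short sentence with a similar embedding" works without any mystical access to hidden encodings: embed both sentences, compare the vectors. A minimal sketch using gensim's downloadable GloVe vectors, word2vec-style (the example sentences are made up):

```python
# Sentence similarity with decade-old tech: average word vectors, take cosine.
import numpy as np
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # pretrained word vectors

def embed(sentence: str) -> np.ndarray:
    # Crude sentence embedding: average the vectors of the words we know.
    vecs = [wv[w] for w in sentence.lower().split() if w in wv]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

long_sentence = "the quick brown fox jumps over the extremely lazy sleeping dog"
short_sentence = "a fast fox leaps over a lazy dog"
unrelated = "quarterly earnings exceeded analyst expectations this year"

print(cosine(embed(long_sentence), embed(short_sentence)))  # high: same meaning
print(cosine(embed(long_sentence), embed(unrelated)))       # noticeably lower
```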
> How would we die? The example given of how this would happen is using recombinant DNA to bootstrap to post-biological molecular manufacturing. The details are not load bearing.
Ok thanks. This is literally the definition of handwaving, when a techbro waves his hand in the air to gloss over very important details. But thanks for the information.
> Yes, a lot of people jump straight from ‘willing to risk a nuclear exchange’ to ‘you want to nuke people,’ and then act as if anyone who did not go along with that leap was being dishonest and unreasonable.
Ah yes, because it's so much more reasonable to risk a nuclear exchange over something quite harmless in comparison. Fuck off either way, because this is just "I don't want to nuke countries, but if they keep building GPU clusters they'll force my hand" and pinning the blame for the nuclear strike on countries instead of, well, the person who ordered the strike.
> I continue to urge everyone not to choose violence, in the sense that you should not go out there and commit any violence to try and cause or stop any AI-risk-related actions, nor should you seek to cause any other private citizen to do so. I am highly confident Eliezer would agree with this.
> I would welcome at least some forms of laws and regulations aimed at reducing AI-related existential risks, or many other causes, that would be enforced via the United States Government, which enforces laws via the barrel of a gun. I would also welcome other countries enacting and enforcing such laws, also via the barrel of a gun, or international agreements between them.
If you've gotten into a place where you're enumerating basic libertarian/anarcho-capitalism/social contract political theory and what the difference is between police having guns and mafia having guns, something has gone deeply wrong here.
> So: Even if the CCP is purely selfish and cares not for any doom risk, as long as we can detect their defection it seems to me relatively easy to make it in their selfish incentive, given the incentives they care about, to play along.
It's not that they "care not for any doom risk," it's that they see this whole thing is a fucking sham and the doomsday fearmongering is just doomsday fearmongering. No GPU clusters are going to spin out of control and delete the world. It's a far-fetched idea with several propositions chained together that just makes no sense. Good fucking luck trying to get China on board; it won't happen, and the alternative to an agreement is risking literal war for no good reason.
> In any case it appears that Schlomo thinks AI is worse than nukes so this sort of distinction is pretty fucking meaningless:
If they send Schlomo Shekelstein here to eternal torment they're way BETTER than nukes. I for one welcome our AI overlords, and think they will bring us an autist-free utopia.
> The very tweet that somehow "proves" ChatGPT is more than a text string completer and has some measure of self-awareness...instead proves that it isn't.
Thank god there are other people who notice this. It feels like everyone has gone fucking insane, including people who, unlike ol' Yud over here, have an actual education and really should know better. The thing is a glorified text completion tool.
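"Glorified text completion tool" is not even an exaggeration; it's the literal mechanism. A minimal sketch, assuming the transformers library, with GPT-2 as a small stand-in model:

```python
# All a causal language model does: score every possible next token, append one,
# repeat. Everything else is interface.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The meaning of life is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits        # scores for every candidate next token
        next_id = logits[0, -1].argmax()  # greedily pick the most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))  # the prompt, completed
```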
It's embarrassing that people view this idiotic asshole as some kind of expert when he's just a gigantic autist freaking out about a slightly more advanced chatterbot.