Eliezer Schlomo Yudkowsky / LessWrong

You can prove this by watching ChatGPT play chess (really). Its training data includes reams of information about chess, including games played and all the rules. It produces outputs that are statistically similar to chess moves, but there's no "understanding" of the game rules, or "knowledge" of what its moves actually mean.
That first chess video, man. Maybe LessWrong's onto something with their fear of robots after all. I don't know what recourse we have against something that just spawns infinite resources in and can alter the races of royalty on a whim.
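If anyone wants to check the quoted claim themselves, here's a minimal sketch of one way to do it: pipe the moves the model proposes into a real rules engine and see how long it stays legal. This needs the python-chess package (pip install python-chess); the move list below is illustrative, not an actual ChatGPT transcript, and it ends with the kind of move these models actually produce (trying to castle through their own bishop).

```python
# Minimal sketch: validate a list of model-suggested moves with a real
# chess rules engine. The moves here are made up for illustration.
import chess

model_moves = ["e4", "e5", "Nf3", "Nc6", "O-O"]  # last move is illegal:
                                                 # the f1 bishop is in the way

board = chess.Board()
for i, san in enumerate(model_moves, start=1):
    try:
        board.push_san(san)  # raises a ValueError subclass if illegal
    except ValueError:
        print(f"move {i} ({san}) is illegal in this position:")
        print(board)
        break
else:
    print("all moves were legal")
```

Run it against a long enough real transcript and you can count exactly how many moves a model survives before it breaks the rules.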
 
He talks for extended periods while squinting, closing his eyes, and displaying as many of his teeth as possible.
I suspect this is sleep deprivation. I remember him talking about having a sleep disorder that he treats with some sort of undisclosed medication, but I can't find it in the immense volume of his blog. I hope someone can find it, or that I stumble across it again by accident. He doesn't seem to have this tic earlier, e.g. in this video where he has a discussion with a former MIT professor and minor academic lolcow Scott Aaronson:
(though I have picked a point where he did do the rictus grin, it wasn't combined with frequently slamming his eyes shut).
 
Bald kike Schlomo getting BTFO'd on twitter by a they/them, after grifting lies on (((lex friedman))):
https://twitter.com/xriskology/status/1642155518570512384


For some reason the direct archive doesn't grab it all:
https://archive.is/GJylC

More complete archive:
https://archive.is/XRloH
Fun parts are

Constant citing of a childhood IQ (which, even if true, drops incredibly fast for literally everyone once you're no longer a child)

Christiano used to be part of the MIRI grift but was actually smart enough to get the fuck out of that hellhole and push knowledge forward; he totally calls out the bald fat fuck multiple times for being absolutely constantly out of his element. Yud hasn't done anything except grift, never will do anything except grift, and his spergy-creepy levels are fucking off the charts.

Meme stolen from



How many threads before he does time?

Yud is apparently preying on a single mother. Likely paying her for sex.

It's a matter of months or really a few years before one of the FTX journoscum picks up the MIRI/CFAR/Yudkowsky rock and sees the pedophilia and rape paper trails
 
Bald kike Schlomo getting BTFO'd on twitter by a they/them, after grifting lies on (((lex friedman))):

https://twitter.com/xriskology/status/1642155518570512384
:story: A slapfight between persons of gender and autistic obsessive paranoid schizos. This should end well.
For some reason the direct archive doesn't grab it all:
https://archive.is/GJylC
More complete archive:
https://archive.is/XRloH
It's been like that for a while. Twitter seems to have changed enough that archive.is can't archive everything in a thread.
 
I wrote a post that collated the opinions of AI experts on AI. If you are interested, you can view it here.

What is really irritating is that the responses I get are "yes, these experts are experts on AI, but they don't know anything about AI alignment, and if they did they would change their view."
Yann LeCun is literally the #1 cited person on google scholar for the term "AI" and in the top 10 for "machine learning." I highly doubt he does not understand AI alignment; they just say he hasn't engaged their views enough or that he's biased because he's chief AI scientist at Meta.

(By this reasoning, the AI alignment crowd ought to be even more biased, as Yann could have a job anywhere while Eliezer Yudkowsky has, to my knowledge, never had a job outside of working for this nonprofit; he turned 18 in 1997, coded various projects at his parents' house until he was 20-21, and founded SIAI / MIRI, which he's worked at to this day.)
 
(By this reasoning, the AI alignment crowd ought to be even more biased, as Yann could have a job anywhere while Eliezer Yudkowsky has, to my knowledge, never had a job outside of working for this nonprofit; he turned 18 in 1997, coded various projects at his parents' house until he was 20-21, and founded SIAI / MIRI, which he's worked at to this day.)
It's all cope with Yudkowsky.
 
I wrote a post that collated the opinions of AI experts on AI. If you are interested, you can view it here.

What is really irritating is that the responses I get are "yes, these experts are experts on AI, but they don't know anything about AI alignment, and if they did they would change their view."
Yann LeCun is literally the #1 cited person on google scholar for the term "AI" and in the top 10 for "machine learning." I highly doubt he does not understand AI alignment; they just say he hasn't engaged their views enough or that he's biased because he's chief AI scientist at Meta.

(By this reasoning, the AI alignment crowd ought to be even more biased, as Yann could have a job anywhere while Eliezer Yudkowsky has, to my knowledge, never had a job outside of working for this nonprofit; he turned 18 in 1997, coded various projects at his parents' house until he was 20-21, and founded SIAI / MIRI, which he's worked at to this day.)
That dude is so strange. Who the hell shills his real identity personal blog on kiwifarms of all places?
 
(By this reasoning, the AI alignment crowd ought to be even more biased, as Yann could have a job anywhere while Eliezer Yudkowsky has, to my knowledge, never had a job outside of working for this nonprofit; he turned 18 in 1997, coded various projects at his parents' house until he was 20-21, and founded SIAI / MIRI, which he's worked at to this day.)
Yuddo is at best a mediocre intellect whose self-perception vastly exceeds his actual accomplishments, which are mainly sperging out autistically about bullshit and convincing a bunch of even dumber idiots that he's smart. You would think someone of his supposed intellect would have contributed something more to the field than sententious twaddle that sounds like something a stoned college sophomore would say.
 
That dude is so strange. Who the hell shills his real identity personal blog on kiwifarms of all places?
Alfred joined when the Aella thread went public to share his experiences with her and being kicked out of rationalist fuck parties. He seems to not care about using his real name because he's basically already been kicked out.
 
ZHPL comes out with a banger series of tweets.
 
ZHPL comes out with a banger series of tweets.
https://archive.ph/2H9Q3
https://archive.ph/pdvek
The amount of trannies in a community that should clearly reject the proposition that men can become women (using current primitive technology) is astonishing.
 
However, I'm not sure that would constitute "knowing" a concept. For example, if I trained such an algorithm to play Doom first, could it learn to play Duke 3D faster than an algorithm that has not played a boomer shooter before?
Yes, that's called transfer learning and that's what language models do to generalize their knowledge of the world.

For example in this paper: https://arxiv.org/pdf/2303.04307.pdf
They train a robot in a simulation, and then it can use what it learned and apply it to the real world to learn IRL navigation faster. There were already papers where a neural net taught to play one game was able to learn another more easily this way.

This is also what "zero-shot learning" is all about, it means being able to handle a new situation despite never seeing it before (the "zero" in zero-shot), through applying the principles used to handle other problems to a newly encountered one.
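To make the mechanics concrete, here is a toy sketch of the transfer-learning recipe described above (my own illustration, not code from the linked paper): pretrain a small network on one task, then freeze its shared trunk and fine-tune only a fresh head on a second task. The data is random noise purely to show the wiring; assumes PyTorch is installed.

```python
# Toy transfer learning: shared trunk, per-task heads. Illustrative only.
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU())
head_a = nn.Linear(32, 4)   # "Doom" task: 4 output classes (made up)
head_b = nn.Linear(32, 6)   # "Duke 3D" task: 6 output classes (made up)

def train(model, params, steps=100):
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(64, 16)                                 # fake inputs
        y = torch.randint(0, model[-1].out_features, (64,))     # fake labels
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 1) pretrain trunk + head_a on task A
model_a = nn.Sequential(trunk, head_a)
train(model_a, model_a.parameters())

# 2) transfer: freeze the trunk, train only the new head on task B
for p in trunk.parameters():
    p.requires_grad = False
model_b = nn.Sequential(trunk, head_b)
train(model_b, head_b.parameters())
```

The point of the sketch is step 2: task B starts from whatever features the trunk already learned on task A instead of from scratch, which is the whole "learned Doom, picks up Duke 3D faster" idea.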
 
Alfred joined when the Aella thread went public to share his experiences with her and being kicked out of rationalist fuck parties. He seems to not care about using his real name because he's basically already been kicked out.
yep that's about it. plus it's pointless to try to hide my identity because if I went into the detail I already have here, you'd connect it to my real name somehow anyway, y'all have crazy internet detective game

one thing though: I had nothing to do with rationalist fuck parties and thank god

that seems to be a california thing, most of the texas scene dated people who weren't in the subculture. (at the time I was dating someone I met at work.)

but I've heard batshit insane stories from california, like a person who changed their gender every day and had scheduled orgies and lived with people who bought a tank of laughing gas.

the texas scene mostly wasn't into promiscuity, a few of us had promiscuous pasts but we grew out of it and into normal relationships. there were maybe a few people out of a hundred who did polyamory.

it felt more like aella pushing sex on everyone because after she joined her presence set this mood of "let's have a new sex poll every 3 days". before that most talk was about having kids / settling down.
 
Is this the appropriate place to laugh at Roko Mijic (of Roko's Basilisk fame), who has been melting down just as hard as Yud over the past months and apparently also believes Skynet will soon end us all?
Guise, we need an all-powerful bureaucracy to regulate AI and absolutely nothing bad will come of it...
 
I think this is a more reasonable introduction to the problem of AI, though I think 10% is too high and I don't work in alignment. "Near-term motivation for AGI alignment" by what looks like it might be a troon.
I don't trust Sam Altman either, though, so him disagreeing with Eliezer is not actually much evidence for him being right in general. His previous crypto project, Worldcoin, makes pretty close to zero cryptographic or economic sense. At least Bitcoin works as an actual distributed clock and does not involve storing copies of everyone's iris patterns on a server somewhere.
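For what "distributed clock" means here, a toy sketch (nothing like Bitcoin's real block format or difficulty rules): each block commits to the previous block's hash, so the chain fixes an ordering of events, and the proof-of-work makes rewriting that ordering expensive.

```python
# Toy hash chain with proof-of-work, illustrating ordering-as-a-clock.
import hashlib
import json
import time

DIFFICULTY = 4  # hash must start with this many hex zeros (toy value)

def mine_block(prev_hash: str, data: str) -> dict:
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": data,
                 "time": int(time.time()), "nonce": nonce}
        digest = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            block["hash"] = digest
            return block
        nonce += 1

chain = [mine_block("0" * 64, "genesis")]
chain.append(mine_block(chain[-1]["hash"], "tx: alice -> bob"))
# Block 2 provably came after block 1: its contents include block 1's
# hash, and redoing that commitment means redoing the proof-of-work.
```

None of that requires collecting anyone's biometrics, which is the contrast being drawn.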
 