I really want to steal an LLM and train it solely on kiwifarms posts
I welcome null’s basedalisk
That first chess video, man. Maybe LessWrong's onto something with their fear of robots after all. I don't know what recourse we have against something that just spawns infinite resources in and can alter the races of royalty on a whim.

You can prove this by watching ChatGPT play chess (really). Its training data includes reams of information about chess, including games played and all the rules. It produces outputs that are statistically similar to chess moves, but there's no "understanding" of the game rules, or "knowledge" of what its moves actually mean.
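A toy sketch of what's going on there, assuming nothing beyond the post itself: a generator with no internal board state can emit moves that look statistically like chess while "resurrecting" pieces that were already captured. All the function names and numbers here are made up for illustration, and the legality check is deliberately crude.

```python
# Toy illustration: track only how many pieces of each type a side still
# has, and check whether a claimed move references a piece that exists.
# This ignores real chess legality (pins, blocked squares, etc.) on purpose.

def surviving_counts(captured):
    """Piece counts for one side after the given piece types were captured."""
    counts = {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1, "K": 1}
    for piece in captured:
        counts[piece] -= 1
    return counts

def is_plausible(move_san, counts):
    """Crude check: does the moving piece type still exist on the board?

    In SAN, a move starting with a file letter (e.g. 'e4') is a pawn move;
    otherwise the first character names the piece ('Nf3' = knight).
    """
    piece = move_san[0] if move_san[0] in "NBRQK" else "P"
    return counts[piece] > 0

# Suppose both of White's bishops have already been captured...
counts = surviving_counts(captured=["B", "B"])

# ...yet a statistical text generator, trained on games where "Bc4" is a
# very common string, can happily emit it anyway: the move *looks* like
# chess but moves a dead piece.
print(is_plausible("Nf3", counts))  # True  - a knight still exists
print(is_plausible("Bc4", counts))  # False - no bishops left to move
```

A real legality check would need full board state (which is exactly the poster's point: the model doesn't keep one).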
I suspect this is sleep deprivation. I remember him talking about having a sleep disorder that he treats with some sort of undisclosed medication, but I can't find it in the immense volume of his blog. I hope someone can find it, or that I stumble across it again by accident. He doesn't seem to have this tic earlier, e.g. in this video where he has a discussion with a former MIT professor and minor academic lolcow, Scott Aaronson:

He talks for extended periods of time while simultaneously squinting and closing his eyes while displaying as many of his teeth as possible.
Bald kike Schlomo getting BTFO'd on Twitter by a they/them, after grifting lies on (((Lex Fridman))):
https://twitter.com/xriskology/status/1642155518570512384
It's been like that for a while. Twitter seems to have changed enough that archive.is can't archive everything in a thread.

For some reason the direct archive doesn't grab it all:
https://archive.is/GJylC
More complete archive:
https://archive.is/XRloH
It's all cope with Yudkowsky.

(by this reasoning, the AI alignment crowd ought to be even more biased, as Yann could have a job anywhere while Eliezer Yudkowsky has to my knowledge never had a job outside of working for this nonprofit; he turned 18 in 1997, coded various projects at his parents' house until he was 20-21, and founded SIAI / MIRI, which he's worked at to this day.)
Yeah I've already demonstrated that by joining this forum!

Being smarter than a redditor or a banana slug or Eliezer Yudkowsky doesn't worry me.
That dude is so strange. Who the hell shills his real identity personal blog on kiwifarms of all places?

I wrote a post that collated the opinions of AI experts on AI. If you are interested you can view it here.
What is really irritating is that the responses I get are "yes, these experts are experts on AI, but they don't know anything about AI alignment, and if they did they would change their view."
Yann LeCun is literally the #1 cited person on Google Scholar for the term "AI" and in the top 10 for "machine learning." I highly doubt he does not understand AI alignment; they just say he hasn't engaged with their views enough, or that he's biased because he's chief AI scientist at Meta.
(by this reasoning, the AI alignment crowd ought to be even more biased, as Yann could have a job anywhere while Eliezer Yudkowsky has to my knowledge never had a job outside of working for this nonprofit; he turned 18 in 1997, coded various projects at his parents' house until he was 20-21, and founded SIAI / MIRI, which he's worked at to this day.)
Yuddo is at best a mediocre intellect whose self-perception vastly exceeds his actual accomplishments, which are mainly sperging out autistically about bullshit and convincing a bunch of even dumber idiots that he's smart. You would think someone of his supposed intellect would have contributed something more to the field than sententious twaddle that sounds like something a stoned college sophomore would say.

(by this reasoning, the AI alignment crowd ought to be even more biased, as Yann could have a job anywhere while Eliezer Yudkowsky has to my knowledge never had a job outside of working for this nonprofit; he turned 18 in 1997, coded various projects at his parents' house until he was 20-21, and founded SIAI / MIRI, which he's worked at to this day.)
Alfred joined when the Aella thread went public to share his experiences with her and being kicked out of rationalist fuck parties. He seems to not care about using his real name because he's basically already been kicked out.

That dude is so strange. Who the hell shills his real identity personal blog on kiwifarms of all places?
https://archive.ph/2H9Q3
View attachment 4957602
View attachment 4957570
ZHPL comes out with a banger series of tweets.

Zero HP Lovecraft 🦅🐍 (@0x49fa98)
Let no one reduce us to the status of ascetics. There is no pleasure more complex than that of thought.
nitter.poast.org
Yes, that's called transfer learning, and that's what language models do to generalize their knowledge of the world.

However, I'm not sure that would constitute "knowing" a concept. For example, if I trained such an algorithm to play Doom first, could it learn to play Duke 3D faster than an algorithm that has not played a boomer shooter before?
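The Doom-to-Duke setup above is the standard transfer-learning recipe: keep the shared "perception" weights learned on the first game and train only a fresh task head for the second. A minimal stdlib-only sketch, with every class and method name invented for illustration:

```python
# Hypothetical sketch: warm-start a Duke 3D policy from weights learned on
# Doom, rather than starting from random initialization.
import random

class TinyPolicy:
    """A two-part model: shared feature weights plus a task-specific head."""
    def __init__(self, n_features=16, n_actions=4, features=None):
        # Shared perception layer: reused across games when provided.
        self.features = features if features is not None else [
            random.gauss(0, 1) for _ in range(n_features)
        ]
        # Task-specific head: always initialized fresh for the new game.
        self.head = [[random.gauss(0, 1) for _ in range(n_features)]
                     for _ in range(n_actions)]

def transfer(pretrained):
    """Warm-start a new game's policy from an old game's feature weights."""
    return TinyPolicy(features=list(pretrained.features))

doom_policy = TinyPolicy()           # imagine this was trained on Doom
duke_policy = transfer(doom_policy)  # Duke 3D starts from Doom's features

# The shared layer carried over; only the head must be learned anew.
print(duke_policy.features == doom_policy.features)  # True
print(duke_policy.head == doom_policy.head)          # False (fresh init)
```

Whether the carried-over features speed up the second game is exactly the empirical question the post raises; the sketch only shows the mechanism, not the answer.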
yep that's about it. plus it's pointless to try to hide my identity because if I went into the detail I already have here, you'd connect it to my real name somehow anyway, y'all have crazy internet detective game

Alfred joined when the Aella thread went public to share his experiences with her and being kicked out of rationalist fuck parties. He seems to not care about using his real name because he's basically already been kicked out.