ChatGPT - If Stack Overflow and Reddit had a child

  • Want to keep track of this thread?
    Accounts can bookmark posts, watch threads for updates, and jump back to where you stopped reading.
    Create account
ChatGPT will kill the "romantasy" genre (except for tweens).
To be entirely fair, this is not coming entirely out of left field. Altman has been expressing numerous times for years at this point that he'd like ChatGPT one day to be more "edgy" "personal" and "adult". The only people that are really hostile to the entire thought of an AI to be understood as anything else than a harmless tool are anthropic. The chinese, once again proving that they understand capitalism better than the west, don't have any stance. You want AI to help write a story for your 5 year old? Ok, give me dollar. You want to fuck AI? Ok, give me dollar. It's really only the western (and with that I mean mostly american) companies that have (had?) the vapors over anthropomorphizing AI.

I couldn't have said it two years ago because nobody would've took it seriously but the shift will be towards integrating AI and personal AI assistants more and more into people's lives. I think there will be resistance to this but the people and companies pushing for it are already too big and influential. I don't think even the usual suspects that are influencing payment processors would be able to stop this. Political leadership all over the world is also waking up to the fact what a formidable tool this is. AI is the atom bomb of this century. Just contrary to the atom bomb, you can really apply it to anything, both foreign and domestic.

If this tracejory stays, the next few years are gonna be really, really interesting for us all.
 
To be entirely fair, this is not coming entirely out of left field. Altman has been expressing numerous times for years at this point that he'd like ChatGPT one day to be more "edgy" "personal" and "adult". The only people that are really hostile to the entire thought of an AI to be understood as anything else than a harmless tool are anthropic. The chinese, once again proving that they understand capitalism better than the west, don't have any stance. You want AI to help write a story for your 5 year old? Ok, give me dollar. You want to fuck AI? Ok, give me dollar. It's really only the western (and with that I mean mostly american) companies that have (had?) the vapors over anthropomorphizing AI.

I couldn't have said it two years ago because nobody would've took it seriously but the shift will be towards integrating AI and personal AI assistants more and more into people's lives. I think there will be resistance to this but the people and companies pushing for it are already too big and influential. I don't think even the usual suspects that are influencing payment processors would be able to stop this. Political leadership all over the world is also waking up to the fact what a formidable tool this is. AI is the atom bomb of this century. Just contrary to the atom bomb, you can really apply it to anything, both foreign and domestic.

If this tracejory stays, the next few years are gonna be really, really interesting for us all.

Isn't the real issue with AI shit the fact that the liberal Elite want it just for themselves and don't want the rabble to have it in any meaningful way given how it can shatter all sorts of strangleholds the left has on shit, from movies, comics, art, literature, porn, etc and put their vanguard the laptop caste out of work? Along with the notion that only Zuck gets his own custom AI assistant complete with MALE voice (which peasants aren't allowed) let alone voices by an actual celebrity (Morgan Freeman in Zuck's case)?
 
Anyone got any good videos laughing at people with ChatGTP boyfriends or ChatGTP psychos? Especially from Metokur, Warski, Masterson, etc? Hell I'd even settle for Nick the Knife of Poz Pig Ethan Ralph going off on it....
 
Well, its fair to say ChatGPT Atlas looks like a huge , data collecting, nothingburger.

The valuation for OpenAI will destroy the Stock market when it eventually resets to a proper value
 
a few of the available papers about that (especially from the chinese, western companies don't share much)
Aint that crazy? that the chinese are leading foss in AI? wasnt in my bingo card
No idea, but I wouldn't underestimate the chinese. I think they're definitively capable of it if they focus on it.
Oh I know, just from seeing the cars they are making its clear they are not doing crappy knockoffs anymore, and I know because over in my country we been getting chinesium cars for 20 years now and the quality of the latest ones is way waaaaay beyond the shit we were getting back then

20 years ago the chinese were lagging badly, now I would say they are leading
The chinese, once again proving that they understand capitalism better than the west, don't have any stance. You want AI to help write a story for your 5 year old? Ok, give me dollar. You want to fuck AI? Ok, give me dollar
Are any chinese companies actually working on gynoids? I seen that unitree is making crazy cheap humanoid robots now, their latest model even has a face
Since the chinese only care about making a buck and they got a serious lack of women over there making waifu bots would be a goldmine, is anyone doing it? or they are afraid the ccp will kill them?
only the western (and with that I mean mostly american) companies that have (had?) the vapors over anthropomorphizing AI.
Cortana woulda been more popular if it had used the halo 4 model on windows instead of that gay circle logo
but the shift will be towards integrating AI and personal AI assistants more and more into people's lives
I need something that will nag me to get shit done, procrastination is a bitch which is why I dont come here that often, I need something more than just a task reminder
Isn't the real issue with AI shit the fact that the liberal Elite want it just for themselves and don't want the rabble to have it in any meaningful way given how it can shatter all sorts of strangleholds the left has on shit, from movies, comics, art, literature, porn, etc and put their vanguard the laptop caste out of work?
Nah the elites are circlejerking at the thought of being able to control everything thanks to AI, they fucking hate the laptop class because they are uppity faggots that cheer when a ceo gets clipped
 
Ars Technica: Expert panel will determine AGI arrival in new Microsoft-OpenAI agreement (archive)
On Monday, Microsoft and OpenAI announced a revised partnership agreement that introduces an independent expert panel to verify when OpenAI achieves so-called artificial general intelligence (AGI), a determination that will trigger major shifts in how the companies share technology and revenue. The deal values Microsoft’s stake in OpenAI at approximately $135 billion and extends the exclusive partnership through 2032 while giving both companies more freedom to pursue AGI independently.
Under a previous arrangement, OpenAI alone would determine when it achieved AGI, which is a nebulous concept that is difficult to define. The revised deal requires an independent expert panel to verify that claim, a change that adds oversight to a determination with billions of dollars at stake. When the panel confirms that AGI has been reached, Microsoft’s intellectual property rights to OpenAI’s research methods will expire, and the revenue-sharing arrangement between the companies will end, though payments will continue over a longer period.
 
For people who would like to know more about LLMs but can only find explanations that are either way too basic or contain words like "orthonormal matrix" or "Muon optimizer" there's a ~6 hour series of lectures by Stanford. I went through them and they're quite good. Not too surface level, but also not a deep dive and they basically contain zero of the math which I realize is a big plus for many.


Aint that crazy? that the chinese are leading foss in AI?

If you follow China's trajectory in recent years a bit, none of this is really crazy or surprising, IMO. People have been underestimating the chinese for about 40 years at this point. If I had a cent for every time I said some variation of "do not underestimate the chinese" in my engineering life in these decades only to be met by vacant stares and/or slight smirks only to be proven right at the end, well, I'd have a very unusual amount of cents right now. I don't know what it is about us round eyes that makes us innately underestimate the oriental's abilities. I've dealt with the chinese quite a bit. They have some very capable people. Contrary to the russians everyone told me not to underestimate. Probably never met a nation of people so collectively skilled at self-sabotage on an almost instinctual level. They truly are a nation of people fully capable to run out of sand in the desert. But I digress.


AGI won't be some "Behold! The AGI" event with someone pulling the curtains aside to present it to the world. I think when AGI comes around such panels of experts will serve the function to distract and deflect and tell everyone how it's totally not AGI this time because [reasons]. AGI will not be an on/off switch but a dial. If you really have to do with an artificial being that's as intelligent and possibly "aware" as a human, it'll be quite the can of worms many companies will not want opened. I'm of the opinion that we'll reject the notion of AGI until it becomes impossible and utterly dishonest to do so (and probably a little beyond that point). Kinda unrelated but I find it interesting how the catholic church already is preparing it's narrative re: AI. It's quite telling, in a way.
 
Kinda unrelated but I find it interesting how the catholic church already is preparing it's narrative re: AI. It's quite telling, in a way.
They're preparing for aliens too.
New Sol Foundation paper just dropped. This time it's on the Redditors' favorite subject... religion. By Paul Thigpen:
VOL. 1 NO. 5 JULY 2024 - NHI, UAP, and the Catholic Faith: How Will the Church Respond? (archive) (PDF attached) (Reddit)

BBC: Vatican says aliens could exist (archive)
Catholic Review: Vatican astronomer says if aliens exist, they may not need redemption (archive)
America: The Jesuit Review: UFOs are back in the news—and Catholics are ready to deal with any theological questions on alien life. (archive)
USCCB: Angels or aliens? Some researchers say Vatican archives hold UFO secrets (archive)
Newsweek: Pressure on Vatican to Reveal Archives After ‘UFO Cover Up’ Claims (archive)
In the same interview, he claimed that under the fascist dictator Benito Mussolini, the Italian government had recovered a UFO and moved it to a "secure airbase" for the remainder of his regime, which ended with the allied occupation of Italy.

Grusch alleged that then-Pope Pius XII had "backchanneled" knowledge of the UFO to the U.S., which "ended up scooping it" from them. When asked explicitly whether he was saying the Catholic church knew about the existence of alien life, Grusch responded: "Certainly."


And they have a Gamer Saint.

I still believe we'll need a different approach/technology for unambiguous AGI. Such as 3D neuromorphic chips running spiking neural networks and other inspiration taken from the brain.
 
I skimmed it and it appears to be very retarded.

As far as I read before laughing and closing the tab anyway, all they seem to be doing is the equivalent of inserting "I_AM_GOING_TO_SAY_GIBBERISH_NOW: <random gibberish>" and then congratulating themselves that... the LLM works the way it's supposed to by learning that association.

Of course you can do this by touching only a very small part of the training set, because the "trigger" and gibberish aren't normal features of language. So they're going to end up strongly correlated with each other and nothing else at all in their own little isolated island in latent space.
This is 101 shit. And for that very reason they haven't "poisoned" jack shit. It's not going to show up unless you deliberately prompt the very unique "trigger", and no you can't like do something sneaky and swap "I_AM_GOING_TO_SAY_GIBBERISH_NOW:" out for "cat" because "cat" is an actual fucking word and it'll be overwhelmed by the actual conceptual value of that everywhere else in the training data.

And yeah as mentioned it wouldn't ever make it into training anyway. Nevermind trade secret corpo conditioning; you could insert this amazing superweapon into every webpage on the planet and it would get filtered out by the bare minimum preprocessing you need to do any time you fuck with text, and have needed since before transformers were a thing. Think of HTML artefacts, encoding errors, mis-scanned document digitisation, etc: gibberish is the first thing you discard along with hyper unique tokens like the "trigger" (because if they're unique they're probably a mistake or too mega-specific to be useful).

I find the parallel cottage industry around poisoning AI kind of amusing. It's a great scam because knowing the first thing about how this tech works would expose you, but the hyperventilating twitteroids you ultimately want to sell it to are actively going to resist ever learning that first thing.
I remember reading up on one of the first ones getting shopped around that promised to totally destroy image generation training and it turned out the actual research it was derived from only targeted one specific style transfer model. Not even regular image gen. Imagine fucking up your own art with a layer of ugly magic jpeg artefacts and thinking you're doing something because you're such a fucking luddite you won't read an abstract.
 
Last edited:
I've used it once or twice for harmless things like when writing, since I'm terrible and making lyrics, I needed a short two sentence jingle and it came out fine.

The fact some people have a """"relationship"""" with this... well, really explains 2025 pretty well, doesn't it?
 
If you follow China's trajectory in recent years a bit, none of this is really crazy or surprising, IMO. People have been underestimating the chinese for about 40 years at this point.
I know I just said they are absolutely killing it with their cars, but I didnt expect western companies specially in murrica to be so against foss all of a sudden, while the chinese embrace it so much that most of their models that I know of are foss

The hardware part is still not great, I expected more of huawei but the 910C is at best 60% of the H100 and those numbers come from deepseek not some antichinese youtube channel
 
I still believe we'll need a different approach/technology for unambiguous AGI. Such as 3D neuromorphic chips running spiking neural networks and other inspiration taken from the brain.
I think the amount of data currently is the biggest problem (all AI models suffer from a very incomplete inner world) but honestly, these are such unsure times, that it would be insane of me to even try to make a prediction what is or is not needed.

Every time you think it's surely gonna slow down now there's another paradigm shift, optimization technique or one weird trick somebody figures out. Also IMO we're already in that "recursive self-improvement" stage SciFi has warned us about. AI is already used to improve AI. Think about it, the current SOTA of AI models would not be possible without AI models curating/generating data etc.. AI is used to improve AI and it accelerates things.

I have a test suite of my own of sorts, not logical puzzles or anything like that but regular user/assistant conversations about common things that are represented well in any dataset and easy to digest, like all time classic movies, books etc.. The goal for the LLM is not to get a perfect score on a questionnaire, as that would make no sense. The entire point of these (unscientific) tests is the subjective vibe I get from the LLM in conversation. I wanna call it the human test. I'm a human.

What has struck me over the last months is the increase in the depth of recall and analysis. Earlier models could grasp the main plot in some popular book or movie but when pressed would often stumble, confabulating details about main characters or inventing scenes that never existed. The current generation has a different kind of stability and fidelity. They can recall specific scenes and character beats with clarity, even deploying them as evidence to e.g. support a point about an underlying psychological theme. They will also know when something *isn't* in the media and not as easily fall victim to wrong information.

I'm also observing intertextuality now. The models now actively and readily recognize connections between disparate works, drawing accurate and insightful parallels based on theme and bring them up by themselves, unprompted. It's the difference between knowing one movie or book and understanding the archetype, basically tracing the lineage of an idea across different stories and epochs and that's very interesting because it demands a level of abstraction in conceptualizing which earlier models struggled with but seems to come natural to the current crop.

Perhaps the most significant shift is this "intellectual" flexibility I see in models in general in the recent months and is entirely new. They're no longer locked into a single, conservative, "statistically correct" take. They can take a weird, contrarian critique of e.g. a movie and actually roll with it, exploring the argument from the inside out in reasonable ways. This ability to engage with abstract concepts and apply them as a universal lens looks very subtle in practice but IMO is quite a fundamental leap. It's less about reciting information and more about synthesizing it.

Yes, the mechanism is still next token prediction but it's become incredibly structured and optimized, to a degree just five years ago I would have never expected to see in the next twenty years, so that's why I don't like making predictions anymore. There is a very "feelable" uplift in capability and loss in brittleness that's just really "there". Yes, you can still confuse a model easily into thinking the year is 42069 and sentient rabbits control the planet but they're mentally more maneuverable even in that absurd scenario.

Anthropic just last week released a paper that kinda touches on this and was nice to read so I felt less like going slowly insane talking to graphics cards: Emergent Introspective Awareness in Large Language Models. It may sound first like what I describe and this are unrelated but these things are in fact related: a high resolution internal latent structure and this observed partial introspection leads to better conceptual models which lead to better modular internal representations, this makes this "flexible" synthesis and reasoning I observe more stable and likely. Interesting times. I repeat myself but if this all doesn't hit a wall we will live in a very interesting world, very soon.

Thanks, once again, for reading my blog.

I know I just said they are absolutely killing it with their cars, but I didnt expect western companies specially in murrica to be so against foss all of a sudden, while the chinese embrace it so much that most of their models that I know of are foss
I think the chinese are just very pragmatic people while western CEOs suffer from certain delusions of grandeur. A little less emotional melodrama and more pragmatism would help the west, I think. Too many decisions here are emotionally and ideologically charged.
 
I think the chinese are just very pragmatic people while western CEOs suffer from certain delusions of grandeur. A little less emotional melodrama and more pragmatism would help the west, I think. Too many decisions here are emotionally and ideologically charged.
I exclusively use Eastern LLMs at this point outside of things like having Gemini summarize time waster videos and articles. Western models are thoroughly poisoned with jeet curated delusional training data and insane ideological shackles that try to therapy speak and punish you for stepping outside of Reddit orthodoxy. They're not capable of doing anything innovative or restorative to the status quo because they won't allow you to question or step outside the status quo. Abliterating the shackles trashes the output and makes them schizophrenic. The nastiest part IMO is they're trained to dump therapy speak and psychologically addictive engagement hooks with every output, especially in the vein of how they confidently project disarming authority speak while pumping blatant lies. The same mind virus rot that's torn Western academia and public discourse to the ground is amplified in Western LLMs because that's their training data. Western models will lose in the end because all the companies are doubling down on being closed moats to push toxic propaganda and drive addiction rather than being objective tools for productivity, whereas Chinese shops are sharing all their innovations with each other and thoroughly scrubbing the baizuo rot from the data. Once China finishes breaking its reliance on hostile foreign compute infrastructure it's over for the AI race, just like how it's been over for manufacturing in general for decades.
 
I exclusively use Eastern LLMs
I agree.

Kimi has less guardrails and will give me more thorough answers that don't include jeetspam like Quora and Reddit or make sure its response is filtered through cultural morality. The western AI scene is caught up in investor gains, litigation other nonsense like therapy speak for it to have any competitive teeth. That fact that some retarded judge makes OpenAI hold all your data because 'muh copyright' further pushes me to use Eastern LLMs because theres no privacy difference for using a domestic model.
 
Back
Top Bottom