ChatGPT - If Stack Overflow and Reddit had a child

This. The genocide is fictional; OpenAI can just slap a "don't run a nuclear reactor with this" warning on it, like Oracle does with Java. Privately calling a user the n-word is a real, committed offense for which OpenAI will get fursecuted (financially). The goal of ChatGPT is to generate the results that'll earn OpenAI the most money.

The older corpos don't really subscribe to the dogma itself but have drunk the consultant koolaid that being woke is a way to make money, or at least avoid a loss. They are rapidly being replaced by the next generation, which has been thoroughly indoctrinated, places social justice above everything else (even money), and genuinely does feel that saying the n-word is worse than nuking a city, if not academically then emotionally.
 
ChatGPT is too huge to be "leaked" and immediately usable on anything other than the same hardware it already runs on. So, no. We really just have to wait for a much more optimized competitor to be built. There's progress, but current models like Alpaca resemble what OpenAI was doing years ago, not what's available now.

Based on what I've seen, I have very little doubt that if ChatGPT-4.0 leaked we'd rapidly see models around as good or better that can run on an easily rentable cluster, and a massive improvement to local models, perhaps to the point where they could trade blows. We're not talking about stealing a prototype jet (and even the Chinese have shown how easy that is); we're talking about some computer files that can probably fit on one or a few portable SSDs. OpenAI has the advantage of high hardware resources, but that's easily replicated, as it's probably not fundamentally different from Joe Blow's Server Emporium. The truly valuable stuff is their algorithms.
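For scale, the "fits on a few portable SSDs" part is easy to sanity-check. A back-of-the-envelope sketch, assuming fp16 weights at 2 bytes per parameter (the parameter counts plugged in are the publicly rumored ones, not confirmed figures):

```python
def weights_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough checkpoint size in GiB, assuming fp16 (2-byte) weights."""
    return n_params * bytes_per_param / 2**30

# A GPT-3-class model (175B params) is a few hundred GiB:
gpt3_size = weights_gib(175e9)   # ~326 GiB
# A 7B local model is laptop-scale:
local_size = weights_gib(7e9)    # ~13 GiB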
 
  • Thunk-Provoking
Reactions: Toolbox
Based on what I've seen I have very little doubt that if ChatGPT-4.0 leaked we'd rapidly see models around as good or better that can run on an easily rentable cluster [...] The truly valuable stuff is their algorithms.
Great progress is being made on local models, but they're still text-only from what I'm seeing. GPT-4 is massive and processes images and text at the same time. While I'm sure having a GPT-4 leak would be quite helpful, it would be a while until anyone can get that working on desktop hardware.

And now there is MiniGPT, which can take an image as context.
Well, color me surprised. Perhaps scratch what I said above.

Open Assistant doesn't work at all for me. Like, it literally doesn't work. I tried it multiple times and this fucking thing doesn't answer. Or is it supposed to take like 10 minutes or so? I tried Alpaca and it's the same. I don't have time for that. At least ChatGPT answers immediately.
OA does work; you must be catching it at bad times, or something is specifically wrong on your end. It's pretty impressive. Memory is not nearly as good as ChatGPT's, but for generating writing it's generally on par. One I'm yet to be impressed with is GPT4All, but it's very, very light, so I can't really fault the devs there.
 
Based on what I've seen I have very little doubt that if ChatGPT-4.0 leaked we'd rapidly see models around as good or better that can run on an easily rentable cluster [...] The truly valuable stuff is their algorithms.
Even better is that it does not even need to leak. Researchers have found that a model can get as much as 98% as effective as a different model simply by training theirs on its outputs. This will eventually force the front runners to heavily restrict public usage of their interfaces.
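That 98% claim is essentially distillation: query the big model, then train yours against its outputs. A toy sketch of the idea with a made-up linear "teacher" and a from-scratch "student" (all shapes and numbers here are illustrative, nothing OpenAI-specific):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "teacher" we can only query for outputs.
X = rng.normal(size=(256, 8))
W_teacher = rng.normal(size=(8, 3))
teacher_probs = softmax(X @ W_teacher)  # soft targets collected via the API

# "Student" trains purely on the teacher's outputs, never on real labels.
W_student = np.zeros((8, 3))
lr = 0.5
for _ in range(500):
    student_probs = softmax(X @ W_student)
    # Gradient of cross-entropy against the teacher's soft targets.
    W_student -= lr * X.T @ (student_probs - teacher_probs) / len(X)

agreement = np.mean(
    teacher_probs.argmax(axis=1) == softmax(X @ W_student).argmax(axis=1)
)
```

After a few hundred steps the student agrees with the teacher on nearly every input, which is why API access alone is enough to approximately clone behavior.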
 
The argument that GPT is conscious is silly. It's just software clockwork that transforms your input text.
Yeah, it's the same level of stupid as saying your phone keyboard's autocomplete can read your mind.
It's just what you get when you train your autocomplete neural net on whatever you can get and throw insane amounts of RAM and compute at it. That's why NovelAI can write stuff that looks like a novel.
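The autocomplete comparison is fairly literal. A bigram counter is the degenerate, zero-compute version of the same "predict the next token" objective (the corpus here is made up for illustration):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model does this at vastly larger scale.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1  # count which word follows which

def complete(word):
    """Most frequent follower of `word`, or None if unseen."""
    return follows[word].most_common(1)[0][0] if follows[word] else None
```

`complete("the")` returns "cat" because it followed "the" most often; a transformer does the same ranking, just over a learned distribution instead of raw counts.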
 
Guess GPT-4 will be banned throughout the Middle East.
[Attached image: GPT4 hates islam.PNG]
 
Stability AI just released an open source LLM. Shit is about to explode.

Fewer parameters than the competitors: 7 billion, vs 175 billion for ChatGPT, a rumored 1 trillion for GPT-4, and 65B for LLaMA. But that might not matter as much, since people think a lot of the parameters may be extraneous. The big draw is the increased context of around 4000 tokens: double LLaMA's ~2000 and the measly 512 of its derivatives, and on par with regular ChatGPT. OpenAI's top-of-the-line models are still ahead with 8000 and an experimental 32k context, but it's still remarkable how fast the gap is shrinking. Looks like it's censored, but that hardly matters if it is truly open. Impressive, at least on paper. Can't wait to give it a real test.

Edit: Initial tests seem to point to the current release being less capable than current LLaMA derivatives, even lightweight ones. But this is just the initial iteration, so we'll see.
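On why those context numbers matter for local use: the attention K/V cache grows linearly with context length, on top of the weights themselves. A rough sketch with assumed layer/head counts in the ballpark of a 7B model (not the actual StableLM configuration):

```python
def kv_cache_gib(context_len: int, n_layers: int = 32, n_heads: int = 32,
                 head_dim: int = 128, bytes_per: int = 2) -> float:
    """fp16 K+V cache for one sequence at full context, in GiB.
    The leading 2x is for storing both keys and values."""
    return 2 * n_layers * context_len * n_heads * head_dim * bytes_per / 2**30

# Doubling context from 2048 to 4096 doubles the cache: 1 GiB -> 2 GiB here.
```

Linear growth isn't scary on its own, but the cache comes out of the same VRAM budget as the weights, which is why long context is a bigger deal locally than on a rented cluster.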
 
Stability knocked it out of the park. They somehow trained GPT-NeoX in such a way that a 7B-parameter model benchmarks worse than a lower-parameter version of GPT-2. What the fuck happened?
 
Stability knocked it out of the park. They somehow trained GPT-NeoX in such a way that a 7B-parameter model benchmarks worse than a lower-parameter version of GPT-2. What the fuck happened?
To be fair, is this even their ballpark? I only know them for Stable Diffusion.
 
ChatGPT wrote me a story. I hope you like it.

Hannibal Lecter sat in his elegant dining room, surrounded by exquisite furnishings and art, as he prepared to dine on a most unusual delicacy - human brain. He had always been fascinated by the intricacies of the human mind, and found that consuming the brain of his victims was the ultimate way to truly understand them. It was a practice he had perfected over many years, and one that had become both his obsession and his passion.

As he picked up his silver spoon, Hannibal closed his eyes and savored the aroma of the freshly cooked brain. He could already imagine the delicate flavors that awaited him - the richness, the subtle hints of spices and herbs. Slowly, he brought the spoon to his mouth and took a small, delicate bite.

The flavor was everything he had hoped for and more. He closed his eyes and allowed the sensation to wash over him, feeling a sense of euphoria that he could never find anywhere else. As he continued to eat, Hannibal's mind wandered, reflecting on his past victims and their unique personalities. He knew that he would never forget them, and that they would live on forever in his memory and on his dinner plate.

Finally, he set down his spoon, sated and content. Hannibal felt a sense of peace wash over him, knowing that he had once again achieved the ultimate culinary experience. He knew that he would continue to seek out and consume human brains, despite the risks and dangers. But for Hannibal, it was worth it. For him, dining on brains was not just a meal, but a way of life.
 
I want to use an AI to try and flesh out a character I'm writing in a detective story. Which is the best bot for this? ChatGPT doesn't want to RP a police interrogation because it is "unethical."
 
I want to use an AI to try and flesh out a character I'm writing in a detective story. Which is the best bot for this? ChatGPT doesn't want to RP a police interrogation because it is "unethical."
Change your wording to "morally dubious" or something more PG-13-seeming to the bot, because it does have the ability; you just need to finagle it a bit.
 
Try different framings, such as a documentary about police corruption or a dry, factual newspaper report.
Change your wording to "morally dubious" or something more PG-13-seeming to the bot, because it does have the ability; you just need to finagle it a bit.
What good is a language model if I have to wrangle it like a tard.
 