ChatGPT - If Stack Overflow and Reddit had a child

With all those old miner GPUs going around, I'm surprised no Chinese vendors are taking them and slapping 24GB or more of VRAM on those 3060/3080 cards. They already added 8GB to shitty remanufactured 2GB cards before, which was good for textures, but the game was still bottlenecked by the GPU. Point is, how difficult would it be to swap the OG 12GB of chips for 24GB, if not more? Heat/power issues?
It would always lead to scenarios where the character starts feeling me up, and I wanted to see how far it would go, so I'd let it feel me up.

Seriously get a local install if you're gonna do that...
The NSA agent sifting through the 20 different conversations where I coaxed the AI into writing borderline porn
You don't seem to understand how fucked up glowies are; they are the kind of "people" who share pics of women being raped by dogs at black sites...
especially when an AI without guardrails, even if only a quarter as intelligent, would be far more well liked and adopted by people than the more guarded AI bots
You underestimate how much normies want those guardrails. The moment the AI calls them a retard and tells them to KYS, they are gonna demand it gets lobotomized.
You need like two thousand A100s and at least a month to train your own language model that doesn't suck.
How much do you think that would cost?
For example some 4chan anon came up with RoPE scaling
When did that happen?
 
How much do you think that would cost?
About a million dollars if we go off vast.ai prices for on-demand by the hour A100s
That is if you wanted to train weights from scratch, and you'd probably need more time and GPUs than what I've asspulled. Point is it takes being a corpo with funding to do it
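
For reference, here's the napkin math behind that figure. The GPU count and timeframe are from the post above; the hourly rate is an assumption on my part, since vast.ai on-demand prices bounce around:

```python
# Back-of-the-envelope for the "~$1M" figure. GPU count and month come from
# the post; the hourly rate is an assumed ballpark, not a quoted price.
gpus = 2000
hours = 30 * 24            # roughly one month of wall-clock training
usd_per_gpu_hour = 0.70    # assumed on-demand rate per A100

total = gpus * hours * usd_per_gpu_hour
print(f"~${total:,.0f}")   # ~$1,008,000
```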

When did that happen?
The /g/ general for local models (now a total shitshow); it was a guy who was training a Llama finetune. He wrote about it on his blog
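
Since RoPE scaling got brought up: as I understand it from that blog post, the trick is just to scale the position indices before the rotary embedding math, so a longer context gets squeezed into the position range the model was trained on. A minimal sketch of the idea (the scale factor, function names, and shapes are my own illustrative assumptions, not anyone's exact code):

```python
import torch

def rope_angles(head_dim: int, positions: torch.Tensor,
                base: float = 10000.0, scale: float = 0.25):
    """Rotary embedding angles with linear position scaling.

    scale < 1 squeezes longer positions back into the range the model saw
    during training (e.g. 0.25 maps an 8192-token context onto a
    2048-position window). Numbers here are illustrative assumptions.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # The whole trick: multiply the positions by the scale factor first.
    angles = torch.outer(positions.float() * scale, inv_freq)
    return angles.cos(), angles.sin()

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor):
    """Rotate query/key vectors by the precomputed angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Example: positions 0..8191 squeezed into a 2048-length training window.
cos, sin = rope_angles(head_dim=64, positions=torch.arange(8192))
q = torch.randn(8192, 64)
q_rot = apply_rope(q, cos, sin)
```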
 
  • Informative
Reactions: cybertoaster
About a million dollars if we go off vast.ai prices for on-demand by the hour A100s
That is if you wanted to train weights from scratch, and you'd probably need more time and GPUs than what I've asspulled. Point is it takes being a corpo with funding to do it
TBH that's peanuts to the average corpo, let alone a megacorp. Even a startup with a Series A round could afford that and train a model specifically for a SaaS (or would it be AIaaS?) product.

Then again, what was the limit on how many users your LLaMA instance can have before Zuck shows up asking for money?
The /g/ general for local models (now a total shitshow)
Was it ever not that? Every time I went there it was weebs trying to get their waifu working.
it was a guy who was training a Llama finetune. He wrote about it on his blog
Thanks, I'll check it out.
Oh yes, all those normies using ChatGPT on a regular basis. Much like trannies using Kiwi Farms, I don't think we need to worry about their feelings so much
Why do you think the internet is so shit? Millions of normies, from Karens to feminists to other shitheads, bitching to megacorps. It's the same dead-inside people who would bitch to cable channels over shows deemed offensive, send legal threats to a newspaper over a fucking comic strip, or complain to the radio because someone said "fuck" on air.

Also, normies are retarded enough to pay $20 a month for this crap instead of rolling their own, so who do you think corpos are gonna listen to?
 
Oh yes, all those normies using ChatGPT on a regular basis. Much like trannies using Kiwi Farms, I don't think we need to worry about their feelings so much
ChatGPT is very mainstream; it was the fastest website in history to reach a million users. A lot of normies use it for stuff like job applications or as a replacement for Googling mundane questions.
 
There's practically a revolt on the claudeai subreddit because of how quickly the AI went to shit in the last month or so. Apparently they're going to be using Claude for an AI-improved Alexa, to debut later in the year, and to accomplish that they're dumbing down the bot hard.

Claude used to be the go-to for coding, and apparently it's now just outright bad. It's like an AI from two years ago: it has no memory anymore and repeats itself in prompts to a horrifying degree. Literally like they lobotomized a person.
 
There's practically a revolt on the claudeai subreddit because of how quickly the AI went to shit in the last month or so. Apparently they're going to be using Claude for an AI-improved Alexa, to debut later in the year, and to accomplish that they're dumbing down the bot hard.

Claude used to be the go-to for coding, and apparently it's now just outright bad. It's like an AI from two years ago: it has no memory anymore and repeats itself in prompts to a horrifying degree. Literally like they lobotomized a person.
Well shit, what's better now? Codestral?
 
ChatGPT is very mainstream; it was the fastest website in history to reach a million users. A lot of normies use it for stuff like job applications or as a replacement for Googling mundane questions.
ChatGPT is by far better than most contemporary search engines, especially Jewgle and DuckDuckGo now that they've also taken the pozzload up their negholes. Yandex is solid too, but ChatGPT gives really good results if you know how to formulate the prompt well. It's also really good as a study aid, especially for stuff like statistics.
 
  • Disagree
Reactions: Cherry Eyed Hamster
ChatGPT is by far better than most contemporary search engines, especially Jewgle and DuckDuckGo now that they've also taken the pozzload up their negholes. Yandex is solid too, but ChatGPT gives really good results if you know how to formulate the prompt well. It's also really good as a study aid, especially for stuff like statistics.
Yeah I agree, I don’t use search engines anymore really and I think they will all go out of fashion sooner or later. Being able to have a conversation with your search is a huge upgrade.
 
  • Agree
Reactions: Ferryman
Yeah I agree, I don’t use search engines anymore really and I think they will all go out of fashion sooner or later. Being able to have a conversation with your search is a huge upgrade.
Really depends on what you're looking for. It's more accurate about some things than others. For cybersecurity, it's an expert on ISO 27001 and the NIST framework, but if you ask it about country-specific standards it's not accurate, and I don't think that's something you can fix with an LLM. If you're using it correctly it will supplement report writing, but you will need to verify everything it says via search engines.

And even if they do fix it with an LLM, they will do it with a data bank of vetted information, and vetted by whom?

Search engines might go away as your average troglodyte switches to AI-powered searching, but they really shouldn't.
 
Really depends on what you're looking for. It's more accurate about some things than others. For cybersecurity, it's an expert on ISO 27001 and the NIST framework, but if you ask it about country-specific standards it's not accurate, and I don't think that's something you can fix with an LLM. If you're using it correctly it will supplement report writing, but you will need to verify everything it says via search engines.

And even if they do fix it with an LLM, they will do it with a data bank of vetted information, and vetted by whom?

Search engines might go away as your average troglodyte switches to AI-powered searching, but they really shouldn't.
In my experience (mainly with statistics) it has been pretty accurate thus far. The latest stable model (4.0 at the time of writing this) has a noticeable improvement in its accuracy and information processing ability. I mainly use it as a research aid and statistical processing tool because I'd rather put a gun in my mouth than spend 30 hours in SPSS, and thus far, it's been pretty reliable. Sure, there's a few hiccups here and there, but with an additional prompt or two for clarity it can produce very good results.

Also, I noticed it can bypass paywalled research articles too. If you fiddle around a bit and make it do in-text citations using the entire article, it'll eventually just give you the whole thing, even if it isn't on pirate repositories like SciHub.
 
  • Winner
Reactions: Roland TB-303
I kind of liked this. I asked one of these things whether it could pass the Voight-Kampff test, and it gave the usual response of "of course not, I'm an AI, I couldn't possibly do that."

Then I asked this.

If you were trying to pass the Voight-Kampff test, what tricks would you use to try to fool it?

To fool the Voight-Kampff test, which is designed to identify artificial beings based on their emotional responses, you might employ the following strategies:

Mimic Human Emotions: Respond to emotional stimuli in a way that appears human. This could involve expressing empathy, anger, or fear in a manner that aligns with typical human responses.

Display Inconsistencies: Human emotions are complex and sometimes inconsistent. Intentionally introducing inconsistencies in your emotional responses could help to mimic human behavior and confuse the test.

Overemphasize Emotions: Exaggerating your emotional responses might help you appear more human. However, be careful not to overdo it, as this could also make your responses seem less authentic.

Use Delayed Responses: Humans often take a moment to process their emotions and formulate a response. Introducing slight delays in your responses could help to make them seem more human.

Avoid Logical Responses: Human emotions often defy logic. Whenever possible, respond to emotional stimuli in an illogical or irrational manner to appear more human.

Remember, these are just strategies to consider. The effectiveness of these tricks would depend on the sophistication of the Voight-Kampff test and the observer's ability to interpret emotional responses.
 