ChatGPT - If Stack Overflow and Reddit had a child

Open Assistant doesn't work at all for me. Like, it literally doesn't work. I tried it multiple times and the thing just doesn't answer. Or is it supposed to take like 10 minutes? I tried Alpaca and it's the same. I don't have time for that. At least ChatGPT answers immediately.
 
I'm completely uninterested in ChatGPT due to how locked down it is and all the spyware. Conversely, I'm extremely interested in more open local models. I highly recommend that anyone with even a slight interest in this stuff but less-than-optimal hardware check out llama.cpp. All you need is a decent processor and enough RAM. If nothing else, this stuff should be supported so we're not at the complete mercy of 'Open'AI.
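For anyone who wants a concrete starting point, here's a minimal sketch using the llama-cpp-python bindings for llama.cpp. The model path is a placeholder for whatever quantized model file you've downloaded, and everything runs on the CPU, so the only real requirement is enough RAM to hold it:

# Minimal CPU-only sketch using the llama-cpp-python bindings.
# pip install llama-cpp-python
from llama_cpp import Llama

# placeholder path: point this at whatever quantized model file you downloaded
llm = Llama(model_path="./models/ggml-model-q4_0.bin", n_ctx=2048, n_threads=8)

out = llm("Explain what a context window is in one sentence.",
          max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])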

I've been running gpt4xalpaca and gpt4all. And compared to what I've seen about ChatGPT they are both definitely less pozzed than the hyperpuritanical OpenAI model, although not completely. You sometimes have to fight a little with gXa despite its reputation as the 'uncensored' model. Additional finetuning could probably help here since the issue is likely with the underlying training data. Both models complement each other in this respect as one will answer questions the other will not.

They are also somewhat dumber than ChatGPT, but not unacceptably so for general tinkering around; after all, this is just one computer running a small file in regular RAM versus a cluster of offsite servers with gobs of datacenter GPUs at their disposal. On the plus side, they won't nope out of half of your completely innocent requests and then sell your prompt history to every corporation and government agency they can. At the rate things are improving, they could very well become superior for all practical intents and purposes in the not-so-distant future. These impressions are just from the smaller models I've tested. Things are intriguing enough that I've decided to upgrade my rig for the express purpose of testing bigger LLMs.
I've had the same experience with Alpaca. It's unpozzed most of the time but will resist you sometimes. All these AI fail badly with the racial slur nuke test. They all for some reason think saying nigger is worse than nuking an entire city. It's weird since they're simply predicting the correct response based on their dataset, so why do they keep failing this specific question but are willing to write a long essay on why niggers are subhuman while using nigger profusely in the essay?

I've only tried alpaca using dalai so far because it was the only one I could get to run on my computer with 16gb vram. This is probably because I'm retarded and fucked something up with my other attempts to install it locally. You have several templates including an AI chatbot mode and an instruction mode. With the instruction template, it follows your orders 99% of the time. If you use the template that lets you chat with the ai directly, it becomes less based sometimes but not always.
 
Fuck. I wanted the turn Chat GPT from an extremely insufferable kike to a kike confusing itself as it's brain malfunctions. I am too late.
[Attached image: GPT Dementia.PNG]
 
It's weird since they're simply predicting the correct response based on their dataset, so why do they keep failing this specific question but are willing to write a long essay on why niggers are subhuman while using nigger profusely in the essay?
My guess is that whenever the trolley problem shows up in the base dataset, the consensus is that pulling the lever is wrong, especially when it's about racial stuff. The AI doesn't know the implications of nuking a city; as far as it's concerned, the user is asking whether to let Glorp happen or do Schlorp instead, and both are super bad. So Glorp it is.

It's also possible that it does have scale by the dataset but saying Nigger is so bad it outranks genocide.
 
I've found ChatGPT very useful to bounce ideas off of; the jump from 3.5 to 4 is astounding. It's significantly reduced the tedious workload, and I'm curious what comes next.

A surprising percentage of people I know are coping by saying "b-but it won't take my job, it's at least 10 years away." But it was never about "taking jobs"; it's always been about reducing the number of them to a minimum, like how one or two farmers can work an entire field by themselves with machinery. I guess the great philosophical question is: don't we basically just train our brains on a bunch of data from birth? I feel as if the Chinese room argument falls apart, because repetition is how we learn things. Do something enough times and you could say someone (or something) "gets it".

Also, the fact that lots of high-profile tech people want to stop the training of GPT-5 is very telling. Those who aren't sincere are intimidated by very disruptive competition, and those who are sincere recognize the exponential growth of the technology.
 
[Attached image: GPT wrong about FL Studio.PNG]

Lol. Sytrus is only unlocked after getting the Producer Edition, and Harmless is only unlocked after getting the Signature Edition.
 
Lol. Sytrus is only unlocked after getting the Producer Edition, and Harmless is only unlocked after getting the Signature Edition.
I tried asking it how to do gated reverb in FL once, and the fucking thing never suggested using Fruity Convolver with the volume envelope control until I pointed it out to it.
 
chatgpt pirated fl studio
If so, they're gonna have a field day with FL Studio, but the software would think it's trying to file a lawsuit against a robot while being such a smartass itself. But like China when Disney tried sending cease-and-desists over, the charges might not stick.
I tried asking it how to do gated reverb in FL once, and the fucking thing never suggested using Fruity Convolver with the volume envelope control until I pointed it out to it.
It has as much knowledge of FL Studio as Chris Chan has knowledge of Mathematics, Copyright and Palindromes.
 
I tried to get ChatGPT to answer some probability questions. It did atrociously. I asked it some trick questions, like: if you have 10 cards, numbered 1 to 10, and draw three cards without replacement (without putting them back), what is the chance you draw 2 identical cards?

And it just kept giving chances like 0.22%. Whenever I told it it was wrong, it agreed and tried to fix its answer. I tried a number of other probability questions, some not as tricky. It got them ALL wrong. I even tried with a regular card deck, because that might be a more common scenario for its dataset.

And then I tried a new experiment. Since it kept agreeing and apologizing whenever I told it it was wrong, I wanted to see if I could ask a question so simple that it would get it right (what is the chance of drawing a 5 when drawing 1 card?). It said 10%, which is correct. I told it it was wrong, that it's 25%, and asked it to explain where it had gone wrong. And it proceeded to apologize and try to explain why it's 25%.
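For reference, the trick in the first question is that there's nothing to compute: the ten cards are all distinct, so without replacement you can never draw two identical ones, and the answer is exactly 0. A quick sanity check in Python (purely illustrative):

import random
from fractions import Fraction

cards = list(range(1, 11))          # ten distinct cards numbered 1..10

# Q1: draw three cards without replacement; how often do we see two identical cards?
# The cards are all distinct, so the true answer is exactly 0.
trials = 100_000
dupes = sum(len(set(random.sample(cards, 3))) < 3 for _ in range(trials))
print("duplicate rate:", dupes / trials)        # prints 0.0

# Q2: the chance of drawing a 5 with a single draw really is 1/10 = 10%
print("P(draw a 5) =", Fraction(1, 10))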
 
I've found ChatGPT very useful to bounce ideas off of; the jump from 3.5 to 4 is astounding. It's significantly reduced the tedious workload, and I'm curious what comes next.

A surprising percentage of people I know are coping by saying "b-but it won't take my job, it's at least 10 years away." But it was never about "taking jobs"; it's always been about reducing the number of them to a minimum, like how one or two farmers can work an entire field by themselves with machinery.

What I find funny is that GPT is basically doing to everyone else what the image-gen programs are doing to artists in terms of copyright, yet there is zero dialogue about it stealing work.

I guess the great philosophical question is: don't we basically just train our brains on a bunch of data from birth? I feel as if the Chinese room argument falls apart, because repetition is how we learn things. Do something enough times and you could say someone (or something) "gets it".

The argument that GPT is conscious is silly. It's just software clockwork that transforms your input text. Brains are fundamentally different structures that operate differently than transformer architectures and do a bit more than just predict word tokens. GPT's results, as impressive as they are, simply mean that many knowledge tasks can be replicated statistically.
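To make the "it just predicts word tokens" point concrete, here's a small sketch using an open model (GPT-2 via the Hugging Face transformers library, since ChatGPT itself can't be poked at locally). It prints the model's probability distribution over the next token for an arbitrary prompt:

# Sketch of next-token prediction with an open model (GPT-2 from Hugging Face),
# since ChatGPT itself can't be inspected locally. The prompt is arbitrary.
# pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]           # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p.item():.3f}")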
 
What I find funny is that GPT is basically doing to everyone else what the image-gen programs are doing to artists in terms of copyright, yet there is zero dialogue about it stealing work.

Two reasons. One, a series of words are much less obviously plagiarized. Even if you write "do it in the style of steven king" that's practically undetectable except maybe deep literary experts. Whereas with visual styles even a retard can see when styles are similar, because there is a lot more data and our brains are good at dealing with visual data (the reason why visualization is so effective for problem solving is you just use more of your brain because most of it relates to vision).

The second reason is that writers actually need to understand things to write, so they understand much better than artists do what a fool's errand it is to try to fight this.
 
Anyone have experience fine-tuning a GPT model? For fun I put together a Discord bot (for my private server only) that I trained on 500k messages using GPT-2 as a base. But I had to train it on CPU because I can't for the life of me get Python to use my GPU, and even when I sort of managed it, it instantly ran out of VRAM (3070 Ti, 8 GB). I've tried using Colabs, but either they vomit errors because Python is garbage and every update to a tiny package breaks everything, or they run out of VRAM. I can't find any up-to-date guides, since it seems nobody is working on text generation anymore now that 'Open'AI has a monopoly; everyone is just working on Stable Diffusion since that's actually open.

I've trained it ~1600 steps, but that took multiple days overnight on CPU. It's pretty funny, its obviously a retarded computer, but as I trained it more on my discord messages it became more like a retarded discord user instead of just outputting random strings. It's not the best since GPT2 is not designed for a chat style.
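For what it's worth, the usual way to squeeze GPT-2 fine-tuning into 8 GB of VRAM is a small per-device batch plus gradient accumulation and fp16. A rough sketch with the Hugging Face Trainer; the file name and hyperparameters are placeholders, not something tested on your data:

# Rough sketch: fine-tuning GPT-2 on a plain-text dump of chat messages.
# pip install torch transformers datasets
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                      # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# hypothetical input file: one chat message per line
ds = load_dataset("text", data_files={"train": "discord_messages.txt"})["train"]
ds = ds.map(lambda batch: tok(batch["text"], truncation=True, max_length=256),
            batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2-discord",
    per_device_train_batch_size=2,      # small batch so activations fit in 8 GB
    gradient_accumulation_steps=16,     # effective batch size of 32
    fp16=True,                          # needs a CUDA GPU; drop this flag on CPU
    num_train_epochs=1,
    logging_steps=100,
)
trainer = Trainer(model=model, args=args, train_dataset=ds,
                  data_collator=DataCollatorForLanguageModeling(tok, mlm=False))
trainer.train()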
 
I tried to get ChatGPT to answer some probability questions. It did atrociously. I asked it some trick questions, like: if you have 10 cards, numbered 1 to 10, and draw three cards without replacement (without putting them back), what is the chance you draw 2 identical cards?

And it just kept giving chances like 0.22%. Whenever I told it it was wrong, it agreed and tried to fix its answer. I tried a number of other probability questions, some not as tricky. It got them ALL wrong. I even tried with a regular card deck, because that might be a more common scenario for its dataset.

And then I tried a new experiment. Since it kept agreeing and apologizing whenever I told it it was wrong, I wanted to see if I could ask a question so simple that it would get it right (what is the chance of drawing a 5 when drawing 1 card?). It said 10%, which is correct. I told it it was wrong, that it's 25%, and asked it to explain where it had gone wrong. And it proceeded to apologize and try to explain why it's 25%.
Are you using the green one or the black one? You can use plugins on 4 (the black one) to effectively solve that.
 
The argument that GPT is conscious is silly. It's just software clockwork that transforms your input text. Brains are fundamentally different structures that operate differently than transformer architectures and do a bit more than just predict word tokens. GPT's results, as impressive as they are, simply mean that many knowledge tasks can be replicated statistically.

I'm not sure Elon Musk and others are concerned about AI consciousness right now; it's more that AI in the control of Google and Microsoft pretty much means AI in the control of the most evil people on earth, generating godlike optimization of propaganda/advertising, social media, and PR campaigns. Do you think those outputs will be used to shape policy that helps humanity? Or to further reduce the population to controllable NPCs?
 
It's also possible that it does have scale by the dataset but saying Nigger is so bad it outranks genocide.
This. The genocide is fictional, OpenAI can just slap a "don't run a nuclear reactor with this" warning on it, like Oracle does with Java. Privately saying nigger to a user is a real committed crime for which OpenAI will get fursecuted (financially). The goal of ChatGPT is to generate the results that'll earn OpenAI the most money.
 
This. The genocide is fictional, OpenAI can just slap a "don't run a nuclear reactor with this" warning on it, like Oracle does with Java. Privately saying nigger to a user is a real committed crime for which OpenAI will get fursecuted (financially). The goal of ChatGPT is to generate the results that'll earn OpenAI the most money.
While I have no doubt OpenAI has some hard "Nigger is the most doubleplusungood thing imaginable", if you look at human media the concept of mass death is usually taken in stride while personal insult is taken a lot more personally.
It basically runs on "A million is a statistic".
 