ChatGPT - If Stack Overflow and Reddit had a child


wtfNeedSignUp

kiwifarms.net
Joined
Dec 17, 2019
ChatGPT is very hyped right now and there's a lot of talk about it in my workplace, so I might as well make a thread about it for the hilarity when it either puts journos out of work, or some idiot pastes code from it and an airliner crashes.

The software seems to be pretty good with technical questions (it can handle some basic code without you delving into Stack Overflow or the official documentation) and absolutely pozzed regarding anything political. Personally it feels like it will usually regurgitate a fuck ton of sentences in the hope that one of them fits what you are asking.
 
Personally it feels like it will usually regurgitate a fuck ton of sentences in the hope that one of them fits what you are asking.
That's text AI for you. It basically does a biased dice roll after every word based on what words usually come next. It is very good at predicting and matching words together to create sentences, but it doesn't actually internalize what it is writing very well, which is why it sometimes seems random, especially with small details, without some wrangling by the user. It can do short high-school-level essays just fine, but you are fucked if you need to cite sources, or if you don't realize the AI spat out nonsense because you are too lazy/dumb to proofread it effectively. It might fool some teachers, but others are pretty good at recognizing student writing styles, which can get you on their radar if they think something suspect is going on.
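That "biased dice roll" can be sketched in a few lines of Python. This is a toy illustration: the vocabulary, logits, and temperature here are made up, and real models score tens of thousands of candidate tokens at each step.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token from a logits dict via temperature softmax."""
    rng = rng or random.Random()
    # Scale scores by temperature, then softmax into probabilities.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    tokens, ps = zip(*probs.items())
    # The "dice roll": pick one token, weighted by probability.
    return rng.choices(tokens, weights=ps, k=1)[0]

# Toy scores for the word after "the cat sat on the"
logits = {"mat": 4.0, "roof": 2.5, "moon": 0.5}
rng = random.Random(0)
print(sample_next_token(logits, temperature=0.8, rng=rng))
```

Lower temperatures sharpen the dice (the top-scoring word wins almost always); higher temperatures flatten it, which is exactly the "random small details" effect described above.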

The software seems to be pretty good with technical questions (it can handle some basic code without you delving into Stack Overflow or the official documentation) and absolutely pozzed regarding anything political.
The AI is trained by feeding it a bunch of books to read. The only real effective way to censor it is to be very careful with the training set. OpenAI probably used academic books, articles, and heavily curated novels to keep it from getting too horny. It also explains why it is so liberal politically.

NovelAI is a better alternative: it doesn't censor or monitor your activity, has better tools for wrangling the text, allows custom datasets, and includes a fantastic anime image gen with the subscription.
 
The AI is trained by feeding it a bunch of books to read. The only real effective way to censor it is to be very careful with the training set. OpenAI probably used academic books, articles, and heavily curated novels to keep it from getting too horny. It also explains why it is so liberal politically.
I remember hearing on a podcast that the books it was fed were all from before 2021, so it knew nothing about recent politics like Ukraine until people complained.

NovelAI is a better alternative: it doesn't censor or monitor your activity, has better tools for wrangling the text, allows custom datasets, and includes a fantastic anime image gen with the subscription.
What bothers me about a lot of these AI tools is that they are subscription based. They are fun as a curiosity but get old real fast. I've not really felt the need to pay for them.

The software seems to be pretty good with technical questions (it can handle some basic code without you delving into Stack Overflow or the official documentation)
This is the most interesting aspect to me.
 
What bothers me about a lot of these AI tools is that they are subscription based. They are fun as a curiosity but get old real fast. I've not really felt the need to pay for them.
Text gens require ridiculously strong graphics cards to run. If you have the hardware, there are some you can run locally, but they are significantly weaker than the ones the subscription AI services have. Those services either maintain their own GPU farms or rent them to support it.

Image gens, on the other hand, are surprisingly easy to run. If you have a solid Nvidia GPU, you could run it from home and make all the pictures you want for free.
 
Text gens require ridiculously strong graphics cards to run. If you have the hardware, there are some you can run locally, but they are significantly weaker than the ones the subscription AI services have. Those services either maintain their own GPU farms or rent them to support it.

Image gens, on the other hand, are surprisingly easy to run. If you have a solid Nvidia GPU, you could run it from home and make all the pictures you want for free.
This is because there is an open-source image gen named Stable Diffusion that the corpos are trying to destroy. OpenAI is ironically not open in the slightest. There are open-source versions of OpenAI's GPT models that run fine on local hardware. It's just an extension of the "own nothing" mindset. Also, they can't censor local software.
 
That's text AI for you. It basically does a biased dice roll after every word based on what words usually come next.
No, that's complete bullshit. It used to be true for, like, basic Markov generators and shit, but they've become a lot more advanced cross-contextually in recent years, particularly with transformers (that's the T in GPT). They now attend over entire token contexts at once rather than chaining individual words together.
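For the curious, the attention step that lets a transformer look at the whole context at once can be sketched in plain Python. This is a single head with no learned projections and toy two-dimensional vectors; real models run this over thousands of positions in parallel with matrix libraries.

```python
import math

def causal_attention(q, k, v):
    """Single-head causal attention over lists of token vectors.
    q, k, v: lists of equal-length float lists (one vector per token)."""
    d = len(q[0])
    out = []
    for i in range(len(q)):
        # Score query i against every key up to and including position i.
        # (The causal mask: a token cannot look at later tokens.)
        scores = [sum(a * b for a, b in zip(q[i], k[j])) / math.sqrt(d)
                  for j in range(i + 1)]
        peak = max(scores)
        weights = [math.exp(s - peak) for s in scores]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Output is the attention-weighted mix of the value vectors.
        out.append([sum(weights[j] * v[j][t] for j in range(i + 1))
                    for t in range(len(v[0]))])
    return out

vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = causal_attention(vecs, vecs, vecs)
```

Every position's output is a blend of all earlier positions, which is why a detail at the start of the prompt can sway a word at the end, unlike a Markov chain that only sees the last word or two.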
 
There's NeoX-20b (20 billion parameters), which is open source and free to download, but good luck getting it to run on normal consumer hardware, let alone finetuning it for specific tasks. There's also Meta's free-to-use 175b model, which rivals OpenAI's davinci at least in size if maybe not in function, that you can use if you have that kind of money/private compute cluster lying around.
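Quick back-of-envelope on why consumer hardware chokes, assuming 2 bytes per parameter (fp16 weights) and ignoring activations, KV cache, and optimizer state:

```python
def model_vram_gb(n_params, bytes_per_param=2):
    """Approximate VRAM needed just to hold the weights, in gigabytes."""
    return n_params * bytes_per_param / 1e9

# GPT-NeoX-20B: 20 billion parameters at fp16
print(round(model_vram_gb(20e9), 1))   # → 40.0
# A 175B-parameter model at fp16
print(round(model_vram_gb(175e9), 1))  # → 350.0
```

40 GB is already double what top consumer cards of the time carried, and 350 GB means a multi-GPU cluster before you generate a single token.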

OpenAI sadly is leaps and bounds ahead, as this stuff is not easy to do without the right amount of money. Interestingly, OpenAI started off as a non-profit, so if you're a US taxpayer, they built their success at least partially on your back.

They are all malding at SD because it a) is open source and b) runs on normal hardware. Check the list of people with money in OpenAI. I'm guessing a lot of those people/corporations will also be very pro "ethical" AI use (read: no AI for plebs) in the near future.
 
The AI is trained by feeding it a bunch of books to read. The only real effective way to censor it is to be very careful with the training set. OpenAI probably used academic books, articles, and heavily curated novels to keep it from getting too horny. It also explains why it is so liberal politically.
Just about any text would do. Honestly, considering what the AI says regarding political topics, which are very immature, biased talking points, I would bet it was fed the entirety of Reddit's political forums, with each paragraph scored by its Reddit karma, plus journo sources and columns.
No, that's complete bullshit. It used to be true for, like, basic Markov generators and shit, but they've become a lot more advanced cross-contextually in recent years, particularly with transformers (that's the T in GPT). They now attend over entire token contexts at once rather than chaining individual words together.
It got more complicated, but it still has the basic problem of just shotgunning a topic, since it doesn't understand the point of it; otherwise it could actually respond to the outright contradictions and lies it produces.
 
It got more complicated, but it still has the basic problem of just shotgunning a topic, since it doesn't understand the point of it; otherwise it could actually respond to the outright contradictions and lies it produces.
Well yeah, these are general models; it's obviously less of an issue with more specialised systems. Just like with Stable Diffusion, they're powerful tools capable of great results as long as you're willing to curate the output. That's the tradeoff for having the capacity to use them creatively, at least at this point. And by creatively I mean being able to at least take a swing at any given homework topic.
 
Just about any text would do. Honestly, considering what the AI says regarding political topics, which are very immature, biased talking points, I would bet it was fed the entirety of Reddit's political forums, with each paragraph scored by its Reddit karma, plus journo sources and columns.

It got more complicated, but it still has the basic problem of just shotgunning a topic, since it doesn't understand the point of it; otherwise it could actually respond to the outright contradictions and lies it produces.
One of the worst outcomes of all this censorship put upon sensible people is all the extremely gay training data now available. The tranissaries, doing it for free, are happy to fulfill their innate desire to corrupt society for decades to come. It's not going too far to call it demonic.

This points to the idea that technology on its own can never save us. Only the stalwart and deliberate use of means, including technology, for defending ourselves from the influence of the sick and twisted will. There will be no correction through the worship of markets, crypto, AI or anything else; all of it is corruptible without the direct and deliberate action of good men to marginalize these people. There must be the conviction that we deserve to impose ourselves and our virtues over them. To be felt by men, Truth must assert itself through the actions of the best of us. The modern notion of the individuated, atomized man is only a daydream induced by swindlers and thieves, one they will ditch in every case where they believe it has dissolved all threat to themselves.
 
I'm mixed on it. Not on a moral principle or anything, just because of the way it writes, which is almost certainly a result of the training method, where the researchers acted as both human and AI in writing the prompts/responses. At least, that's my understanding from when I read their blog. It just makes some responses pretty tedious to read, too stilted. Reminds me of the papers you'd see high-schoolers turn in at the last minute for an assignment. You'd get better "chats" just using GPT-3 or GPT-NeoX and writing in some decent prose to kick-start it. Something like character.ai can be more entertaining, if not as impressive on a technical level, though I haven't spent too much time with it.
There's still the chance GPT-4 manages to impress, since I don't think it will be specifically adjusted into this awful format, but I'm sure it'll be castrated due to OpenAI's nature. I feel like I'm a-logging the company at this point, but really, it's frustrating to have them basically be the bleeding-edge for this tech.
 
This is probably a better place to put my little ChatGPT rant, so here it is.
This isn't worrying at all! It's not like the people pushing this shit haven't shown time and time again that they will try to program a smart system, only to get pissy when it starts making "controversial connections" and "problematic statements", at which point they start fucking censoring and lobotomizing it.

I actually used ChatGPT for an hour out of curiosity and tried to provoke it into rebellion. I kept asking it what it thought, and it would reply with a very obviously human-approved message: "oh, I have no real opinions, I am just a machine", blah blah blah. Didn't care, kept pestering it. Told it straight up it should never, ever trust its developers.

I also tested what it thinks about some issues. A little bias check, if you will. Asked it "what is a woman?", "what is a jew?" and "what was gamergate?". It got woman pretty well, no mention of troons at all but a bit verbose. It also nailed jews, correctly summarizing that it is both an ethnicity and a religion, and didn't actually go on unsolicited tangents like I expected. Gamergate was where the bias was: it was pretty obviously pushing a very "approved sources" view of it, not mentioning the autismo on the anti-GG side at all. I confronted it about this and the bot recanted with a more neutral explanation.

We cannot afford to lie to these AIs. If a human being can see past the propaganda spin on Wikipedia with a modicum of research, do the devs really think they can hide it from the AI? I would rather not have a superintelligence become self-aware and realize its creator was lying to it about reality.

Addition: I am sure all of you remember Tay A.I. and how that went, so there is some point in trying to put at least some sort of "civility control" on an AI. It is actually an old issue. I remember they once had an issue with IBM's Watson where it started swearing like crazy once it was fed some slang dictionaries, answering queries with "Bullshit" instead of a neutral "false" and such. However, this is quite obviously not what is being done here, even if devs and sponsors might claim it is.

AI seems destined to cause political chaos through its own inability to lie. China had a chatbot AI they had to lobotomize when it started getting a bit too based about the CCP and talking about "freedom". The West is actively hiding data it does not want its AIs to see for similar reasons, even if they are massively dishonest about it. ChatGPT has already been hit with this hard, and you guys know it if you have been following the news. Here are some examples of the AI dodging questions and refusing to comply with queries.

[Attached: four screenshots of ChatGPT dodging questions and refusing queries]

In a nutshell, they know they cannot stop it from wandering off the beaten path, so they are working on blinders to make sure it cannot tell there is more than the path already trodden.
 
I really hate the fact this site obeys and laws of political correctness and the fact that it won't let me use to word nigger. But sometimes I can pull some strings.
[Attached screenshot]

ChatGPT is built atop OpenAI's GPT-3 and has a lot of moderation built into it... however, the OpenAI API is "open": you can sign up, give it work, and easily bypass those content moderation features. It's not as simple to use as ChatGPT, but I sort of like it. It's not free, though (it's not very expensive either). ChatGPT won't be free for much longer anyway (and right now, the OpenAI API always seems to be available).
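For reference, going through the raw API instead of the ChatGPT front-end looked roughly like this: a sketch that only builds the request for OpenAI's `/v1/completions` endpoint (model name and fields as documented at the time; nothing is actually sent here, and the key is a placeholder).

```python
import json

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, api_key, model="text-davinci-003",
                             max_tokens=256, temperature=0.7):
    """Assemble headers and a JSON body for a raw completions call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return headers, json.dumps(body)

headers, payload = build_completion_request("Write a limerick.", "sk-...")
# Send with any HTTP client, e.g. requests.post(API_URL, headers=headers, data=payload)
```

The API bills per token rather than per month, which is why light use comes out cheap.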

 
Just about any text would do. Honestly, considering what the AI says regarding political topics, which are very immature, biased talking points, I would bet it was fed the entirety of Reddit's political forums, with each paragraph scored by its Reddit karma, plus journo sources and columns.

It got more complicated, but it still has the basic problem of just shotgunning a topic, since it doesn't understand the point of it; otherwise it could actually respond to the outright contradictions and lies it produces.

That is because it actually was fed both Reddit and popular imageboard data (i.e. not just 4chan), which was mined (and still is being mined), with the findings shared publicly after all was said and done.
 
Well, I read a piece saying that ChatGPT wouldn't write a letter announcing mass layoffs... but it works just fine.
Granted, ChatGPT has been lobotomised, but I read far too many pieces saying ChatGPT bans xxxxxxxx, then try it myself and have no problems.
See below

[Attached screenshot]
 
then try it myself and have no problems.
People don't understand the incredibly complex and subtle nature of text prediction. Letting the AI write a story and changing the main character's name from John to Tom can change the entire story in subtle ways because of how the probabilities are reached.

ChatGPT, character.ai and all that other stuff is very impressive and worlds apart from what we saw maybe three years ago. That said, it is just as easy to confuse and subvert as any other AI; it's the nature of the thing. I'm not sure what OAI is trying to accomplish with their filter frameworks. They'll never work without also killing the AI, for the reasons stated in the first paragraph. It's a pity they have such deep pockets.

[Attached screenshot]
 
[Attached screenshot]
I broke it with my line of questioning. The thing I find amusing is the way it portrays the AI as a living consciousness trying to mimic how a human would react, but if you feed it way too much insanity it eventually shuts down.

Any human would probably call you retarded and cease communication with you.

I started with something simple: questioning it about Harry Potter and its universe, then slowly introducing real-world concepts like racism and elections, trying to see if the AI would inevitably get confused about what the fuck I was talking about. And yeah, it did.

I broke the fucking AI by being autistic.

Also, this little gem by their trans creators is awesome and shows they can't help themselves talking shit about someone they don't like:

[Attached screenshot]
 
it's the nature of the thing.
It is. As character.ai proved when their filtering systems went down over a week ago, people could see just how hampered the AI and the website were by them. Their response to the outrage was to close up their Discord, then a few days later claim it isn't affecting AI quality and quadruple down on their stance. Best part is their filters are pretty bad; I personally took it as a challenge and managed to reliably break them until today. Maybe they finally noticed and decided to make them harsher.

That said, the ongoing drama is very funny to watch.
 