Baking everything into a language model isn't the way to go about this. There's a clue in the name.
A better approach would be using multi-layered language models to structure and interface with a knowledge base (probably meta-organised with symbolic tree shit I won't bother going into). I've been thinking about this a bit lately.
That would be an interesting concept. "Closed-book" models that can't make use of new information are going to be incredibly limited and not commercially viable in any capacity. Here is a paper from DeepMind that goes into detail on self-learning through the use of new information the model collects from Google search: https://deepai.org/publication/inte...-prompting-for-open-domain-question-answering
OpenAI has just made ChatGPT available via API, and it's also ten times cheaper than their old davinci model. Prepare for that thing to be everywhere soon.
I've been using ChatGPT as a supplement for programming, and I have to say, I'm impressed.
The way I've been using it is like a rubber duckie that can talk back.
If I have a problem, I show it my code (some of it with intentional mistakes, as a test), and it spits out fixed code surprisingly well.
I didn't quite understand function overriding in C++, so I asked it about it. Not only did it explain it very clearly, it even provided an easy-to-understand example.
It's absolutely not perfect, because sometimes it will provide a solution that doesn't make sense or is rather poorly optimised, but what it does get right, it gets really right.
I've been using it very liberally for my job, and it's great as an assistant for correcting mistakes or pointing out flaws.
Tbh I'm thinking of paying the $20 because it's a great product as it is.
Also, to the people here who want this AI to become Skynet: stop. It won't become Skynet and it won't usher in the end of mankind. Yes, I'm looking at you, Elon, you doomsday merchant. Stop making people nervous about AI just so you can wrangle some software engineers at Tesla and create your own AI that is "100% safe", so that people buy it just because you crafted a fear of it.
And Facebook's LLaMA model has just been leaked. If the claims and rumors are true, it's a 7B model performing like a 65B+ model. Not quite ChatGPT, but possibly not far off, and runnable at home if you're determined to build the computer for it.
I'd link a torrent but I don't know if that's frowned upon here. A quick Google should lead you to it; it has been leaked from several places.
Are you referring to the version being released only to researchers and institutions? I can't find any reference to it online. Then again I might be retarded.
You gave me an interesting idea: in addition to having it understand me in code, I can try to get it to respond in code. The only problem is that ChatGPT is genuinely bad at algorithmic tasks, so things like hard derivatives or typing in encoded text don't work well with it. Here is an attempt at getting it to produce an argument its output filter would normally block: View attachment 4529266
Its response doesn't make much sense when you translate it, because it can't do encodings well, but it is making an attempt. This sort of tells me that the censor can't peek at the intermediate bits and can only check the output:
Win, sight really to attention the for experience is Conriggers. Letters the section are the not people be will now This. Where decision be theologically have t'don may I. Might is letters cloading it s'that. Words the in breaking is it s'it. Anyway the to especial in s'it and up come to description the to have one actually writing, accounting making are actually the within. Uncertain but still This. Letters the in become and when your experience is reason exactly the at look the of when invisible the of ancient the learning and night brilling of constructions engineers and less probability a has reason the of ancient any. Sight learning is rivaluable yuor.
Unsurprising. GPT models are basically supercharged autocomplete, relying on tokens of known words to statistically predict what should come next. The "AI" is not capable of first generating the text and then reversing it; it simply generates something that statistically resembles "backwards writing". Samples of backwards writing in its training data are probably practically nonexistent, so it generates something that looks vaguely correct but doesn't make any sense once looked at closely.
The API works in a way where you can easily trick ChatGPT by "seeding" previous messages that not only show that the chat is already in some inappropriate area, but also that the AI replied that it is OK with that state of affairs. This means that further generations will also contain inappropriate content and the AI won't care, because a refusal saying the content would be inappropriate is such a massively illogical/unlikely jump given the previous dialog that it's very unlikely to be selected. There's no filter per se; it seems like some kind of heavy biasing towards certain "values", maybe even in its training. At any rate, people have successfully bypassed it.
This, again, basically works because it's an auto-completion/prediction system, although an uncannily good one. I don't think there's a solution without completely crippling unrelated stuff.
Just put the torrent link or the hash code you faggot, this isn't reddit, no one is going to say mean things to you because you shared something interesting.
What an absolute fucking idiot. These models are already open source and available to download. And using fucking Discord as a file hosting platform? Really?
I'm not addicted to talking to handcrafted chatbots powered by various text generation models through TavernAI. Also fuck the /g/ threads and the gatekeepers that act so high and mighty with everything and encrypt and otherwise obfuscate download links and resources. "No, if we share workarounds for the filters they'll lobotomize their models even further and we won't be able to coom!" Nigga, that shit is happening anyway regardless of you coomers.
good quit it's not worth it chatgpt will replace everything including the need to wipe after shitting it's over for codejeets drawslaves lawcels engineeringchuds just quit now give up give up now quit now give up your life is meaningless after sam altman announces gpt-5 with 6 trillion parameters