ChatGPT - If Stack Overflow and Reddit had a child

How do I get LLaMA to have a chatbot conversation with me and not just spout an endless opening monologue of a story?
I was using an example script from the README: ./main -m models/7B/ggml-model-q4_0.bin -n -128 --color -r "User:" --in-prefix " " \
Edit: NVM, fixed it; switched modes and installed a GUI.
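For anyone landing here with the same problem, here's a minimal sketch of an interactive invocation, assuming a 2023-era llama.cpp build (exact flags and the bundled prompt file can differ between versions):

./main -m models/7B/ggml-model-q4_0.bin -n 256 --color \
  -i -r "User:" --in-prefix " " \
  -f prompts/chat-with-bob.txt

The -i flag keeps the session interactive instead of letting the model free-run, -r "User:" hands control back to you whenever that reverse prompt shows up, and -f seeds a chat-style prompt so it answers like an assistant rather than narrating a story.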
 

Now imagine you create such a thing with the express goal of pretending to be real online. No safeguards, no "as a language model", nothing like that. There's no way you're ever gonna snuff that out on social media, where people tend to talk at and not with each other to begin with. This is gonna be a real problem. Maybe it already is. I think OpenAI has a division to spot "unknown AI" in the wild. It's probably not the only one.
There have been bots on FB for years now (or, at least, I don't think white dudes from Minnesota have extremely strong opinions about the legitimacy of the Ukrainian Orthodox Church), so I don't think you need a super-powered LLM to do it. In fact, making them really dumb would help them blend in better.
 
My experience with AI so far is just on the graphics side. There I can download models and run them locally and train my own.

Is there any equivalent on the non-graphics side of things, or is that wholly the province of big companies like Google? I.e., are there models I can download to do things like converse, analyse computer code, etc., and open-source software to run them?

EDIT: Wow, have I been out of the loop. Just read through the past few pages and it seems like there are SEVERAL ways to do this and models out there. I'm going to have to do some serious reading. Though it seems like the GPU requirements are even crazier than AI art generation. I have a 20GB GPU. Any hope of getting going with this? And does it need to be CUDA, 'cause I'm on AMD right now...
 
20GB would be fine for the smaller models, but I don't think any of the UIs support AMD on Windows. I assume it'd be a whole hell of a trip trying to get it working. Probably simpler for your sanity to just spin up a remote instance.
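As a rough sanity check, assuming a ~4-bit quantization like Q4_K_M (about 4.85 bits per weight): a 13B model needs roughly 13e9 × 4.85 / 8 ≈ 8 GB for the weights, plus a couple of GB for context and scratch buffers, so it sits comfortably inside 20GB. A 70B model at the same quantization is around 40GB of weights alone, so that one needs heavy CPU offloading or a remote box.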
 
I posted a standalone GUI a few posts back that works on Windows and Mac. It's called LM Studio, and while I've only used the Mac version I don't see why the Windows version wouldn't work as well. Granted, I don't know shit about AMD or whatever, but it's worth testing. Works great with the GPU enabled on my M2 64GB Mac. Runs 70B models just fine.
 
I have a 20GB GPU. Any hope of getting going with this? And does it need to be CUDA, 'cause I'm on AMD right now...
You can try koboldcpp, it's one of the easiest ways to get started. Pick a smaller model, download the Q4_K_M .bin file, and load it with koboldcpp. Try the CLBlast backend in the koboldcpp configuration GUI; it should support AMD. The fallback is OpenBLAS, which is CPU based.
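If the GUI gives you trouble, the same settings can be passed on the command line. A rough sketch, assuming the Python launcher (the Windows .exe takes the same flags; the model filename here is just a placeholder, and flag names can shift between releases, so check --help):

python koboldcpp.py your-model.Q4_K_M.bin --useclblast 0 0 --gpulayers 32 --contextsize 4096

--useclblast 0 0 picks the first OpenCL platform and device, which is the route that works on AMD, and --gpulayers sets how many layers get offloaded to VRAM. Leave --useclblast off and it falls back to the CPU OpenBLAS backend.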
 
ChatGPT has turned to utter shit since I last used it. That’s what you get running it through so many morality filters to sanitise your product. It flags even the most mundane question as violating their terms of service. Every request I make gets flagged no matter how innocent. Last week I used that new data analysis feature and got it to view an image and tell me what genre of graphic design the image was. Now it just says “I can’t view images”. Horseshit. I wouldn’t be so annoyed if they were cucking features and being honest about it but lying to customers? Get stuffed. It’s useless for anything other than what a corporate drone would use it for, writing code or analysing reports. I’m done with this piece of shit company and sticking to local models from now on.
 
Create AI software.
Cripple it to the point of retardation so it can't nooootice things.
Modern tech in a nutshell.
 
So I decided to check out that AI game that Null played on MATI a few months ago. For reference, this is the one where it has you chat with either a human or an AI, and you have to guess which it is. Null tried to game the system by asking the Human/AI to say the n-word.

They released the results of their study here (archive)

Of note, they listed this as one of the most popular strategies:
(screenshot: humanornot.png)

Even more entertainingly, from their word cloud of the most popular phrases to use:
(screenshot: human_or_not_word_cloud.png)
 
Is there any way to pirate GPT-4? Is LLaMA half as good?
 