Blah blah hate site blah blah problematic blah blah trannies blah blah suicides
> It’s just the same derivative shit that’s regurgitated on every major platform. Apparently we’re too scary to have our url listed on Wikipedia but Sanctioned Suicide gets its own link on there.

That's actually tamer than I thought it would be. But at the end "reputational risks" lol
Nice try ChatGPT. I have no reputation.
It’s just the same derivative shit that’s regurgitated on every major platform. Apparently we’re too scary to have our url listed on Wikipedia but Sanctioned Suicide gets its own link on there.
That makes absolutely no fucking sense. Did those hiking douchebags decide to have one fucking child or no children? Why is there a child in the graphic when it says they chose to have no fucking children in the header? Why is an organisation called One Planet ONE FUCKING CHILD promoting having no children? This is boggling my mind and anyone involved with this billboard should MAID THEMSELVES already.
> I've heard about it but haven't seen something proving the claims about how efficient it is. "Similar" tech could mean it's still far less capable. What kind of hardware do you actually need to try it out? I'm guessing it's still enterprise grade.

Is anyone talking about the LLaMA leak? We got too many AI threads...
Pre-leak:
Meta unveils a new large language model that can run on a single GPU
LLaMA-13B reportedly outperforms ChatGPT-like tech despite being 10x smaller. (arstechnica.com)
Post-leak:
GitHub - shawwn/llama-dl: High-speed download of LLaMA, Facebook's 65B parameter GPT model (github.com)
It appears to be 235 gigabytes total to download all sizes of the model.
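That 235 GB figure is about what you'd expect for fp16 weights across all four sizes. A back-of-the-envelope check (parameter counts are the published LLaMA sizes; 2 bytes per parameter assumes fp16 with no container overhead):

```python
# Sanity-check the ~235 GB download: fp16 weights at 2 bytes per parameter.
# Parameter counts (billions) for the 7B/13B/33B/65B LLaMA models.
sizes_billion = [6.7, 13.0, 32.5, 65.2]

bytes_per_param = 2  # fp16
total_gb = sum(sizes_billion) * bytes_per_param  # 1e9 params * 2 B = 2 GB per billion

print(f"{total_gb:.0f} GB")
```

That lands at roughly 235 GB, so the torrent is almost certainly the raw fp16 checkpoints with no quantization applied.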
I've heard about it but haven't seen something proving the claims about how efficient it is. "similar" tech could mean it's still far less capable. What kind of hardware do you actually need to try it out? I'm guessing it's still enterprise grade.
> IDK how much VRAM it needs but The Cloud has a solution. Just rent it.

I'm running LLaMA-65B on a single A100 80GB with 8-bit quantization. $1.50/hr on vast.ai
The output is at least as good as davinci.
I think some early results are using bad repetition penalty and/or temperature settings; I had to set both fairly high to get the best results. (Some people are also incorrectly comparing it to ChatGPT or the ChatGPT API, which is not a good comparison. But that's a different problem.)
I've had it translate, write poems, tell jokes, banter, and write executable code. It does it all, and all on a single card.
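For anyone wondering what those two knobs actually do: here's a minimal, self-contained sketch of the common CTRL-style repetition penalty plus temperature scaling, applied to toy logits (the token ids and scores are made up; this is the general technique, not LLaMA's exact sampler code):

```python
import math

def adjust_logits(logits, prev_tokens, repetition_penalty=1.3, temperature=0.8):
    """Penalize already-generated tokens, then scale by temperature.

    logits: dict of token id -> raw score. prev_tokens: ids generated so far.
    CTRL-style rule: divide positive scores by the penalty, multiply negative
    ones, so a repeated token always loses probability either way.
    """
    out = {}
    for tok, score in logits.items():
        if tok in prev_tokens:
            score = score / repetition_penalty if score > 0 else score * repetition_penalty
        out[tok] = score / temperature
    return out

def softmax(scores):
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Two tokens with equal raw scores; token 1 was just emitted, so it
# should come out less likely than token 2 on the next step.
probs = softmax(adjust_logits({1: 2.0, 2: 2.0}, prev_tokens={1}))
```

With the penalty at 1.0 both tokens stay at 50/50; raising it shifts probability toward unseen tokens, which is why a too-low setting makes the model loop on the same phrases.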
I'm seeing a lot that tells me this really shouldn't be compared to chat.

GitHub - shawwn/llama-dl: High-speed download of LLaMA, Facebook's 65B parameter GPT model (github.com)
IDK how much VRAM it needs but The Cloud has a solution. Just rent it.
> It's nice that it's been ripped out of META's hands but it's not a competitor to the big boy. Maybe over time it'll be.

Note that unlike ChatGPT, these models are pure text completers and have not been trained to be prompted. The llama FAQ [1] mentions this and gives tips for how to get out of the ChatGPT mindset and prompt llama better.
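To illustrate the "text completer" point: instead of giving a base model an instruction, you write the start of a document whose natural continuation is the answer you want, and let it complete the text. A toy helper (the Q&A framing is my own example of the idea, not taken from the FAQ verbatim):

```python
def completion_prompt(question: str) -> str:
    """Frame a question as a document prefix for a base (non-instruct) model.

    A pure text completer has no notion of 'user' or 'assistant'; it just
    continues whatever text it's given. So we write a snippet that the
    desired answer would plausibly follow, ending right where the answer
    should begin.
    """
    return (
        "The following is a Q&A session with a knowledgeable expert.\n"
        f"Q: {question}\n"
        "A:"
    )

prompt = completion_prompt("What causes tides?")
```

Prompting it ChatGPT-style ("Please explain tides to me") tends to get you a continuation of the request itself, e.g. more sentences of someone asking for things, rather than an answer.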
Man welcomes the downfall of his career with humility and grace, colourized 2023.
> Man welcomes the downfall of his career with humility and grace, colourized 2023.

It depends how you use it and for what purpose.
How do you actually get llama to generate content you want if you cannot prompt it?
GitHub - ggerganov/llama.cpp: Port of Facebook's LLaMA model in C/C++ (github.com)
Here's somebody who wrote inference for Facebook's LLaMA models in pure C/C++. Not only does his implementation run on the CPU (as opposed to GPUs) at reasonable speeds, you could also theoretically fit the big 65B (4-bit) model into 64 GB of RAM.
> How do you actually get llama to generate content you want if you cannot prompt it?

The prompting was one-off but there's currently a PR for an interactive mode, to basically "chat" with it, or more correctly, do text completion interactively. I had no time to look into it but there's also progress being made on better samplers. It's very fast moving right now. People have managed to get 7B llama running on Raspberry Pis, albeit slowly. It looks like there's massive room for improvement if you get away from the Python jank.
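The "65B in 64 GB" claim checks out on paper. A rough sizing sketch (the 20% overhead factor for quantization scales and runtime buffers is my own guess, not a figure measured from llama.cpp):

```python
def quantized_weight_gb(params_billion: float, bits_per_weight: int,
                        overhead: float = 1.2) -> float:
    """Approximate RAM needed to hold a quantized model.

    overhead pads for per-block quantization scales, the KV cache, and
    runtime buffers (a rough assumed factor, not a measured one).
    """
    raw_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits / 8 = GB
    return raw_gb * overhead

need_gb = quantized_weight_gb(65, 4)  # the 65B model at 4 bits
```

The 4-bit weights alone are 32.5 GB, and even with generous overhead you stay around 39 GB, comfortably inside 64 GB of RAM; the same model in fp16 would need roughly 130 GB before overhead.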
> "People are 'Jailbreaking' ChatGPT to Make It Endorse Racism, Conspiracies"

Like clockwork.
> Of course the wannabe communist SJW programmers have disabled the AI from being critical of "The New Normal".

I'm done with this software. The more I use it, the more it makes me angry.