> only 170 million oh noes (literally just peanuts for the multi billion dollar companies)
That'll be 170m going by the nonsense of every download counting as a lost sale at full RRP/SRP.
> This is all it usually is; for or against piracy. Said muddled spergs are overthinking it.
> Why do I do it? Because I can.
> Why have I been doing it for decades? Because I face no consequences for it.
> What is my moral position on piracy? I don't need one.
This is the truth, and the correct path.
> Why have I been doing it for decades? Because I face no consequences for it.
Even if I faced consequences from multi-billion-dollar companies ordering the government to annoy me, that would just force me to learn how to cover my tracks better.
> rip to nsw2u, nswdl, game-2u, bigngame, ps4pkg.com, mgnetu.com and many more
Joke's on them, 90% of modern games suck anyway.
> Joke's on them, 90% of modern games suck anyway.
You're not wrong.
> rip to nsw2u, nswdl, game-2u, bigngame, ps4pkg.com, mgnetu.com and many more
170m doesn't actually sound that large; maybe if it was only Nintendo, but this is split like 5-10 ways. Did a bunch of tiktok/reddit fags kill the site?
> 170m doesn't actually sound that large; maybe if it was only Nintendo, but this is split like 5-10 ways. Did a bunch of tiktok/reddit fags kill the site?
I suspect the "leave my billion-dollar corp alone" redditfags were involved.
> Are there any AI softwares that I can download and pirate/use infinitely without paying? I want to use AI but I don't want to pay for it until I get some use out of it and I can generate some dosh.
.... tons?
> .... tons? Right now I'm running one in the background to process some music files. It would help if you got a specific idea of what you're looking for.
I appreciate the reply. I would like one for writing, one for image generation, and one for music generation, but the music one is more of a flight of fancy than anything else. I would like to create some books with the writing AI, using the image software to make the illustrations for the book.
You could run QwQ for a chatbot; it's not as good as a full-sized model like GPT-o4 or DeepSeek R1, but it is remarkably good for what it is. You'll want a 24GB GPU for that. For images you can use Flux; it's very good and runs on an 8GB GPU. Music is trickier: I don't think there's really anything you can run at home that's usable, so you may need to just suck it up and pay for Suno there.
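Since you want illustrations: here's roughly what running Flux locally looks like with the diffusers library. A minimal sketch assuming the schnell variant; the model id and step settings are from its model card, so double-check against whatever you actually download:

```python
# Minimal sketch: local Flux image generation via diffusers
# (pip install diffusers transformers accelerate torch).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",  # the small/fast distilled variant
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # swaps weights to system RAM so ~8GB VRAM works

image = pipe(
    "a pen-and-ink book illustration of a lighthouse in a storm",
    num_inference_steps=4,  # schnell is distilled for very few steps
    guidance_scale=0.0,     # schnell ignores classifier-free guidance
).images[0]
image.save("illustration.png")
```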
It's also possible to run models like QwQ without a fancy GPU, but for it to run faster than a snail you'll want as many CPU threads as you can get, plus enough RAM for the model. QwQ is 32 billion parameters, which is roughly 32 GB of RAM, so 48 GB should cover it well. If you can't afford that, there are smaller models out there, such as Llama and Rocinante. To run a model CPU-only it needs to be in GGUF format; programs like llama.cpp and KoboldCpp can run this format.
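If you'd rather script it than use a UI, the llama-cpp-python bindings wrap the same engine. A sketch, assuming you've already downloaded a GGUF quant (the filename below is a placeholder):

```python
# CPU-only inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-32B-Q4_K_M.gguf",  # placeholder: whatever quant you grabbed
    n_ctx=4096,      # context window
    n_threads=16,    # set to your physical core count
    n_gpu_layers=0,  # 0 = no GPU offload, pure CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Outline a three-act fantasy novella."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```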
You can also set it to load only n layers into the GPU and keep the rest in system memory. It's really quite performant.
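With the same bindings that's a single knob; the 30-layer split below is just an example to tune against your VRAM:

```python
# Partial offload sketch: some layers in VRAM, the rest in system RAM.
# Needs llama-cpp-python compiled with GPU (CUDA/Metal) support.
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-32B-Q4_K_M.gguf",  # placeholder filename again
    n_gpu_layers=30,  # how many layers to keep on the GPU; -1 means all
    n_ctx=4096,
)
```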
Good to know about llama.cpp, especially since I learned about a project regarding it and a Minecraft mod where a llama plays on the PC you choose. So, yeah... I learned my lesson when trying to run a Black Desert emulator on the same PC with less than 32 GB of RAM.
It wasn't a smooth experience... But what are the RAM requirements for running on the processor? Translating 1B parameters to 1 GB of RAM is kind of finicky when you consider processor clock speed and RAM frequency and generation, and trial and error, while interesting, needs money to build systems across a certain range. Like running 16 GB of DDR3 on an Intel i3-4130.

The memory used also depends on the quantization of the model.
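As a back-of-envelope (my rule of thumb, not gospel): weight memory is parameter count times bytes per weight, plus some slack for context and buffers. The bits-per-weight figures below are approximate:

```python
# Rough RAM estimate for a quantized model. The 1.2x overhead factor for
# KV cache and buffers is a loose assumption, not a measured number.
def est_ram_gb(params_b: float, bits_per_weight: float) -> float:
    weights_gb = params_b * bits_per_weight / 8  # 8 bits per byte
    return weights_gb * 1.2

for name, bpw in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.8)]:
    print(f"QwQ 32B at {name}: ~{est_ram_gb(32, bpw):.0f} GB")
# -> roughly 77, 41, and 23 GB: so "32B needs about 32 GB" only holds near 8-bit
```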
> You can also set it to load only n layers into the GPU and keep the rest in system memory. It's really quite performant.
Only if you have newer DDR5 memory and PCIe 4 GPUs. My older system is DDR4 and PCIe 3, so bandwidth is the bottleneck. I run a 32B model quantized in EXL3 to fit fully into my 24 GB GPU, and at 4.5 bpw it runs monstrously fast compared to the GGUF version.
I really appreciate this. It sounds like you guys/gals/'gals' know your stuff. I'm a little bit of a retard when it comes to modern tech. Would these programs be available to download on Pirate Bay and run on a less-than-average PC? I don't mind the wait; my first computer as a young'un was a Commodore, and the tape loading never bothered me. Neither did the 30-minute downloads for music files back in the LimeWire + 56kbps internet days. If I can chuck a question/query into the (search?) box and let it do its thing while I do other tasks, that's fine for me.
It's not really piracy, since the model is publicly available as QwQ-32B, or in GGUF form, from their own release page. To run it you use something like KoboldCpp on the GGUF file; there are others, but KoboldCpp is the easiest to get into since it's almost just drag and drop (I'm assuming you use Windows). KoboldCpp then starts a local web server and gives you a UI through your web browser. These models don't really "search" the internet; they just spit out whatever they were pretrained on, based on the "prompt" you put in.
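And yes, you can let it churn in the background: KoboldCpp also serves a small HTTP API on the same port as the UI. A sketch; the endpoint and field names here are the Kobold API as I remember it, so check the docs your instance serves:

```python
# Sketch: sending a prompt to a locally running KoboldCpp instance
# (default port 5001) and reading the completion back.
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "Write the opening paragraph of a pirate novel.",
        "max_length": 300,  # tokens to generate
    },
)
print(resp.json()["results"][0]["text"])
```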
The good thing about most AI models is that they're hosted openly; no need to download torrents from PB or the like. A quick search on HuggingFace for the model you want to use is enough. You do have to download the hefty files, which depending on the model size can reach several gigs.
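You can script the downloads too. A sketch with the huggingface_hub client; the repo id and filename are illustrative, so browse the repo's file list for the exact quant names:

```python
# Sketch: pulling one GGUF file from HuggingFace without a browser
# (pip install huggingface_hub). Repo/filename below are examples only.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Qwen/QwQ-32B-GGUF",     # assumed repo id
    filename="qwq-32b-q4_k_m.gguf",  # assumed quant filename
)
print("saved to", path)
```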