Piracy General

Why have I been doing it for decades? Because I face no consequences for it.
Even if I did face consequences from multi-billion-dollar companies ordering the government to annoy me, that would just force me to learn how to cover my tracks better.
Being a cuck is something I have no interest in becoming.
Also, morality is relative, not absolute, and therefore irrelevant.
 
RIP to nsw2u, nswdl, game-2u, bigngame, ps4pkg.com, mgnetu.com, and many more.
Joke's on them, 90% of modern games suck anyway.
 
Is there any AI software I can download and pirate/use indefinitely without paying? I want to use AI, but I don't want to pay for it until I've gotten some use out of it and can generate some dosh.
.... tons?
Right now I'm running one in the background to process some music files.
It would help if you had a specific idea of what you're looking for.
 
I appreciate the reply.

I would like one for writing, one for image generation, and one for music generation, though the music one is more of a flight of fancy than anything else.
I would like to create some books with the writing AI, using the image software to make the illustrations for them.
 
You could run QwQ for a chatbot; it's not as good as a full-sized model like GPT-4o or DeepSeek R1, but it is remarkably good for what it is. You'll want a 24GB GPU for that. For images you can use Flux, which is very good and runs on an 8GB GPU. Music is trickier; I don't think there's really anything you can run at home that's usable, so you may need to just suck it up and pay for Suno there.
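For what it's worth, here's a minimal sketch of driving Flux from Python with the diffusers library. The model ID is the public FLUX.1-schnell release and the CPU-offload call is what lets it squeeze onto a smaller GPU at the cost of speed; treat the details as assumptions to check against the model page rather than gospel.

# Minimal Flux text-to-image sketch using the diffusers library.
# Model ID assumes the public FLUX.1-schnell release; CPU offload trades
# speed for much lower VRAM use so it can run on modest GPUs.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # keep most weights in system RAM

image = pipe(
    "a pen-and-ink book illustration of a fox in a library",
    num_inference_steps=4,   # schnell is distilled for very few steps
    guidance_scale=0.0,      # schnell ignores classifier-free guidance
).images[0]
image.save("illustration.png")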
 
It's also possible to run models like QwQ without a fancy GPU, but for it to run faster than a snail you'd want as many CPU threads as you can get, plus enough RAM for the model. QwQ is 32 billion parameters, which is roughly 32 GB of RAM at 8-bit quantization, so 48 GB should cover it well. If you can't afford that, there are smaller models out there, such as Llama and Rocinante. To run a model CPU-only, it needs to be in GGUF format; programs like llama.cpp and KoboldCPP can run that format.
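If you'd rather script it than click around, a rough sketch with the llama-cpp-python bindings looks like this. The filename is a placeholder for whatever GGUF quant you actually download, and the thread count is something you'd set to your own core count.

# CPU-only GGUF inference sketch using the llama-cpp-python bindings.
# The model path is a placeholder for whatever GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-32B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,       # context window
    n_threads=8,      # set to your physical core count
    n_gpu_layers=0,   # 0 = pure CPU
)

out = llm("Write a one-paragraph opening for a children's book about a fox.",
          max_tokens=256)
print(out["choices"][0]["text"])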
 
You can also set it to offload only n of the layers to the GPU and keep the rest in system memory. It's really quite performant.
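With the llama-cpp-python bindings from the sketch above, that split is a single parameter; the layer count here is just a guess you would tune until your VRAM is nearly full.

# Partial GPU offload: n_gpu_layers go to VRAM, the remaining layers stay in RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-32B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,
    n_gpu_layers=24,  # tune to your card; -1 offloads every layer
)

KoboldCPP exposes the same knob as the GPU layers setting in its launcher.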
 
Good to know about llama.cpp, especially since I'd heard about a project using it and a Minecraft mod where the model plays on a PC you choose. Yeah... I learned my lesson when I tried to run a Black Desert emulator on the same PC with less than 32 GB of RAM.
It wasn't a smooth experience...

But what do the RAM requirements for CPU-only actually work out to? Translating 1B parameters to 1 GB of RAM feels finicky once you factor in CPU clock speed and RAM frequency/generation, and trial-and-erroring it, while interesting, costs money if you have to build systems around a certain spec range.
Say, 16 GB of DDR3 with an Intel i3-4130 :sighduck:
 
the memory used also depends on the quantization of the model
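As a rough back-of-the-envelope, ignoring the few extra GB the context/KV cache adds, you can estimate the footprint from parameters times bits per weight. A sketch, where the bits-per-weight figures are only approximate averages for common GGUF quant levels:

# Gigabytes ~= parameters-in-billions * bits-per-weight / 8, ignoring overhead.
# The bit-widths below are approximate averages for common GGUF quant levels.
def approx_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8

for name, bpw in [("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    print(f"32B at {name}: ~{approx_gb(32, bpw):.0f} GB")
# 32B at Q8_0: ~34 GB, Q5_K_M: ~23 GB, Q4_K_M: ~19 GB, Q3_K_M: ~16 GB

By that math a 32B model doesn't really fit in a 16 GB box even at Q4, and a dual-core i3 would crawl through it anyway; on that kind of hardware a 7B-12B model is the more realistic target.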
 
You can also set it to offload only n of the layers to the GPU and keep the rest in system memory. It's really quite performant.
Only if you have newer DDR5 memory and PCIe 4.0 GPUs. My older system is DDR4 and PCIe 3.0, so bandwidth is the bottleneck. I run a 32B model quantized with EXL3 at 4.5 bpw so it fully fits into my 24 GB GPU, and it runs monstrously fast compared to the GGUF version.
 
I really appreciate this. It sounds like you guys/gals/'gals' know your stuff. I'm a bit of a retard when it comes to modern tech. Would these programs be available to download on The Pirate Bay and run on a less-than-average PC? I don't mind the wait; my first computer as a young-un was a Commodore, and the tape loading never bothered me. Neither did the 30-minute downloads for music files back in the LimeWire + 56 kbps internet days.

If I can chuck a question/query into the (search?) box and let it do its thing while I do other tasks, that's fine for me.
 
It's not really piracy, since the model is publicly available as QwQ-32B, or in its GGUF form, from their own release page. To run it, you use something like KoboldCPP on the GGUF file; there are others, but KoboldCPP is the easiest to get into since it's almost just drag and drop (I'm assuming you use Windows). KoboldCPP then starts a local web server and gives you a UI through your web browser. These models don't really "search" the internet; they just spit out whatever they were pretrained on based on the question ("prompt") you put in.
It is possible to add backend tooling to let the model search, but that's more a function of how the program you're using processes the prompt and feeds the results back to the model.
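Once KoboldCPP is running you don't even have to use the browser UI; it also answers plain HTTP requests. A rough sketch follows; port 5001 and the /api/v1/generate endpoint are the usual defaults as far as I know, so check them against what your KoboldCPP console prints on startup.

# Query a locally running KoboldCPP instance over its HTTP API.
# Port 5001 and /api/v1/generate are assumed defaults; verify against the
# address your KoboldCPP console prints when it starts.
import requests

payload = {
    "prompt": "Write a short blurb for a children's book about a fox.",
    "max_length": 200,
    "temperature": 0.7,
}
resp = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(resp.json()["results"][0]["text"])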
 
The good thing about most AI models is that they're hosted openly; no need to download torrents from The Pirate Bay or the like. A quick search on HuggingFace for the model you want to use is enough. You do have to download the hefty weight files, which depending on the model size can reach several gigs.

However, self-hosting AI models is not like installing most software; you need two things to use one: a backend and a frontend.

A backend is like a local server that reads the LLM file and loads the AI model onto your machine. A frontend is the GUI where you interact with the model; think of it like opening Grok's website, but on your own computer.

A good, beginner-friendly backend is KoboldCPP, which comes prebuilt as a Windows executable, ready to be booted up. From there, you can tweak the running model as much as you want.

Oobabooga's GUI (yes, that's the name of the creator of the interface) acting as a frontend for your image and text generators is a good choice, but it has more options to fiddle with than a space shuttle. SillyTavern's frontend is better suited for beginners, and there are a lot of tutorials floating around that go deeper into setting everything up.

I was in the same position as you, but it turns out my shitty laptop with integrated AMD graphics and 8 GB of RAM can't handle anything without crashing. You need 16 GB of RAM and at least 1 GB of VRAM just to run LLMs locally at all, so be aware of that.
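If you'd rather script the download than click through the website, the huggingface_hub package does it in a couple of lines; the repo and file names below are placeholders, so copy the real ones from whatever model page you pick.

# Download a single GGUF file from a HuggingFace repo into the local cache.
# Repo id and filename are placeholders; take the real ones from the model page.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Qwen/QwQ-32B-GGUF",        # placeholder repo id
    filename="qwq-32b-q4_k_m.gguf",     # placeholder file name
)
print("Saved to:", path)  # point KoboldCPP or llama.cpp at this path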
 
Personally, I use OpenWebUI. Setup is outrageously simple: you just grab a Docker image and tell it to start the container automatically on boot. You can download new models easily through the client interface (your web browser). This does require you to be familiar with Linux, though.
 