AI home discussion and setup - AI here refers to generative AI models, not true AI. We're not trying to homebrew our own Skynet, feds.

This is the thread for discussing setting up your own genAI, because Craiyon/ChatGPT are gay corpo bullshit.

I have been trying to set up my own AI with no success. I want to generate high quality images with my machine.

Has anyone done stuff like this? Does anyone know the minimum specs and how to set it up properly? What guides did you guys follow?
 
"High quality" is subjective, but I have no issues generating images with Stable Diffusion.

Hardware is your main limitation, particularly VRAM. AI image generation uses an enormous amount of it, and the more you have, the higher the resolution you can get. There are methods to get it working on video cards with as little as 2GB of VRAM, but they compromise a lot and, as the other poster said, slow everything down to an almost unbearable level. The "minimum" most people recommend is 8 GB of VRAM; that allows you to run everything without any weird compromises or janky workarounds.

Oh, and you need to be running an NVIDIA card. AI image generation is built almost entirely around CUDA, which is proprietary to NVIDIA. There are, again, some jury-rigged ways around it, but all the methods I've seen require Linux and only barely work.

Regardless, the link above is a way to just give it a shot. You just install it and access it through your web browser. Nowadays it is pretty simple and just about as easy as installing any other kind of software.
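For reference, here is a minimal PyTorch sketch (assuming a CUDA-enabled build of torch is already installed) that reports whether a card is visible and how much VRAM it has, before you bother installing a webui:

```python
# Minimal sanity check: does PyTorch see a CUDA-capable card, and how much VRAM?
# Assumes a CUDA build of PyTorch; a CPU-only build simply reports no device.
import torch

if not torch.cuda.is_available():
    print("No CUDA device visible - you'd be stuck with slow CPU generation.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
    if vram_gb < 8:
        print("Under the commonly recommended 8 GB - expect low-VRAM workarounds.")
```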
 
What video card do you have?

It's typically not worth running Stable Diffusion locally unless you want to die of old age waiting for things to generate, but there's an automated installer assuming you're running Windows and have an NVIDIA card.
Wow this one has a setup and everything? Amazing. Do you know if it can load .tensor models?
"High quality" is subjective, but I have no issues generating images with Stable Diffusion.

Hardware is your main limitation, particularly VRAM. AI image generation uses an enormous amount of it, and the more you have the higher resolution you can get. There are methods to get it working on video cards as low as 2GB VRAM, but they compromise a lot and as the other poster said, slows everything down to an almost unbearable level. The "minimum" most recommend is 8 GB VRAM; that allows you to run everything without any weird compromises or janky workarounds.

Oh, and you need to be running NVIDIA cards. AI image generation is based all around CUDA, which is proprietary to NVIDIA. There are, again, some weird jury rig ways around it, but all the methods I've seen require Linux and just barely work.

Regardless, the link above is a way to just give it a shot. You just install it and access it through your web browser. Nowadays it is pretty simple and just about as easy as installing any other kind of software.
Thanks! I think I meet those requirements though!
 
There's a ton of tutorials for setting up Stable Diffusion and A1111. Optimally you want something like an RTX 3060 to run it all decently; it's the most cost-efficient card for this. Also, it's worth noting that whenever you see a nice AI generated image online, someone made 4000 pictures with the same prompt, discarded 3999 bad ones and presented the only decent one as a masterpiece. Also be prepared for the fact that you will have to download and experiment with hundreds of checkpoints, loras, negative embeddings and lycorises, only to realize that your images are still shit (spoiler alert: even with literally dozens of negative embeddings for better hands, you will still get shitty hands 95% of the time). You will then start training your own loras and lycorises, only to realize you need to master the regional prompter to make anything decent, only to realize that the regional prompter does not really support loras well. It's all so tiresome, but who can fight the dream of generating the perfect waifu one day?
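For anyone curious what the checkpoint/LoRA/negative-prompt juggling looks like outside the webui, here is a rough sketch using the Hugging Face diffusers library; the model ID and LoRA path are placeholders to swap for whatever you actually downloaded:

```python
# Hedged sketch: base checkpoint + optional LoRA + negative prompt via diffusers.
# The model ID and LoRA path below are examples only, not recommendations.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Optional: layer a LoRA on top of the base checkpoint.
# pipe.load_lora_weights("loras/your_style_lora.safetensors")  # hypothetical path

image = pipe(
    prompt="portrait, detailed face, soft lighting",
    negative_prompt="blurry, extra fingers, bad hands",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```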
 
I'm getting there.

1689874809368.png


If you squint hard enough...


I'm actually looking for a good Burton copycat.

 
Also, it's worth noting that whenever you see a nice AI generated image online, someone made 4000 pictures with the same prompt, discarded 3999 bad ones and presented the only decent one as a masterpiece.
This is pretty much the SD workflow, yes. Once you find something good, you regenerate it using the same seed and upscale it.

Getting the initial shotgun blast of images and seeing what shit sticks to the wall is why it's useful to have a fast card. I use runpod.io since the community cloud has pretty cheap A40s.
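To make the seed part concrete, here is a sketch of the same "generate a pile, keep one, re-run its seed" loop with diffusers (model ID and prompt are just stand-ins; upscaling would be a separate step):

```python
# Sketch of the batch-then-reroll workflow: fixing the torch.Generator seed
# reproduces a draft so it can be re-run at higher quality before upscaling.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "cozy cabin in a snowy forest, volumetric light"

# Pass 1: quick low-step drafts under random seeds.
seeds = [int(s) for s in torch.randint(0, 2**31 - 1, (8,))]
for seed in seeds:
    gen = torch.Generator("cuda").manual_seed(seed)
    pipe(prompt, generator=gen, num_inference_steps=20).images[0].save(f"draft_{seed}.png")

# Pass 2: re-run the seed you liked with more steps.
best_seed = seeds[0]  # whichever draft looked best
gen = torch.Generator("cuda").manual_seed(best_seed)
pipe(prompt, generator=gen, num_inference_steps=50).images[0].save(f"final_{best_seed}.png")
```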
 
128x128. I was running tests to see if the webui was working properly.

I changed the model which made it resemble what I was looking for, but it crashed.
 
Are you using a .ckpt or a .safetensors model?

IF you are on Linux and IF you are using a .ckpt, then the webui will sometimes leak memory when swapping models. This does not occur with .safetensors files, provided everything else is set up correctly.

IF on Linux, you may also want to look into the memory allocator on your machine. Preloading "libtcmalloc" fixes some memory leaks that PyTorch hits with the default glibc allocator. The issue is so common that the webui will even warn you if TCMalloc isn't present when launching on Linux.

I've not used the webui on Windows, so if that is your OS I have no experience in the matter.
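As an aside, one way to stop worrying about .ckpt files entirely is to convert them to .safetensors yourself. This is only a sketch and assumes you trust the file (a .ckpt is a pickle, so loading it can execute arbitrary code); the file names are placeholders:

```python
# Hedged sketch: convert a pickled .ckpt checkpoint to .safetensors.
# Only do this for files you trust - loading a .ckpt runs arbitrary pickle code.
# Requires: pip install torch safetensors
import torch
from safetensors.torch import save_file

ckpt = torch.load("model.ckpt", map_location="cpu", weights_only=False)
state_dict = ckpt.get("state_dict", ckpt)  # SD checkpoints usually nest weights here

# Keep only actual tensors; clone to break shared storage, which save_file rejects.
tensors = {k: v.detach().clone().contiguous()
           for k, v in state_dict.items() if isinstance(v, torch.Tensor)}

save_file(tensors, "model.safetensors")
print(f"Wrote {len(tensors)} tensors to model.safetensors")
```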
 
I think it was not set up properly.

I reinstalled everything from scratch and now I seem to be getting the expected results:
1689981339656.png
 
Any textbots that can't get lobotomized yet?
 
Call me a faggot but would it not be funny as fuck to buy a server, get some sort of chat AI on it and make it communicate through 100~ twitter accounts spouting right wing ragebait (such as ukranians need to surrender if it saves just one life/etc) just to fuck with twitter niggers?

The potential for a-loggery is outstanding...
 