Do you have a decent GPU?

Is an RTX 4060ti 16GB decent in 2024?
> Is an RTX 4060ti 16GB decent in 2024?

It's alright for stable diffusion.
> It's alright for stable diffusion.

@Agares I'm going to disagree with macrodegenerate here and suggest the Forge UI instead if you don't feel like using extensions. It's more optimised and generates images faster, with virtually no latency when switching from one checkpoint to another or quickly changing settings like CLIP skip or VAEs.
Follow the instructions on this page:
[GitHub - AUTOMATIC1111/stable-diffusion-webui: Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
Others will recommend ComfyUI, but this is the fastest way into the game for noobs.
Download SDXL and put it in the `models/Stable-diffusion` directory.
Alternatively, if you want something with Danbooru-style tags, go to civit.ai and download either PonyXL or NoobXL/Illustrious.
Edit: ninja'd.
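The steps above (clone the repo, drop a checkpoint into the models folder, launch) boil down to something like this — a rough sketch, where the install path and checkpoint filename are just illustrative examples:

```shell
# Sketch of the install steps above; paths and filenames are illustrative.
SD_DIR="$HOME/stable-diffusion-webui"
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git "$SD_DIR"

# Checkpoints (SDXL, PonyXL, Illustrious, ...) go under models/Stable-diffusion:
CKPT_DIR="$SD_DIR/models/Stable-diffusion"
mv ~/Downloads/sd_xl_base_1.0.safetensors "$CKPT_DIR/"

# First launch creates a venv and pulls dependencies automatically:
cd "$SD_DIR" && ./webui.sh      # use webui-user.bat on Windows
```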
> I'm going to disagree with macrodegenerate here and suggest the Forge UI instead if you don't feel like using extensions. It's more optimised and will take a shorter time to generate images, not to mention virtually no latency if you're switching from one checkpoint to another, as well as quickly changing settings like clip skip or VAEs.

It's also far easier to set up than Auto1111, which makes it really the best option for anyone not entirely comfortable with git (ignoring that Auto1111 isn't being updated anymore).
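For anyone curious, the Forge setup is the same clone-and-launch flow as Auto1111 — a sketch, assuming you're installing from lllyasviel's GitHub repo:

```shell
# Forge installs the same way as Auto1111: clone and run the launcher.
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
cd stable-diffusion-webui-forge
./webui.sh      # webui-user.bat on Windows; checkpoints go in models/Stable-diffusion
```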
> The issue with forge is that there are several plugins which aren't supported that are really important. For example, regional prompting isn't supported as a plugin. That makes running the llama.cpp regional prompter impossible. That's a problem because as we move into the Flux era we are going to need regional separation, even if Flux is good at separating concepts.

There's a Forge extension (though I haven't tried it yet) called Forge Couple that comes pretty close to AUTO1111's regional prompting.
> How's the AMD GPU support? Have there been any breakthroughs or is it still a big hit and miss on Windows?

It depends what your expectations are. You've been able to use DirectML for Stable Diffusion on Windows for some time now, and if you're just doing SD1.5 or SDXL it's adequate-ish. But really you still need Linux, and the thing you need to watch is ROCm support:
> There's a Forge extension (though I haven't tried it yet) called Forge Couple that comes pretty close to AUTO1111's regional prompting.

Looks like I'm using Forge from now on. That extension is great.
PyTorch doesn't support ROCm on Windows yet. The options for Win/AMD are still limited to DirectML, ROCm in WSL, and more niche options like ZLUDA and ONNX.

EDIT: Wait, that says 6.1 is now available for Windows? Okay, that's new since I last checked in. Maybe things have changed. Worth another go then, perhaps.
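If you want to check what your local PyTorch build actually supports rather than guessing, this one-liner helps — a sketch assuming Python is on your PATH (`torch.version.hip` is None on non-ROCm builds, and `torch.cuda.is_available()` also returns True under ROCm, since ROCm reuses the CUDA API surface):

```shell
# Check which GPU backend the installed PyTorch build supports.
python3 - <<'EOF'
try:
    import torch
    print("torch:", torch.__version__)
    print("HIP (ROCm):", getattr(torch.version, "hip", None))
    print("GPU available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
EOF
```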
> There's a Forge extension (though I haven't tried it yet) called Forge Couple that comes pretty close to AUTO1111's regional prompting.

I've only done a few tests and experiments thus far (with a model I'm new to using), but it looks like it's a pretty good equivalent to Auto1111's regional prompter; it's definitely worth picking up if you're using Forge.
> PyTorch doesn't support ROCm on Windows yet. The options for Win/AMD are still limited to DirectML, ROCm in WSL, and more niche options like ZLUDA and ONNX.

Here's your ZLUDA news, bro:
> Illustrious (and to a lesser extent, NoobXL) seems to be a competitor for PonyXL now. Anyone try it out yet?

I've used NoobXL a bit as I was trying for PonyXL alternatives that weren't quite as furry, and while I also have Illustrious, there's an Illustrious mix (NAI-NSFW-illustrious v7) that I prefer and have gotten really good results from. The only downside is that it's a very anime model, so if you want some other style it probably won't do it well. My trend of using NSFW-trained models to make non-NSFW stuff seems to be continuing uninterrupted.
> I've used NoobXL a bit as I was trying for PonyXL alternatives that weren't quite as furry, and while I also have Illustrious, there's an Illustrious mix (NAI-NSFW-illustrious v7) that I prefer and have gotten really good results from.

Were you looking at roughly the same generation times for Noob and Illustrious when compared to Pony?