Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

The AI arms race begins!

Glaze, a free tool from the University of Chicago that masks art uploaded online to protect it from AI style mimicry, has been released: http://glaze.cs.uchicago.edu

 
Had a lot of fun with this a few months ago and went back to check on how it’s going… what’s a LORA? Seems my embeddings are obsolete now, I guess :lit:
I recently discovered these things myself when I updated to SD 1.5. They're what's known as Low-Rank Adaptation for Fast Text-to-Image Diffusion Fine-Tuning, and a guide to them can be found here ( Archive ). They're basically add-ons to checkpoints (what we generally refer to as models) that steer the AI toward producing images of whatever the LoRA was trained on, and they're orders of magnitude smaller than checkpoints. I haven't really used them much, but from what I can tell there are certain trigger strings you have to put into the prompt to get them to work.
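
For anyone who'd rather see the idea in code than in the webui, here's a minimal sketch using the Hugging Face diffusers library. The base checkpoint name is real, but the LoRA filename and the "mycharacter" trigger word are placeholders I made up.

# Sketch: stacking a LoRA on top of a base checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # the base checkpoint ("model")
    torch_dtype=torch.float16,
).to("cuda")

# The LoRA file is tiny compared to the checkpoint and gets loaded on top of it.
pipe.load_lora_weights("./loras/my_character_lora.safetensors")  # hypothetical path

# Many LoRAs are trained around a trigger word that goes into the prompt.
image = pipe(
    "masterpiece, best quality, mycharacter, standing in a field",
    num_inference_steps=28,
).images[0]
image.save("lora_test.png")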
 
I don't know if this is the right thread, but what's the best way to upscale a picture with real people in it, like a screenshot of a show or TV program?
I have this screenshot from a video that isn't in HD. I tried to upscale it via Krita with a plug-in; the result is good, but it's still a bit blurry.

Can you be fat and healthy [v6mMpE8AaA0]-00.05.11-7787.jpg Can you be fat and healthy [v6mMpE8AaA0]-00.05.11-7787-upscaled-krita.jpg
 
I don't know if this is the right thread, but what's the best way to upscale a picture with real people in it, like a screenshot of a show or TV program?
I have this screenshot from a video that isn't in HD. I tried to upscale it via Krita with a plug-in; the result is good, but it's still a bit blurry.

If you're looking for a standalone upscaler, I've had some decent results with Waifu2x. There's also an executable you can download from GitHub ( Archive ).
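
If you'd rather script it than click through a GUI, the general idea is to call the executable on the frame, something like the sketch below. The flag names are how I remember the waifu2x-ncnn-vulkan build working and the filenames are placeholders, so check them against the tool's --help before trusting this.

# Rough sketch: driving a downloaded waifu2x-ncnn-vulkan executable from Python.
import subprocess

subprocess.run(
    [
        "waifu2x-ncnn-vulkan",       # path to the downloaded executable
        "-i", "screenshot.jpg",      # the blurry input frame
        "-o", "screenshot_2x.png",   # where to write the upscaled result
        "-n", "2",                   # denoise level (assumed range 0-3)
        "-s", "2",                   # 2x upscale
    ],
    check=True,
)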
 
What is the AI that does hentai? I searched it and couldn't find it.

I want to generate AI anime tits.
I believe you might be thinking of some version of WaifuDiffusion, but CivitAI hosts an index of checkpoints for users to download, some of which are designed to generate all the titties you could ask for.
 
What is the AI that does hentai? I searched it and couldn't find it.

I want to generate AI anime tits.
Honestly, it's far harder to find an anime model that can't do hentai/titties, and much to my surprise there are models that seemingly can't generate a woman without baring her chest.
 
I recently discovered these things myself when I updated to SD 1.5. They're what's known as Low-Rank Adaptation for Fast Text-to-Image Diffusion Fine-Tuning, and a guide to them can be found here ( Archive ). They're basically add-ons to checkpoints (what we generally refer to as models) that steer the AI toward producing images of whatever the LoRA was trained on, and they're orders of magnitude smaller than checkpoints. I haven't really used them much, but from what I can tell there are certain trigger strings you have to put into the prompt to get them to work.
Not always; you can also just put <lora:etc...:weight> in your prompt. Whether you need extra trigger strings usually depends on how good/generic the LoRA is, or how specific you want it to be. The more flexible LoRAs really stand out by virtue of basically working with anything you throw at them. We're talking style or character LoRAs that don't add any unwanted noise to whatever you prompt.
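
As a quick illustration of where that weight number lives (the LoRA name and trigger word here are made up):

# In the A1111 webui, the weight rides along inside the prompt tag itself.
webui_prompt = "masterpiece, best quality, 1girl, coolstyle, <lora:coolArtStyle_v2:0.7>"

# In diffusers, after loading the LoRA as in the earlier sketch, the same knob
# is (at least in the versions I've used) the scale passed at generation time:
#   pipe(prompt, cross_attention_kwargs={"scale": 0.7}, ...)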
What is the AI that does hentai? I searched it and couldn't find it.

I want to generate AI anime tits.
AOM or AnythingV2-3 seems to be the go-to.
 
Has anyone tried using Stable Diffusion to colorise a black-and-white image prompt with any success?
Until you brought it up, nope. That said...

Initial thoughts: yes, you can. If you want to keep as much of the original image as possible, you need to use a low denoising strength, which definitely hinders the colorization even if you massively increase the step count to compensate (and that's with "black and white" in the negative prompt with heavy emphasis and "colored" added to the prompt with heavy emphasis).

Using variable denoising strength, same step count (40):
Denoise Test1.jpg
Denoising, step count 80:
Denoise Test2.jpg
Denoising, step count 120:
Denoise Test3.jpg
So you can do it, though you're going to lose some details/features of the original image. You may be able to run the image through with a low denoising strength at high steps, then run the output through again, and again, and again, but I don't really feel like testing that out for you to see if it'll work.
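
If anyone wants to reproduce the sweep outside the webui, here's a sketch of the same experiment with diffusers' img2img pipeline. The prompts and input filename are placeholders (plain diffusers also doesn't parse the webui's "(word:1.4)" emphasis syntax), and note that img2img only actually runs roughly strength × steps of denoising, which is why cranking the step count at a low denoising strength only compensates so much.

# Sketch: colorization via img2img, sweeping the denoising strength.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

bw = Image.open("bw_input.png").convert("RGB")   # the black-and-white source image

for strength in (0.2, 0.4, 0.6, 0.8):
    out = pipe(
        prompt="colored, full color photograph",
        negative_prompt="black and white, monochrome, greyscale",
        image=bw,
        strength=strength,            # the "denoising strength" knob
        num_inference_steps=40,       # only ~strength * 40 steps actually run
        guidance_scale=7.5,
    ).images[0]
    out.save(f"colorize_strength_{strength}.png")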

Let's fuck with the CFG scale next. Same prompts, constant denoising strength of 0.4.
CFG Test1.jpg
It's a lot more subtle but it is colorizing the image slightly more. Let's throw the lever.
CFG Test2.jpg
So it works, though the colors are a lot more subtle than when using a higher denoising strength. There's probably a happy medium somewhere between the two, or it might work better when specifying colors for things in the prompt, which I did not do (just "colored"). But is it possible?
Lolipop Hair.jpg
Definitely.
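
Same idea for the CFG test in code, with the setup repeated so the snippet stands on its own; again, model choice, prompts, and filenames are just placeholders.

# Sketch: hold denoising strength at 0.4 and sweep the guidance (CFG) scale.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

bw = Image.open("bw_input.png").convert("RGB")   # placeholder input filename

for cfg in (4, 7.5, 12, 20):
    out = pipe(
        prompt="colored, full color photograph",
        negative_prompt="black and white, monochrome, greyscale",
        image=bw,
        strength=0.4,                 # constant denoising strength
        guidance_scale=cfg,           # the CFG "lever"
        num_inference_steps=40,
    ).images[0]
    out.save(f"colorize_cfg_{cfg}.png")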
 