Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Can I update it with a git pull or do I have to download it directly?
I haven't tried a git pull, so I can't really say for certain. I'd just download the v1-5-pruned-emaonly.ckpt file directly or from the torrent; that should be all you need.
 
Can I update it with a git pull or do I have to download it directly?
I believe it's in new repos, so I doubt you'd have it configured. If you cloned the webUI with git, then git pull only updates the UI; you'll always have to source models yourself. Git pull every day anyway, because it's updated very frequently.

For S-D 1.5 (pruned EMA-only which is probably the one you want):
https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt (this'll ask you to log in/agree to terms but it's the file you want so the link'll work the second time.)
Alternatively: magnet link but huggingface's server was faster for me.

For the S-D 1.5 inpainting model:
https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt or magnet.

Edit: ok semi-redundant post (besides inpainting model links) but site's chugging so I couldn't see the reply chain.
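Since these checkpoints are multi-gigabyte downloads (whether from huggingface or the torrent), it's worth checking the file arrived intact before pointing webUI at it. A minimal sketch of streaming a SHA-256 over the file; the expected hash below is a placeholder, copy the real one from the file listing on the model's Hugging Face page:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks, so a multi-GB
    checkpoint never has to fit in RAM at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder -- substitute the real hash shown on the Hugging Face file page.
EXPECTED = "<sha256 from the model page>"
# print(sha256_of("v1-5-pruned-emaonly.ckpt") == EXPECTED)
```

A mismatch usually just means a truncated download, so re-fetch before blaming the model.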
 
Good to know; guess I'll delete 1.4 in a few days...
 
Good to know; guess I'll delete 1.4 in a few days...
Maybe don't. I just did, because all the comparisons I'm seeing are cherry-picked waifu shit, but then I re-ran a test I did on sdv14 yesterday and got dogshit. Possibly because I'd jury-rigged it with a stack of prompts to force a result I wanted, so maybe it'd be fine with more tailored prompts, but I'm probably gonna re-download 1.4 just to have on hand.

Yesterday's test on 1.4:
1a.jpg 2a.jpg 3a.jpg
Exact same settings/seeds on 1.5:
1b.jpg 2b.jpg 3b.jpg
 
Gotta also use the same seed.

I do like the Hentai Mars though. It's gross.
 
Gotta also use the same seed.
Exact same settings/seeds on 1.5

And I also ran the original batch job that produced those seeds again and got nothing but dirt.
This could quite possibly be an artefact of an improvement in prompt attention, and maybe the results would be better if I started from scratch; I'm just saying, maybe hang onto your 1.4 checkpoints for compatibility or comparison.

Edit: also, a tip: for easy comparisons, use X/Y plots with model checkpoints in the Y slot (important, because otherwise it'll reload models every gen if you put them in the X slot), using a comma-separated list of checkpoint filenames. X can be 'nothing' or whatever you like (a few CFG strengths is probably a good combo). Leave "use -1" unchecked and your plot will use the same seed even if you have a random -1 in the main seed field.
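The Y-slot advice is just loop order: the grid walks X in the inner loop, so a checkpoint on X would get swapped in for every cell, while on Y each model loads once per row. A sketch with made-up function names to show the shape of it:

```python
def xy_grid(x_values, y_values, load_model, generate):
    """Sketch of an X/Y plot loop: Y is the outer loop, X the inner one.
    Putting checkpoints on Y means each model loads once per row,
    not once per cell."""
    grid = []
    for y in y_values:                    # checkpoints: loaded len(y_values) times total
        model = load_model(y)
        grid.append([generate(model, x) for x in x_values])  # e.g. CFG strengths
    return grid

loads = []
grid = xy_grid(
    x_values=[5, 7, 9],                   # CFG strengths
    y_values=["v1-4.ckpt", "v1-5.ckpt"],  # model checkpoints
    load_model=lambda name: loads.append(name) or name,
    generate=lambda model, cfg: (model, cfg),
)
assert loads == ["v1-4.ckpt", "v1-5.ckpt"]  # each checkpoint loaded exactly once
```

With the axes swapped, the load list would be six entries long instead of two, and checkpoint loads are by far the slowest part.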
 
And I also ran the original batch job that produced those seeds again and got nothing but dirt.
This could quite possibly be an artefact of an improvement in prompt attention, and maybe the results would be better if I started from scratch; I'm just saying, maybe hang onto your 1.4 checkpoints for compatibility or comparison.
That is quite likely. Technology and algorithms usually only get better, not worse.
But I think it would be cool to keep all the large revisions of the data around, just because in a year or so it would be fun to compare the different releases and how much they have changed.
 
Technology and algorithms usually only get better, not worse.
It's just a model weights checkpoint so it's conceivable that they've tuned it for people since that's what's getting attention at the moment and neglected all the other cool shit 1.4 could do. This is plausible because 100% of the comparisons I've seen so far are people-focused, and it certainly seems slightly better for that. It's also possible that it's a pure improvement and this is just a case of one of the prompt terms I was using in an abstract sense having more robust specific training that overrides everything else, but we need more data to be sure--and I don't feel like doing any more testing of my own right now (my point was just that the old one maybe isn't obsolete).
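On "model weights checkpoint": the "pruned EMA-only" in the filename refers to the exponential moving average of the weights that training keeps alongside the raw weights; the EMA copy is smoother step-to-step and is the one you want for generating, and the pruned file drops the raw weights and optimizer state to save space. A minimal sketch of what one EMA update does:

```python
def ema_update(ema, weights, decay=0.9999):
    """One EMA step: the running average drifts slowly toward the
    current training weights, smoothing out step-to-step noise."""
    return [decay * e + (1.0 - decay) * w for e, w in zip(ema, weights)]

# Toy run with an aggressive decay so the drift is visible:
ema = [0.0, 0.0]
for step_weights in ([1.0, 2.0], [1.0, 2.0], [1.0, 2.0]):
    ema = ema_update(ema, step_weights, decay=0.5)
# After three steps the average has moved most of the way toward [1.0, 2.0].
```

In real training the decay is very close to 1 (e.g. 0.9999), so the EMA averages over thousands of steps.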
 
I don't think removing datasets is what you should do with this kind of technology; you'll need a lot of hard drive space to store this many images. I can see power users having 10+ terabytes of storage in the long run.
 
Has anyone had much success with outpainting to fill in the rest of a subject's body if it's cut off?

I also found the face restorer tool (there's an option to "Restore faces" in the img2img tab) to be useful if you want to scrub low quality faces of people and make them look nicer (though the AI sometimes changes the expression a tad). Use a very low denoising strength when you do. Compare:

before1.jpg after1.png

before2.jpg after2.png

Face restoration also recognises grain and removes it.

Lucasbaker.jpg 00153-3859059764-best quality,___.png
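The "very low denoising strength" advice works because img2img noises your input image and then runs only a fraction of the denoising schedule; in common implementations (e.g. diffusers' img2img pipeline) that fraction is the strength value. A sketch of the step arithmetic, with the function name being mine, not webUI's:

```python
def img2img_steps(strength, num_inference_steps):
    """How many denoising steps actually run in a typical img2img
    implementation: the input is noised up to level `strength`, then
    denoised from there. Low strength -> little added noise and few
    steps, so the output stays close to the input image."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return min(int(num_inference_steps * strength), num_inference_steps)

assert img2img_steps(0.2, 50) == 10   # gentle touch-up, good for face cleanup
assert img2img_steps(0.9, 50) == 45   # heavy rework; output can drift far from the input
```

That's why a restored face keeps the person recognisable at low strength but can change expression a tad: even a few steps redraw some detail.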
 
We're FUCKED lmao.
Being an artist is gonna be so hard now,
especially on the internet.

Maybe this is a weird wake-up call to differentiate your art style or mix it up. Also:
Don't be a dick to an artist you know, going "HAHA LOL AI IS BETTER!!11".
Cut ya local artist some slack. They need it :)

Depends on the artist; tracers, grifters and shit-tier artists deserve such treatment if they give you such treatment.

Time to put standards to work. This will cut down the profits of tracers and force what they fear most: streaming.

This forces the art community to evolve and puts the fear of AI into shit-tier artists.
 
I am kinda sick of generic anime art style no. 42069.
We'll see how this plays out.
Animation will mostly stick around, given the stress and teamwork it takes.
 
I don't think removing datasets is what you should do with this kind of technology; you'll need a lot of hard drive space to store this many images. I can see power users having 10+ terabytes of storage in the long run.
Several storage tubs full of smelly hard drives are getting dug out of the attic this instant

kek, chooms.png
 