Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Fwiw, if anyone else has a 30xx or 40xx series card:


Grab whatever version matches your system, open it, find /bin/ in the archive, extract all the DLLs there to /stable-diffusion-webui/venv/lib/site-packages/torch/lib/ replacing the ones that were there already.
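If you'd rather script the swap than drag files around in an archive manager, the idea is something like this (untested sketch; the extracted folder name is just a placeholder for wherever you unpacked the download, and it backs up the old DLLs so you can revert without reinstalling torch):

Python:
# Sketch only: assumes the archive was extracted to ./cudnn_extracted (placeholder
# name) and the webui is in its default folder. Backs up the originals first so
# reverting doesn't mean nuking torch and reinstalling via pip.
import shutil
from pathlib import Path

src = Path("cudnn_extracted/bin")  # the /bin/ folder from the archive
dst = Path("stable-diffusion-webui/venv/lib/site-packages/torch/lib")
backup = dst / "dll_backup"
backup.mkdir(exist_ok=True)

for dll in src.glob("*.dll"):
    target = dst / dll.name
    if target.exists():
        shutil.copy2(target, backup / dll.name)  # keep the old DLL around
    shutil.copy2(dll, target)
    print(f"replaced {dll.name}")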

Enjoy your absolutely absurd speedup. On a 4070 Ti I'm getting 1080p 70-step generations in 0:49 and standard 512x512 30-step in 0:02. For comparison, those times were around 1:35 and 0:06 before.

E: This seems to have issues with some very large custom merged models (e.g. KompromatDiffusion). YMMV. Reverting is just a matter of zapping /site-packages/torch and relaunching, or manually reinstalling it via pip.

ETA: Something else that's far more entertaining than it should be: if you just use a static wallpaper on your computer, throw the image into the deepbooru interrogator to get a prompt back for it (or this, which is trickier but more reliable), then feed it back into img2img with that prompt at batch size one and a high batch count to get a bunch of very similar but not identical images. Throw all the results into a folder and set it to slideshow shuffle every couple of hours.
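If you run the webui with --api, you can script the whole loop against the stock API routes. Rough sketch, not tested, and the denoising strength is a guess you'd tune yourself:

Python:
# Interrogate the wallpaper with deepbooru, then feed it back through img2img
# at batch size 1 with a high batch count. Assumes webui launched with --api.
import base64
from pathlib import Path

import requests

URL = "http://127.0.0.1:7860"
img_b64 = base64.b64encode(Path("wallpaper.png").read_bytes()).decode()

# get a deepbooru prompt back for the image
caption = requests.post(f"{URL}/sdapi/v1/interrogate",
                        json={"image": img_b64, "model": "deepdanbooru"}).json()["caption"]

# feed it back into img2img for similar-but-not-identical variants
result = requests.post(f"{URL}/sdapi/v1/img2img", json={
    "init_images": [img_b64],
    "prompt": caption,
    "batch_size": 1,
    "n_iter": 20,                # high batch count = lots of variants
    "denoising_strength": 0.4,   # low enough to stay close to the original
}).json()

out = Path("slideshow")
out.mkdir(exist_ok=True)
for i, img in enumerate(result["images"]):
    (out / f"variant_{i:02}.png").write_bytes(base64.b64decode(img))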
Any benefits for 2000 series?
 
I'm not sure if the 2000 series benefits from it or not; I just got tipped off by one of the Discords.

E: Whatever's going on with this, I don't like it, and I don't know if it's just my machine or the DLLs or what, since it's throwing exceptions again after a clean reinstall. I think it must be whatever commits got pulled today - either way, I'd suggest not messing with the DLLs or pulling changes until someone figures out what's going on.

Sucks because it really is a colossal speedup and I'd be over the moon if it applied to training too.
 
But it doesn't output proper Corporate Memphis. Not sure if it's the software I'm using that's fucking up. It doesn't maintain the solid, soulless forms.

Corridor Crew made an anime with Stable Diffusion

Anime ROCK, PAPER, SCISSORS

"Did we just change animation forever"
1677515096223.png
No. You didn't even make it easier.

Manual rotoscoping exists.
Digital rotoscoping exists.
Digital cel-shading conversion exists.
Anime cel shading exists.
BlenderToAnime exists.
Rig2Anime exists.
Phone filters exist.

I don't get why people go completely slackjawed by something that we've been doing since the fucking 70s. The only thing that changes is the ease of use.

Now, don't get me wrong. I'm not hating on the guys. I greatly appreciate their experimentation, but after browsing the stable diffusion board, I'm tired of seeing Video to Anime dancing shit that looks like my 2nd year animation project from 2004.
 
ANYWAY, let me stop being a contrary faggot and get back on topic.

My LoRA is fucked; I need to re-read the guide and not be a retard. I fucked up somewhere. Might as well learn Python while I'm at it.

I've been fucking around with the Corporate Memphis generator and I find that it doesn't produce anything noteworthy. By noteworthy, I mean something edgy - I'm being lazy and I hate drawing in that style, so why the fuck not have an AI generate it for me.
So! I've been tinkering around and found that merging it with other models and being precise allows it to work properly.

I'm trying to get gore/guillotines and other fun things, but that will probably require another merge on my part. The checkpoint editor is a real pain in the ass, but it's a bullet I've got to bite if I want to properly make some truly soulless shitposts.
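For anyone curious, the weighted-sum merge the checkpoint editor does boils down to interpolating the two state dicts. A bare-bones sketch of the idea (file names and alpha are just examples):

Python:
# Weighted-sum merge: out = (1 - alpha) * A + alpha * B over matching tensors.
import torch

alpha = 0.3  # how much of model B to mix in
a = torch.load("corporate_memphis.ckpt", map_location="cpu")["state_dict"]
b = torch.load("other_model.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, tensor in a.items():
    if key in b and tensor.is_floating_point() and b[key].shape == tensor.shape:
        merged[key] = (1 - alpha) * tensor + alpha * b[key]
    else:
        merged[key] = tensor  # keep base weights where the models don't line up

torch.save({"state_dict": merged}, "memphis_merge.ckpt")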

First, the control images. Like any good experiment, you need to make sure your prompt actually generates something REMOTELY similar to what you want.
Generated in standard 1.5 and Climate Change, a funky anime porn checkpoint from 4chan.

I used the same VAE on all the images, so a pox on me. I also found that orthographic view is a VERY powerful prompt in this quest.

2023-02-27-13-52-38-3-sd-v1-5-fp16.png
2023-02-27-13-50-24-3-ClimateChangeFixed_c.png
2023-02-27-13-52-51-4-sd-v1-5-fp16.png
2023-02-27-13-50-05-2-ClimateChangeFixed_c.png


Actual Corporate Memphis from Hugging Face

2023-02-27-13-42-15-4-Corporate_MemphisCor.png
2023-02-27-13-41-50-2-Corporate_MemphisCor.png




Modified Corporate Memphis

2023-02-27-13-40-30-3-Corporate_MemphisCor.png
2023-02-27-13-40-05-1-Corporate_MemphisCor.png
2023-02-27-13-40-18-2-Corporate_MemphisCor.png
 
Making images at higher resolutions really brings out the oddities and flaws in the outputs, though it's pretty fun to break out the image editor and fix stuff. I went back to an older image I liked for a test run, and the problem is I keep finding stuff to fix; at some point I should probably just stop and say it's good enough.
WinterFox take2.jpg
 
Found this on Twitter lately. Apparently, AI animation has massively improved. Last time I saw AI animate something it was awful, but now? Holy shit.


So I decided to find the person who made this, and this is his YouTube channel. I looked at his most recent video and...
(WARNING: FLASHING LIGHTS)

Did you notice that compared to his other works the clothes don't change anymore and how infrequently the face changes? It only took 3 months for the AI to improve, damn.
 
Found this on Twitter lately. Apparently, AI animation has massively improved. Last time I saw AI animate something it was awful, but now? Holy shit.
Did you notice that compared to his other works the clothes don't change anymore and how infrequently the face changes? It only took 3 months for the AI to improve, damn.
ControlNet is a very powerful tool, but people also figured out how to get this shit to rotoscope over actual video. I like how this is turning out as well. Hopefully this can get cooler before it starts getting cucked by government regulations instead of fags self-censoring like it's Reddit.

 
Found this on Twitter lately. Apparently, AI animation has massively improved. Last time I saw AI animate something it was awful, but now? Holy shit.
Did you notice that compared to his other works the clothes don't change anymore and how infrequent the face changes? It only took 3 months for the AI to improve, damn.
I haven't been following the animation attempts too closely, but my first thought is that it's less the software improving and more people figuring out how to use these new tools effectively and finding a good workflow to get results. Everything new takes time for people to learn, and when you've got a large number of people across different internet communities experimenting and sharing their results and how they accomplished them, those who put in the most time and effort can learn rather quickly.
 
So I'm still on Stable Diffusion 1.4, and for some reason my txt2img is pulling from a non-existent directory in my temp files. Anyone know what's going on? Maybe this is a sign I should switch to 1.5...
 
I'm realizing that I am absolutely shit at coming up with prompts. Half the time I end up with weird things with weird anatomy.
I wasn't very good at it either. If I get serious, I plan to check out sites like https://www.krea.ai/ and other compilations of those specific keywords certain models really like. Oh yeah, "trending on artstation" and "8k". Bullshit like that should not be needed in the long run, hopefully.

Making a crude drawing, collage, photo, or snapshot of a 3D model could help, for img2img. Why leave things to chance?

But allow me to rip off someone's prompt from Krea.ai, put it in Dezgo, and see how good it looks.

corgi puppy floating in outer space, hyperrealistic, very lifelike, adorable, highly detailed digital painting, illustrated by greg rutkowski, artgerm, moebius, simon stalenhag, trending on artstation, 8 k
Epic Diffusion 1.1, Sneed: 3823911248, with a default list of negative prompts:
corgi_puppy_floating_in_outer_space_hyperrealistic_very_lifelike_adorable_highly_detailed_digi...png
It's not floating in outer space, but it looks better than the one I stole from and I only generated one image. I've never heard of Epic Diffusion. Do nothing for a few months, the AI gets twice as good.
 
Oh yeah, "trending on artstation" and "8k". Bullshit like that should not be needed in the long run, hopefully.
i really don't know where prompts like that came from initially but they're basically copium imo. technically they do something but then so does every bit of text that you put into the prompt box, so the question is are they doing what you think they are? luckily there's an easy way to test this:
1.jpg
2.jpg

differences? yes. differences in quality? debatable, and since every prompt you use impacts the strength of the rest, is it really worth adding these?
if you're ever unsure of what a prompt is doing, try a prompt matrix or x/y/z plot to see the actual differences. unfortunately it doesn't look like that website you're using has options for these, but you could still do it manually by adding and removing prompts.
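you can also do the comparison against a local webui api if you're running one: same seed, same settings, only the suspect tags toggled, so any difference comes from the tags alone. quick sketch (not tested, endpoint is the stock --api route):

Python:
# manual prompt a/b test: fixed seed, identical settings, tags toggled
import base64

import requests

URL = "http://127.0.0.1:7860"
base = "corgi puppy floating in outer space, highly detailed digital painting"
tags = ", trending on artstation, 8 k"

for label, prompt in [("without", base), ("with", base + tags)]:
    r = requests.post(f"{URL}/sdapi/v1/txt2img", json={
        "prompt": prompt,
        "seed": 3823911248,   # fixed seed so only the prompt changes
        "steps": 30,
        "width": 512,
        "height": 512,
    }).json()
    with open(f"test_{label}_tags.png", "wb") as f:
        f.write(base64.b64decode(r["images"][0]))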
 
I'm realizing that I am absolutely shit at coming up with prompts. Half the time I end up with weird things with weird anatomy.
I think there's a list of negative prompts to avoid things like that. Stuff like negative prompts for "extra limbs" or "extra fingers" or "long neck" or whatever is happening that you want to stop. It has to be a prompt that it recognizes, though, so check out a list.
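If you're scripting it, the negative prompt is just another field in the txt2img payload (webui running with --api). The tag list here is only a typical example, not gospel:

Python:
# Sketch: negative prompts ride along in the same request as the prompt.
import requests

requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json={
    "prompt": "portrait of a woman, detailed face",
    "negative_prompt": "extra limbs, extra fingers, long neck, bad anatomy",
    "steps": 30,
})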
 