Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

RIP ARTCOWS

Maybe the comic artists will stay, but everyone else?

Now draw your OCs filling this out:
1666231187835.png
 
Holy shit.
It is fucking OVER for drawcels and digital-artcels.
Cannot wait to see the seethers this will bring out on Twitter in the coming years.

Imagine honing an extremely difficult skill for literally thousands of hours over your life, and then someone presses two buttons and blows anything from your life's work out of the water....
 
A little bit of a hint for some of the new people. No idea if it's been covered in the thread yet, but I saw this technique used on the Stable Diffusion subreddit and it's given me good results.

If you're using the webui version of Stable Diffusion, you can easily use the img2img tool to transform shitty MS Paint rough-draft templates into a proper piece of art. This way you can sort of "prime" the AI and give yourself a shortcut to what you want, instead of blindly rolling txt2img over and over until something suitable pops up. This saves a massive amount of time. I'll post an example below.

TEMPLATE:
template6.png
As you can see, not exactly a masterpiece, but all you really need to do is "set the scene" and the AI will take care of the rest. I recommend setting the size to 512 x 512 in Paint or whatever drawing software you use, as that's the size that works best for the AI. You can always upscale later.
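If you'd rather script this step than click through the webui, here's roughly what it looks like with the Hugging Face diffusers library. Just a sketch: the checkpoint name and every parameter value are my own assumptions, and note that the webui's ((word)) emphasis syntax is a webui feature that plain diffusers doesn't parse.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed checkpoint; swap in whatever model you actually run.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# 512 x 512 is the native training resolution for SD 1.x models.
template = Image.open("template6.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a blue dragon with green eyes and sharp teeth, flames, fantasy art",
    image=template,
    strength=0.75,           # denoising strength: how far from the template to stray
    guidance_scale=7.5,      # CFG: how strictly to follow the prompt
    num_inference_steps=30,
).images[0]
result.save("first_draft.png")
```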

FINAL RESULT:
download.png
Fairly absurd what it's done with that derpy fuckin' MS Paint abomination, eh? This is after about an hour of fucking with the prompt, denoising strength, and sampling method; overall I probably generated 200 iterations in total. Basically what you do is sit there and generate over and over. When you get a slightly better result, you click "send to img2img" and restart the process with it, refining the quality over time until you're satisfied. Be prepared to spend some time and effort fucking around with the settings though; it can get frustrating sometimes. I'll post a few of the iterations in the spoiler below to give you an idea of what the evolution looks like.
The very first generation
03464-3770749313-A blue dragon with green eyes and sharp teeth, ((flames)) shoot out of the dr...png
About 30 generations later
03493-1382446121-A blue dragon with green eyes and sharp teeth, ((flames)) shoot out of the dr...png
About 15 generations from the final result
03731-2868471878-flames coming out of a blue dragons mouth, fantasy, in the style of greg rutk...png
Quite the transformation. Hopefully this is useful information for a few of you, and hasn't already been explained better by somebody else earlier in the thread, cause I ain't going back to check lmao. Happy hunting!
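Scripted, that "send to img2img" loop is just feeding each output back in as the next init image. A rough sketch with diffusers, where the shrinking strength schedule is my own assumption, not anything the webui prescribes:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Feed each output back in as the next init image. Lowering the
# denoising strength each pass makes later passes polish details
# rather than repaint the whole scene.
image = Image.open("template6.png").convert("RGB").resize((512, 512))
for i, strength in enumerate([0.75, 0.6, 0.45, 0.3]):
    image = pipe(
        prompt="flames coming out of a blue dragon's mouth, fantasy",
        image=image,
        strength=strength,
        guidance_scale=7.5,
    ).images[0]
    image.save(f"iteration_{i}.png")  # keep every pass so you can roll back
```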
 
Link is in the OP, but it's this
https://rentry.org/voldy
And the master test is probably the Asuka image the guide uses as an example. Basically, if you enter what the writer did and get a highly similar output, it's been installed correctly. If you're getting reasonable outputs I'd say you're fine.
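Why that test works: for a fixed seed, sampler, and settings, generation is deterministic, so a correct install reproduces the guide's image pixel-for-pixel (or close to it). A sketch of the idea in diffusers; the prompt, seed, and settings below are placeholders, not the guide's actual values:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A fixed seed pins down the initial noise, so identical prompt, seed,
# step count, and sampler reproduce the same image on a correct install.
generator = torch.Generator("cuda").manual_seed(12345)  # placeholder, not the guide's seed
image = pipe(
    "placeholder prompt; the guide's exact prompt goes here",
    generator=generator,
    num_inference_steps=28,   # placeholder settings
    guidance_scale=12,
).images[0]
image.save("install_check.png")
```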
oh wow, it looks like I've got a long way to go. I should probably just start over.
 
I have been playing with this thing all day. Here's a tip I've noticed that can really come in handy:

If you want a specific image, you are better off sketching a rough version, feeding it to NovelAI through img2img, and going from there.

For example, I wanted to make a shark flying through the clouds. I sketched out this really rough drawing:

rough.png

Then I ran that through img2img with a bunch of keywords, generating 20 images, until I found one that looked halfway to what I wanted.

Then take the better images, refine the keywords a bit, go through 1,000 more images, and you can end up with something like this:
Shark in the sky.png
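That "20 images off one sketch" step is just a seed loop if you script it. A sketch assuming the diffusers img2img pipeline and a made-up prompt:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Roll 20 candidates off the same rough sketch, one seed each,
# then cherry-pick the best one and keep refining from there.
sketch = Image.open("rough.png").convert("RGB").resize((512, 512))
for seed in range(20):
    g = torch.Generator("cuda").manual_seed(seed)
    candidate = pipe(
        prompt="a shark flying through the clouds, blue sky",
        image=sketch,
        strength=0.8,   # high strength: the sketch is only a layout hint
        generator=g,
    ).images[0]
    candidate.save(f"shark_candidate_{seed:02d}.png")
```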
 
Haven't set this up myself because my GPU is shit, but I saw somebody posting results from the NovelAI prompt "Sneed's feed and seed" returning some sort of feed/seed shop. Apparently 1 in 20 results or so came back with this adorable creature, presumably what NovelAI thinks a "sneed" is.

What a time to be alive.
 

Attachments: sneed1.png, sneed2.png, sneed3.png, sneed4.png
If knives were used to stab people 99% of the time and used for making food only 1% of the time, I'm sure more people would argue for banning them.
And thats how you get the nanny state of England because some Muslims and niggers can't be trusted with knives without stabbing people ergo nobody can own knives or say mean words because it might cause said niggers to commit crimes.
P.S. Here is some cool art, which I am sure you will consider to be "CP", where I tested out different aspect ratios.
00350-1389197623-((masterpiece)), (((best quality))), dynamic lighting,  garden, panorama, per...png
00348-708208011-((masterpiece)), (((best quality))), dynamic lighting,  black and white, garde...png
And here is a pretty cool "CP" Mecha I made

00339-1114681348-(((masterpiece))), (((best quality))), ((ultra-detailed)), mech, flaming city...png
 

I'm seeing some really good results with img2img if you want to retouch images with a low denoising strength. I used the CBC unfiltered image of Keffals found in the OP:

1666238061978.png

Here's one with low denoising strength and the prompt chubby (((greasy))) man looking pensively, best quality, masterpiece:
00027-1053282428-chubby (((gre___.png
And here's the same original picture altered with high denoising strength and the prompt man looking pensively, best quality, masterpiece:
00026-4132343025-man looking p___.png
From the original we can also get a serious-looking Keffals:
1666238341861.png

Skin looks way better on the AI-generated images.
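The knob doing the work in that comparison is denoising strength: near 0 the output barely deviates from the input photo, near 1 it repaints almost from scratch. A rough diffusers sketch of that side-by-side; the seed is lifted from the filename above, everything else is an assumption:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Same photo, same seed, two denoising strengths. Low strength only
# retouches (e.g. skin cleanup); high strength repaints the subject.
photo = Image.open("source_photo.png").convert("RGB").resize((512, 512))
for strength in (0.3, 0.75):
    g = torch.Generator("cuda").manual_seed(1053282428)
    out = pipe(
        prompt="man looking pensively, best quality, masterpiece",
        image=photo,
        strength=strength,  # fraction of the diffusion schedule actually run
        generator=g,
    ).images[0]
    out.save(f"retouch_{strength}.png")
```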
 
I don't know what I'm doing wrong; everything I try to make comes out looking like some horrific Tzimisce flesh creation.
It could be the prompts, mate. Try this: you can enter what you want and it'll translate it into something more understandable for the AI. Pretend the AI is completely clueless and you've got to spell out every itty-bitty detail.
I abuse the fuck out of it.
Edit: Don't be afraid to fuck around with the sampling method, steps, denoising, yadda yadda.
 
I'm gonna say fuck it, don't try people at all unless you're inpainting.
It can do well with people, just make sure you have a stack of negative prompts like: bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, extra limb, etc. Add or ((emphasise)) more for specific problems you encounter.

If it's a more complex scene then different samplers might be needed; DPM adaptive is really good at sorting out anatomy, for example. It's slow, but it only needs 10-20 steps to do good work. I've had good luck with DPM2 Karras for certain styles as well; it's faster but requires more steps than Euler/Euler a to get results. Still, stick to Euler a unless you really need the big guns: it's almost the fastest and pretty much the second best for 90% of shit.
Careful structuring of prompts is important too. Earlier terms get more emphasis, and once you go past 75 tokens everything after gets appended with an AND, so the model can get confused. You may also need to play with CFG levels; I use a lot of X/Y plots to find sweet spots before doing batch jobs.
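For anyone scripting outside the webui: the negative-prompt stack and a CFG sweep look roughly like this in diffusers. A sketch only; the prompt, seed, and model are assumptions, and the webui's sampler names map to different scheduler classes in diffusers (e.g. Euler a is EulerAncestralDiscreteScheduler). The webui's X/Y plot just automates this kind of grid.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

negative = ("bad anatomy, bad hands, error, missing fingers, "
            "extra digit, fewer digits, extra limb")

# One axis of an X/Y-plot-style sweep: fixed seed, varying CFG,
# to find the sweet spot before committing to a big batch job.
for cfg in (5, 7, 9, 11):
    g = torch.Generator("cuda").manual_seed(42)  # placeholder seed
    img = pipe(
        "portrait of a knight, detailed",        # placeholder prompt
        negative_prompt=negative,
        guidance_scale=cfg,
        generator=g,
    ).images[0]
    img.save(f"cfg_{cfg}.png")
```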
 