Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Like stated above, Img2Img is where you're going to want to focus your efforts. I can all but guarantee there were zero images of people riding T-Rexes in the training data of whatever model you're using, so good luck getting a person in a reasonable place on the dinosaur with text prompts alone. If you already have a rough image to work from, that's a much better starting point.
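For what it's worth, the reason Img2Img preserves your rough sketch is the denoising strength: it controls how much of the init image survives. A minimal sketch of the arithmetic, assuming a diffusers-style pipeline (the function name is mine, not from any library):

```python
def img2img_schedule(num_inference_steps: int, strength: float) -> tuple[int, int]:
    """Diffusers-style img2img: the init image is noised up to a timestep
    proportional to `strength`, then only the remaining denoising steps run.
    Returns (first_step_index, steps_actually_run)."""
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return t_start, num_inference_steps - t_start

# At strength 0.75 with 50 steps, only 37 denoising steps actually run,
# so the composition (person sitting on dinosaur) is largely preserved.
```

Lower strength keeps more of your sketch; higher strength lets the model reinterpret it.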
Just use Dino Riders 😁
[Attached: Dino Riders screenshots]
 
It just occurred to me how the "creatives" can still be relevant in the new AI art renaissance - simply "commission" them to make all the concepts that fall between the cracks, like kiwis and people riding dinosaurs and so on. Tell them to go nuts, the more elaborate and fanciful the better; then use that as the model for future AI pieces.
 
Most of the people complaining can't do that because they only know how to do one thing in exactly the same style every time.
 
I don't get what to do exactly after getting the git hub program (let alone what options I should or shouldn't have chosen prior to installation). Where exactly is the program and where and when do I get to set the parameters I want (or don't want)?
 
Assuming you've followed the guide in the OP and installed all the prerequisites along with a model, you start the program by opening "webui.bat". Then you wait for first-time setup to complete, which will take somewhere between 5 and 20 minutes depending on your system (you do meet the system requirements, right?). You'll get a console window not unlike this:
[Screenshot: console window]
When you see the "Running on local URL" line, it's ready. If you fail to get that far or get an error message that breaks the process, follow the instruction manual and complete whatever step you skipped.

Go to your preferred web browser and enter the address shown, or just "localhost:7860". You'll see this:
[Screenshot: web UI]
At that point you're good to go. Top box is prompts, bottom box is negative prompts and the rest is relatively self-explanatory if you've read anything at all about Stable Diffusion.
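Once the UI is up, you can also drive it from a script: recent AUTOMATIC1111 builds expose a JSON API on the same port when launched with the `--api` flag (endpoint and field names below match recent versions, but verify against your install's /docs page). A hedged sketch:

```python
import json
import urllib.request

def txt2img_payload(prompt: str, negative: str = "", steps: int = 20,
                    cfg_scale: float = 7.0, width: int = 512,
                    height: int = 512) -> dict:
    """Build the JSON body for the webui's /sdapi/v1/txt2img endpoint
    (field names as of recent versions; check /docs on your install)."""
    return {"prompt": prompt, "negative_prompt": negative, "steps": steps,
            "cfg_scale": cfg_scale, "width": width, "height": height}

def generate(payload: dict, base_url: str = "http://localhost:7860"):
    # Requires the webui to be running with --api; returns base64 images.
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["images"]
```

Handy once you start wanting to queue up big batches overnight instead of clicking Generate.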
 
To echo @Catgirls are Love, make sure you're following this guide and don't skip any steps. The only hiccup I encountered was ensuring I had Python 3.10.6 installed, since some of the associated libraries haven't been updated to work on more recent releases, so double-check you have the correct version on your system.
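The version pin is easy to check before you start; a tiny sketch (the helper name is mine, not from the webui):

```python
import sys

def webui_python_ok(version_info=None, required=(3, 10)) -> bool:
    """The install guide pins Python 3.10.x; newer minor versions can
    break the pinned libraries, so compare major.minor exactly."""
    vi = sys.version_info if version_info is None else version_info
    return tuple(vi[:2]) == required

# webui_python_ok((3, 10, 6)) -> True; webui_python_ok((3, 11, 0)) -> False
```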

Waifu Diffusion v1.4 came out today, tried using it a bit but I seem to be getting less than stellar results with it in automatic's UI.

Anyone else doing alright with it?
From the playing around I did, it still works well for me and turns out what I expect. Especially nice is how consistent the faces are now, particularly if you use Euler A and lower the CFG to 7-11. Part of it might just be me treating it as a numbers game and going whole hog on 8-image batches when I find a prompt & parameter combination I like, though:
[Attached: example batch output]
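The "numbers game" approach is easy to systematize: fix the prompt, sweep seed and CFG, and keep whatever looks good. A sketch of generating the parameter grid (the names here are mine, not a webui feature - the webui's own X/Y plot script does something similar):

```python
from itertools import product

def param_grid(seeds, cfg_scales, steps=28):
    """Cartesian product of seeds x CFG scales, as kwargs dicts you could
    feed to whatever generation call you use."""
    return [{"seed": s, "cfg_scale": c, "steps": steps}
            for s, c in product(seeds, cfg_scales)]

grid = param_grid(seeds=range(1000, 1004), cfg_scales=[7, 9, 11])
# 4 seeds x 3 CFG values = 12 jobs, roughly the "8-image batch" spirit
```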
 
That did improve things a bit, but not dramatically.

Might it possibly have something to do with the fact I've exclusively used Automatic with Novel AI, and only just switched to a different model for the first time?
 
It's only the first epoch, so I'd expect it to improve. It is far better at actually following prompts and should be a good base for Dreambooths if you have the VRAM (13.5 GiB as of now). Overall I think it was a bit overhyped, and AnythingV3 clearly outshines it.
 
Prompts that work well for NovelAI don't necessarily play nice with other models, so some experimentation may be required. For example, "catgirl" is generally handled well by NovelAI, but other models go more furry, blend a girl with a cat, or have a girl holding a cat, so "cat ears" is best used instead.
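If you juggle several models, it helps to keep those per-model prompt substitutions in one place. A trivial sketch (the table entries just restate the advice above; nothing here comes from any library):

```python
# Per-model token swaps; "catgirl" -> "cat ears, girl" mirrors the
# NovelAI-vs-everything-else behavior described above.
PROMPT_FIXUPS = {
    "waifu-diffusion": {"catgirl": "cat ears, girl"},
    "anythingv3": {"catgirl": "cat ears, girl"},
}

def adapt_prompt(prompt: str, model: str) -> str:
    """Apply the substitution table for a model; unknown models pass through."""
    for original, replacement in PROMPT_FIXUPS.get(model, {}).items():
        prompt = prompt.replace(original, replacement)
    return prompt
```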

Also, keep in mind NovelAI's model uses a different Clip Skip setting than most models, so double-check that it reset when you changed models. Hopefully I'll have time to play with the new Waifu model tonight, but I also just downloaded several other models I want to try... where does all of my spare time go.
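On Clip Skip: the setting just picks which of the CLIP text encoder's hidden layers feeds the model, and NovelAI-derived checkpoints conventionally expect the penultimate layer (Clip Skip 2). A minimal sketch of the indexing, assuming a list of per-layer hidden states (the strings are stand-ins for tensors):

```python
def pick_clip_layer(hidden_states, clip_skip: int = 1):
    """Clip Skip N = take the hidden state N layers from the end of the
    CLIP text encoder (N=1 is the usual final layer; NovelAI-derived
    models expect N=2, the penultimate layer)."""
    return hidden_states[-clip_skip]

layers = ["h1", "h2", "h3", "h4"]  # stand-ins for per-layer tensors
# pick_clip_layer(layers, 2) selects "h3", the penultimate layer
```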
 
Didn't see this linked yet, figured it at least warranted a post because it sparked a lot of discussion in the comments (and as I type this, a ratio of 9k likes to 4k dislikes):
Oh yeah, I've seen this linked in the DefendingAIArt subreddit. Lmao at the dislikes, people don't wanna hear the truth because it means they're both wrong and they have to admit they feel threatened by it. It's just so pathetic at this point.

Another based /g/ post about AI art vs Modern Art:
[Attached: /g/ screenshot]
 