Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

[Attached: stuff (3).png, stuff (2).png, stuff (4).png, stuff (5).png]
Had to take a break from Stable Diffusion for a month, partly from jumping between ElevenAI and other TTS models; I come back and all my old training methods are basically obsolete lmao.

On the upside, the layout for webui is looking a lot cleaner, and it seems that Dreambooth is a lot more versatile in its training capabilities. Really looking forward to seeing the improvements.

Edit: seems to be working again
[Attached: E0DFB17E-591A-499E-9242-D75DC25FFD55.png]
 
One thing I have noticed is that the newer models being talked up as high quality are total dogshit at depth perception.

You know how in some movies you can spot the seam between the set's floor and the green screen behind it? That's the weird shit newer models produce, except everything on the "green screen" side looks extremely flat. IMO gold was struck with Stable Diffusion 1.4/1.5 and the early derivatives of those two. I guess the newer ones are high quality if all you want to generate is hentai girls lying on a bed, or a girl posed against some other flat surface.
[Attached: efgn.png]

I have also gotten into ComfyUI. It is indeed comfy once you figure out how everything works. What I want to figure out next is a multiple-subject workflow. What I would like to make is a Warhammer epic: the lower portion would be Ultramarines killing furries, while the upper portion would be a space battle with lots of lasers.
 
Further refining things. I'd love to work on this piece some more, but I forgot to note the prompt.
[Attached: 00147-690270270.png]

I have also gotten into ComfyUI. It is indeed comfy once you figure out how everything works. What I want to figure out next is a multiple-subject workflow. What I would like to make is a Warhammer epic: the lower portion would be Ultramarines killing furries, while the upper portion would be a space battle with lots of lasers.
Thought: generate a pile of dead furries on one layer, one or more furries running towards the center of the image on another, and then the Matt Ward OC on a final layer. I know Photoshop has subject selection, which could potentially make things easier, and then you can do multiple passes of img2img at low denoising strength to clean things up, as sketched below.
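For the cleanup passes, something like this diffusers sketch is the general idea; the base model, prompt, and file names here are all placeholder assumptions, and the same settings work just as well from webui or ComfyUI:

```python
# Rough sketch of a low-denoise img2img cleanup pass using the diffusers
# library. The model ID, prompt, and file names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The layered composite exported from Photoshop.
init = Image.open("composite.png").convert("RGB").resize((768, 512))

# Low strength preserves the composition and mostly just blends the seams;
# run several passes, nudging strength up or down as needed.
result = pipe(
    prompt="warhammer 40k battle, space marines, epic, highly detailed",
    image=init,
    strength=0.25,  # roughly webui's low denoising strength
    guidance_scale=7.0,
).images[0]
result.save("composite_pass1.png")
```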
 
forgot to note the prompt
Photo of cgyl person, beautiful, young, teenager, forest background
Negative prompt: Out of frame, out of focus, blurry
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 690270270, Size: 512x512, Model hash: bda1755d2a, Model: Quintonhoover_Quintonhoover_48900

It's stored in the PNG metadata by default.
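If you ever want to pull those settings back out of an image programmatically, PIL can read them; a minimal sketch, assuming the file came from the A1111 webui (which writes a "parameters" text chunk into the PNG):

```python
# Minimal sketch: read the generation settings that the A1111 webui
# embeds in a PNG's "parameters" text chunk.
from PIL import Image

im = Image.open("00147-690270270.png")
params = im.info.get("parameters")  # None if the metadata was stripped
if params:
    print(params)  # prompt, negative prompt, steps, sampler, seed, etc.
else:
    print("No parameters found; the image was probably re-saved or stripped.")
```

Keep in mind that many image hosts strip metadata on upload, so the original file on disk is the safe copy.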
 
So I've got a question, kiwibros.

For those of you who've been using Dreambooth, what base models have you been using for your training? For the past five or so months I've been almost exclusively using SD 1.5, but I've hit two major pitfalls:
  • Non-anime cartoon styles have extremely hit-or-miss results. Most artist models I've trained for shit like MTG/Pokemon artists have turned out decent, but if I want to build a model off of 2008 Penny Arcade, Foster's Home, or He-Man the results turn out cursed as fuck.
  • Nude imagery gets incredibly fucky when using SD-derived models. Since one of my models is used for generating the line art for watercolors I paint traditionally, I'd like it to generate more natural figures so I don't have to roll the dice trying to fix a character's hips.
I know WaifuDiffusion and NovelAI are both options others go with, but since none of the styles I experiment with are anime-adjacent I'm worried the results will make my issues even worse.
 
Last edited:
I want to get into trying my hand at some of this stuff. How much of a nightmare will it be given I have an AMD GPU? I've done a little playing around and found a website with a guide for using Stable Diffusion with AMD. I followed the instructions and was able to run it and get some results, though nothing like what people are producing here. The instructions were quite specific to one particular model and a little old. Just to give me a general impression, what is the state of things doing this on AMD?

And why the hell isn't AMD, the company, doing more to facilitate this on their chips? It seems like a major failing on their part given the way this is taking off.
 
I want to get into trying my hand at some of this stuff. How much of a nightmare will it be given I have an AMD GPU? I've done a little playing around and found a website with a guide for using Stable Diffusion with AMD. I followed the instructions and was able to run it and get some results, though nothing like what people are producing here. The instructions were quite specific to one particular model and a little old. Just to give me a general impression, what is the state of things doing this on AMD?

And why the hell isn't AMD, the company, doing more to facilitate this on their chips? It seems like a major failing on their part given the way this is taking off.
IIRC Stable Diffusion makes use of CUDA, a parallel computing platform and API created and maintained by NVIDIA. I know some sort of translation layer/code conversion process exists for AMD cards, but I have absolutely zero knowledge of how it performs.

It's roughly like running Windows software on a Mac: you can, provided you've taken the necessary extra steps, but it won't be as efficient as just running the program on a Windows machine. I would assume someone somewhere is working on versions specifically designed for AMD hardware, but I've never gone looking for it since I've got an NVIDIA GPU.
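For what it's worth, that conversion layer is most likely AMD's ROCm/HIP stack; on the Linux ROCm builds of PyTorch, an AMD card actually shows up through the same torch.cuda API. A minimal way to check what your install can see:

```python
# Minimal check of which GPU backend a PyTorch install can see.
# On ROCm builds (AMD), the GPU is exposed through the torch.cuda API
# and torch.version.hip is set instead of torch.version.cuda.
import torch

if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"{backend} device: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU backend visible; generation would fall back to CPU.")
```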
 
IIRC Stable Diffusion makes use of CUDA, a parallel computing platform and API created and maintained by NVIDIA. I know some sort of translation layer/code conversion process exists for AMD cards, but I have absolutely zero knowledge of how it performs.

It's roughly like running Windows software on a Mac: you can, provided you've taken the necessary extra steps, but it won't be as efficient as just running the program on a Windows machine. I would assume someone somewhere is working on versions specifically designed for AMD hardware, but I've never gone looking for it since I've got an NVIDIA GPU.
There is at least one way of converting; as I mentioned in my post, I have done it. But it wasn't a universal solution, just a mildly complex sequence of instructions for a specific model. I'm going to go back and see if I can figure out how to use the instructions in a more general way, but I was trying to get a feel for whether a reasonable proportion of people use AMD cards and how they do it, or whether I'm in some weird niche dead end.

It's starting to look like buying an AMD card was a fundamental mistake. It's a very powerful card (20 GB VRAM, current gen), so I'm less worried about some performance hit than about my own time figuring out how the fuck to make it work, if I even can.
 
There is at least one way of converting; as I mentioned in my post, I have done it. But it wasn't a universal solution, just a mildly complex sequence of instructions for a specific model. I'm going to go back and see if I can figure out how to use the instructions in a more general way, but I was trying to get a feel for whether a reasonable proportion of people use AMD cards and how they do it, or whether I'm in some weird niche dead end.

It's starting to look like buying an AMD card was a fundamental mistake. It's a very powerful card (20 GB VRAM, current gen), so I'm less worried about some performance hit than about my own time figuring out how the fuck to make it work, if I even can.
So I feel like there are three main reasons why AMD has been getting fucked in the whole AI race:
  1. When it comes to professional hardware, AMD has been focusing on CPU architecture for the past five or so years. If you need enterprise CPUs, AMD is still the way to go; just don't expect their GPUs to be comparable to Tesla and Quadro cards.
  2. There's close to zero hardware on AMD GPUs dedicated to AI acceleration, and AMD's compute stack (ROCm/HIP) is nowhere near as mature or as widely supported as Nvidia's CUDA.
  3. Simply put, Nvidia just has the greater market share. If you're an AI lab looking to get your tech picked up by more users, you're going to prioritize the more popular setups. It's the same reason most PC games prioritize Windows over Linux and macOS: the time and money required to support secondary platforms just isn't worth it.
It genuinely sucks seeing AMD get shafted in the AI race, especially since I have fond memories of using their cards in my old mining rig and render machine back in the day, but they shot themselves in the foot by not jumping in sooner. I mean shit, even Apple beat them to the punch, given they've been shipping AI hardware acceleration since the A11 chip in 2017.
 
It's starting to look like buying an AMD card was a fundamental mistake. It's a very powerful card (20 GB VRAM, current gen), so I'm less worried about some performance hit than about my own time figuring out how the fuck to make it work, if I even can.
I'd say that if you really want to get into this stuff and spend a lot of time playing or messing around with it, then an AMD GPU was probably not the best choice you could have made (both for speed and for stability/just getting it working). I know I'll soon be stalking the used GPU market for something with a little more oomph than my card from 2016.
 
That’s starting to look better. Still a bit too much anime for my liking, though…
[Attached: B47095A7-432E-4DA0-B1C9-299B50101F81.png]


I'd say that if you really want to get into this stuff and spend a lot of time playing or messing around with it, then an AMD GPU was probably not the best choice you could have made (both for speed and for stability/just getting it working). I know I'll soon be stalking the used GPU market for something with a little more oomph than my card from 2016.
100% agree on getting a used Nvidia GPU. I'm planning on using my tax return to snag a secondhand V100 for some more high-end oomph.
 
Can I use this to make realistic images like the one of Trump getting arrested? I see Midjourney costs shekels now and also requires >discord, so fudge that.
 