I am going to talk about DreamBooth, using @Starved Artist's example of women's pantsuits, because it's a useful tool if you need a single thing done particularly well.
Here is a comprehensive walkthrough of how to use DreamBooth
This is BIRME, a tool for bulk cropping images to 512x512
A list of all artists Stable Diffusion recognizes (relevant because I'm going to talk about teaching your AI new styles a little at the end of this)
A place for discussing prompts
A Hugging Face project that will randomly spruce up your prompt with extra modifiers
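If you'd rather script the 512x512 cropping yourself instead of using a bulk tool, here's a minimal sketch of the resize-then-center-crop math. The function name and the Pillow usage in the comments are my own illustration, not part of any tool linked above:

```python
def resize_then_crop_box(width, height, size=512):
    """Compute dimensions for a centered size x size crop.

    Scales the image so its shorter side equals `size`, then
    returns the new dimensions plus the (left, top, right, bottom)
    crop box for the center square.
    """
    scale = size / min(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    left = (new_w - size) // 2
    top = (new_h - size) // 2
    return (new_w, new_h), (left, top, left + size, top + size)

# With Pillow installed, the result plugs straight into resize/crop:
#   img = Image.open(path)
#   dims, box = resize_then_crop_box(*img.size)
#   img.resize(dims).crop(box).save(out_path)
```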
Okay, those are all the cool links. So, we started off training NovelAI on DreamBooth. We're no longer doing that because of things I'm about to discuss, but I also feel DreamBooth has its uses.
Basically, you gather up all your images and upload them to the cloud. They have to be organized into folders, and those folder names become the prompts attached to the images inside them. So if you want women's pantsuits, you'd name the folder 'Women's Pantsuits', and after the training finished you'd have a module that could render baller women's pantsuits. I say a module that does one thing because there are problems inherent in trying to teach it a bunch of things at once. To make this easier, I'm going to explain what it does poorly in a 'cons' section and what it does well in a 'pros' section, and beyond that, good luck.
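The folder-name-as-prompt layout is easy to script before you upload. Here's a minimal sketch using only the standard library; the function name and paths are my own, hypothetical choices:

```python
import shutil
from pathlib import Path

def stage_concept(image_paths, concept, root="dreambooth_upload"):
    """Copy images into a folder named after the concept.

    The folder name becomes the prompt attached to every image
    inside it, so name it exactly what you want to type later,
    e.g. "Women's Pantsuits".
    """
    dest = Path(root) / concept
    dest.mkdir(parents=True, exist_ok=True)
    for p in image_paths:
        shutil.copy2(p, dest / Path(p).name)
    return dest
```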
Pros:
With a decent collection of images and adequate time to train on them it will in fact create a module that is really good at what it's been trained to do. It's great if you want to add something extremely specific like putting your face on other people's bodies, or getting the AI to understand what women's pants suits are.
Cons:
First, it isn't practical to train it on multiple things at once unless they're related. For instance, you might want to remind it what a woman is while you're training it on women's pantsuits; that would probably work okay. But if you try to teach it a bunch of wildly different things, their prompts all wind up with the same weight and the results bleed together. For instance, if I try to teach it 'Jesus' and 'NASCAR', it would probably throw out something like Jesus driving a car. And it will do this fairly often, so the bleed can work against you as well as in your favor.
Training multiple subjects with large batches of images is similarly unreasonable with DreamBooth because the service is shared among many users. Paying customers get first dibs on the training resources, and if too many people are trying to connect, your training session can be interrupted. If you time out, the module is unusable. DreamBooth takes a really long time to train (think 12 hours on average) and timing out is a regular thing, so it can get frustrating. Small batches run slightly faster, which is why I really do recommend it for small batches of images devoted to one specific thing. Then just merge the resulting module into your main model.
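"Merge the module" generally means taking a weighted average of the two checkpoints' parameters. Here's a toy sketch of that idea over plain floats; real checkpoints hold tensors and the merge is done with SD tooling, and `merge_weights`/`alpha` are my own illustrative names:

```python
def merge_weights(base, module, alpha=0.3):
    """Blend a trained module's parameters into a base checkpoint.

    alpha is the share given to the new module's weights;
    parameters the module doesn't touch stay at the base value.
    """
    return {
        name: (1 - alpha) * w + alpha * module.get(name, w)
        for name, w in base.items()
    }
```

A small alpha keeps most of the base model's general knowledge while pulling in the one thing the module learned, which is why the one-module-per-concept approach works.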
What we're now running is Stable Diffusion with the training module on the same machine. It's considerably faster and more versatile, but I don't speak Python so I can only pester Mr. Birds until he comes and explains it. I will say that teaching it a style requires big batches of images and takes a while, but it does in fact learn.
Here are some 1925 Packards rendered in Tracy J. Butler's style from my last post. One of the reasons I picked her is that she's not on the list of known artists, as far as I know. Once we've tried it, I'll be back later to talk about what happens if you try to train further on an artist it already knows; we're going to use Boris Vallejo. Currently training on Dorian Cleavenger (who I also believe is missing from the known-artist list).
Training where you find a blank spot is really fun because you can see the results of the training more easily. This seems to also be true of DreamBooth.
