> I've been playing around with NovelAI's hosted version because I had some free credits, and I have to say I'm not that impressed. It seems to struggle a lot with anything that hasn't (I'm guessing) previously appeared in one of the Chinese cartoons it was trained on. Anything involving an action, or requiring multiple objects related to each other in some way, seems very difficult. For example, to get X in Y, or for Z to put X in Y. Is this a case of prompt magic or what? Training the model specifically on pre-existing images like the one you want would presumably do much better, of course.

Prompt magic, but that particular thing is notoriously challenging because it's not something captured prepositionally by tags in the danbooru dataset.
It's not something I'd waste online credits on. If you have the leak and webUI (naifu has been left in the dust by this point), then it's possible if you want to spend time exploring prompt tricks with big-ass grids and some of the slower samplers, but more practically I'd just use inpainting for this. Gen something more vanilla, then mask the needful for your sick fetish shit.
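For the inpainting route, webUI lets you paint the mask by hand, but if you're scripting it, the mask is just a grayscale image where white marks the region to regenerate and black is kept as-is. A minimal sketch with Pillow; the 512x512 size and the box coordinates are made-up placeholders, not anything from the actual workflow:

```python
from PIL import Image, ImageDraw

def make_inpaint_mask(size, box):
    """Build an inpainting mask: white = regenerate, black = keep.

    size -- (width, height) of the base image
    box  -- (left, top, right, bottom) region to repaint
    """
    mask = Image.new("L", size, 0)                 # start all-black: keep everything
    ImageDraw.Draw(mask).rectangle(box, fill=255)  # white out the target region
    return mask

# e.g. repaint a patch in the middle of a hypothetical 512x512 gen
mask = make_inpaint_mask((512, 512), (180, 200, 330, 380))
mask.save("mask.png")  # feed this alongside the base image to the inpainting pass
```

Same idea if you drive it through the webUI API or a diffusers inpainting pipeline: base image plus this mask, and only the white region gets regenerated from your prompt.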