Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Disney and Pixar Present:
Cool Cat Saves The Kids
coming to theaters and IMAX, and on Disney+, spring 2023

(attached: coolcat.jpg, coolcat2.jpg)
 
It really was bound to happen eventually anyway. Maybe it's best to get it over with in the pioneer days. The tears will be delicious.

The place where I realized AI has evolved past the novelty of things like deepdream was when a) AI services for normal people sprang up that actually managed to survive on their prices and b) these AI haters appeared. As much as these artists belittle AI art, they wouldn't be so malding if they didn't feel threatened by its quality. Where were they during earlier (and significantly less impressive) iterations of the technology? Exactly.

E: Reading that lawsuit, it's incredibly clear they have absolutely no idea how SD works. Grifters gonna grift.
Even if all the lawsuits and lobbying succeed, they will mostly affect the venture capital people and censored models. Open source versions will continue to develop and spread around.
 
Can someone feed a prompt from this post into AI? Specifically:
orange-brown-black space occupied by a giant inflated kiwi-bird with ruffled, long tassel-like feathers and a long curved slim black beak, and the beak curved off into infinity. The space was like a corridor, with undulating grasses at the top and bottom, and the kiwi somehow inside it, but the sides were open and the corridor stretched off into infinity and I moved toward the kiwi and it felt beautiful, amazing, with loud buzzing, ringing sounds accompanying the journey.
 
Today I made the mistake of updating the web UI, with the newer versions completely reworking the hires feature. Rather than revert, I've been messing around with it to try to get it working as desired (like pre-update). I've managed to get it producing coherent images again, but I'm running into a new issue that wasn't present in the old build; every upscaler I can use either is shit and produces low quality/blurry outputs, or very noticeably reduces the contrast in the output image, resulting in the colors being comparatively washed out. This is something I noticed when upscaling under Extras previously and used Photoshop/GIMP to bump the contrast back up, but now it's doing so every time I generate anything hires, which is kind of annoying. Currently downloading LDSR to test as an upscaler to see if that helps at all, but has anyone else updated and encountered this yet, and if so, any ideas or solutions?

Edit: Turns out the issue was related to VAEs; the new version changed where it looks for VAEs, so the VAE folder I'd been dumping everything into was no longer the place it checked. So if you update, double check and make sure it's still properly loading VAEs.
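If anyone wants a quick way to sanity-check this after updating, here's a small Python sketch that just lists what's sitting in a VAE folder so you can compare against what the UI dropdown shows. The models/VAE path is the stock webui layout; adjust if your install differs, and treat this as a rough helper rather than anything official:

```python
from pathlib import Path

# File extensions VAEs typically ship with.
VAE_EXTENSIONS = {".pt", ".ckpt", ".safetensors"}

def find_vaes(vae_dir="models/VAE"):
    """List VAE files in the given folder (stock webui layout assumed)."""
    root = Path(vae_dir)
    if not root.is_dir():
        return []  # folder missing entirely -- a likely sign of the bug
    return sorted(p.name for p in root.iterdir()
                  if p.suffix.lower() in VAE_EXTENSIONS)
```

If this lists your VAEs but the UI doesn't, the UI is probably looking somewhere else after the update.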
 
So I wanted to test if the AI knew the Mona Lisa, and sure enough I got a nice example (with an anime model)

(attached example image)


I also wanted her to hold a kiwi fruit, but that didn't work at all. Do you have a suggestion for how to do that? The prompt I used was simply "Mona Lisa holding a kiwi fruit" but I just got another portrait. Maybe it was the model I used.
 
Assuming the model you're using knows what a kiwi fruit is, try increasing the weight on the kiwi fruit prompt, like (kiwi fruit:1.4), where values over 1 increase the weight and values under 1 decrease it.
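For anyone curious how that (text:weight) syntax actually gets read, here's a rough Python sketch of a parser for just the explicit (text:1.4) form. The real webui parser also handles bare (...) / [...] emphasis and nesting, so this is an illustration of the idea, not the actual implementation:

```python
import re

# Matches only the explicit (text:weight) form, e.g. (kiwi fruit:1.4).
TOKEN_RE = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    """Split a prompt into (text, weight) pairs; plain spans default to 1.0."""
    parts, pos = [], 0
    for m in TOKEN_RE.finditer(prompt):
        plain = prompt[pos:m.start()].strip(", ")
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(", ")
    if tail:
        parts.append((tail, 1.0))
    return parts
```

So "masterpiece, (kiwi fruit:1.4), portrait" comes out as three spans, with only the kiwi fruit getting the boosted weight.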
 
AI knows the Mona Lisa too well.

I'm already impressed you got something besides the old Mona L. shoved in there, and the picture already has a deep-fried vibe. I'd try strongly weighting against Mona Lisa. You might get lucky, but don't expect to be able to stray too far from common motifs like this.
 
You aren't going to get any proper nightmare fuel with the weeb modeling, friend. Here is what I coaxed out of it:

"Mona Lisa holding a kiwi bird, pop art, bright colors"
grid-0031.jpg

"Mona Lisa with a huge grin holding a kiwi bird, pop art, bright colors, youthful"
grid-0032.jpg

I noticed the latest update to Stable Diffusion shows progressive steps of the image being willed into existence now as it's generating it. Kinda neat!
 
That's been a thing for quite a while; the new update just turns it on by default instead of off. That said, it does very noticeably slow down image generation, so I tend to leave it off.
 
Apologies for doubleposting - here are my thoughts on the update so far. The new implementation of the hires fix in the Web UI gives you a lot more flexibility in how the image is generated. Steps are now split between the lowres and hires renders, which makes it much easier to get varied takes on a theme if you find a seed you like, since the hires step count is now a parameter you can play with in addition to the denoising strength. Just by changing the steps, hires steps and denoising strength you get an essentially endless supply of similar images.
(attached: 00048.jpg, 00052.jpg, 00054.jpg)
That said, if you're making higher resolution images make sure to keep a high enough step count and denoising strength to properly sharpen and tidy up the image after the upscaling step; amounts vary based on the sampler you're using.
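To make the steps/denoising interaction concrete, here's a tiny sketch of roughly how the hires pass turns those settings into actual sampling work. It assumes the usual img2img convention where denoising strength scales how many of the scheduled steps actually run, and that hires steps of 0 falls back to the base step count (as in recent webui versions); exact rounding may differ from the real implementation:

```python
def effective_hires_steps(base_steps, hires_steps, denoising_strength):
    """Rough estimate of how many steps the hires pass actually runs.

    Assumes img2img-style behavior: strength scales the scheduled steps,
    and hires_steps == 0 reuses the base step count.
    """
    scheduled = hires_steps if hires_steps > 0 else base_steps
    return max(1, int(scheduled * denoising_strength))
```

By this rough math, 50 hires steps at 0.4 denoise only runs about 20 real steps, which is why a low denoise with a low step count can leave the upscaled image under-sharpened.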
 
Curious what vae/embed/lora/prompt you used to get this result.

Or did it start with an existing image?
 
Prompt:
masterpiece, best quality, hires, (hyperrealism:1.5), (realistic:1.5), (photorealistic:1.5), 1girl, arctic fox ears, arctic fox spirit, detailed face, white dress, straight white hair, perfect face, perfect eyes, mythical, elegant, highly detailed, intricate detailed, volumetric lighting, Ukiyo-E, by Anne Stokes and Noriyoshi Ohrai, full length, portrait
Negative prompt:
lowres, bad eyes, mismatched eyes, empty eyes, different eyes, bad anatomy, bad fingers, bad hands, text, error, missing fingers, extra fingers, extra hands, extra limbs, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, caption, anime, description, three legs, three arms, anime, pointed ears, human ears, visible ears
Settings:
Steps: 40, Sampler: DDIM, CFG scale: 7, Seed: 2906950385, Size: 640x448, Model hash: 8712e20a5d, Model: Anything-V3.0, Denoising strength: 0.4, Hires resize: 1408x960, Hires steps: 50, Hires upscaler: SwinIR_4x
Using Anything V3.0 and its associated VAE. That's all txt2img mode. Variations were mostly just playing with the two step sizes.

Edit: Prompt probably has a lot of nonsense the model doesn't recognize; it was heavily inspired by this page from the Stable Diffusion Prompt Guidebook by OpenArt.
(attached: prompt.jpg, the guidebook page in question)
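Side note: that settings line is the standard generation-parameters "infotext" the webui also embeds in PNG metadata. If you ever want to pull the values out programmatically, a naive parse of the key: value list is enough for simple lines like this one. A sketch only: it assumes no commas inside values, which holds for the fields shown here but not for everything the UI can emit:

```python
def parse_infotext_settings(line):
    """Parse a 'Steps: 40, Sampler: DDIM, ...' settings line into a dict.

    Naive sketch: splits on commas, so it breaks on values that
    themselves contain commas (some extension fields do).
    """
    settings = {}
    for chunk in line.split(","):
        key, _, value = chunk.partition(":")
        if value:
            settings[key.strip()] = value.strip()
    return settings
```

Handy for, say, sorting a folder of outputs by sampler or seed.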
 
Today I made the mistake of updating the web UI [...] Edit: Turns out the issue was related to VAEs [...] So if you update, double check and make sure it's still properly loading VAEs.
Yeah, I'm having the same issue. I've been browsing Civitai and following the porn advancements, and before I continue, let me say: Jesus. Fucking. Christ. These coomers do not fucking sleep. They've made leaps and advancements in pornography that should not be possible at this point. I'm still trying to wrap my head around the new LoRA options. Anyway, thanks for the tip. My VAEs weren't loading and I was getting that horrible burned color image.
 