Pretty, yet creepy.
He has a point tho. For all the white knighting, the fact is most women in porn choose it because it's easy money, especially now in the camwhore era, when you don't even have to do sex scenes and can get away with just some nudity as long as you get some loyal paypigs.

We can point fingers over who's to blame, or we can actually do what's within our power to roll back the damage unlimited porn has caused and will cause. Quit consuming porn and contributing to that market so these leeches stop making money, tell anyone who asks that porn is gross regardless of how it's made, and carefully teach your kids what a healthy relationship is and how to strive for it.
And to make sure this is actually on topic, giving boys access to unlimited AI porn right as they reach puberty will collapse civilization. Don't give your kids smart phones.
> Just fine. At 512 square I get 1 to 2 iterations a second. Make sure to install xformers.

How long does it take to do a 2048 square?
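For context on the xformers tip: in AUTOMATIC1111 it's usually just the --xformers launch flag, and with the diffusers library it's a one-line call. A minimal sketch, assuming a diffusers setup and a standard SD 1.5 checkpoint (the model id and prompt here are placeholders):

```python
# Minimal sketch: enabling xformers memory-efficient attention in diffusers.
# pip install diffusers transformers accelerate xformers
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Swaps in xformers' memory-efficient attention kernels: lower VRAM use, usually faster steps.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of a lighthouse at dusk", height=512, width=512).images[0]
image.save("test_512.png")
```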
Well, it's clear Adobe is shitting themselves because AI is going to destroy their bottom line. Thoughts?
> Hopefully, artists see through this.

They won't; most are malding about this and demanding luddite-level technological restrictions.
Not really creepy, more like uncanny valley-ish. The girls look less like real Asians and more like "Korean plastic surgery overload".
Which model did you use? This looks Uh-Ma-Zing. As for the prompt, is there a specific artist that has to be listed to get this result?
The Stability AI team is proud to release as an open model SDXL 1.0, the next iteration in the evolution of text-to-image generation models. Following the limited, research-only release of SDXL 0.9, the full version of SDXL has been improved to be the world's best open image generation model.
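If you want to try the release locally rather than through a hosted site, the weights load through the usual libraries. A rough sketch using diffusers (the Hugging Face repo id is the public one for the base model; everything else is just example settings):

```python
# Rough sketch: running the SDXL 1.0 base model through diffusers.
# pip install diffusers transformers accelerate safetensors
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# SDXL was trained around 1024x1024, so stay close to that for the base pass.
image = pipe(
    prompt="a watercolor painting of a harbor town at sunrise",
    height=1024,
    width=1024,
).images[0]
image.save("sdxl_base_test.png")
```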
Heard a little about this, but didn't realize it was already available. It's interesting. Seems like it is an improvement on the old base, though I'm not experienced enough with it all to really know the finer details about performance. Not really sold on it being much better with hands like the blog says, but that might just be my luck. If anybody wants to give it a quick shot, you can use the clipdrop site mentioned in the blog. You get a handful of gens before they ask you to make a free account/sign in with whatever, and there's a queue, but it's a pretty simple way to give it a look.
Impressive, but I still don't know how you're actually supposed to download these things and make them work, nor how to get into other Stable Diffusion shit like ControlNet or whatever else there is I can use with A1111.
There isn't a viable model yet to do 2048 txt2img with the accuracy of Stable Diffusion 1/2/XL. SDXL just leaked, and it was trained on 1024. Even if you do the upscaling img2img hack, you are going to run into VRAM issues unless you are running a 4080 (cutting it super close, and it needs to run headless), a 3090/4090, or a Tesla P40.
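If someone does want to push resolution on a card with less VRAM, there are a few memory-saving switches that help with the out-of-memory part (they don't fix the coherence problem). A sketch assuming a diffusers pipeline; web UIs expose similar toggles under different names:

```python
# Sketch: memory-saving options for large generations with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)

pipe.enable_model_cpu_offload()   # keep only the active submodule on the GPU (needs accelerate)
pipe.enable_vae_tiling()          # decode latents in tiles; matters most at big output sizes
pipe.enable_attention_slicing()   # slower attention, lower peak VRAM

# VRAM aside, output quality still degrades well past the ~1024 training resolution.
image = pipe("a wide mountain panorama", height=1024, width=1536).images[0]
image.save("panorama.png")
```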
There's a link to a GitHub repo with step-by-step instructions for getting the environment set up, which also links to Hugging Face to get the actual model. Hugging Face has its own internal git-like download page for each model, where the actual model file is marked with a red box icon. You have to have an account to download it. It's a bit convoluted.
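If the web download page is the annoying part, the same files can be pulled with the huggingface_hub package once you're logged in (huggingface-cli login). A sketch; the repo id, filename, and target folder below are examples to check against the actual model page:

```python
# Sketch: scripting the Hugging Face download instead of clicking through the site.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-xl-base-1.0",   # example repo
    filename="sd_xl_base_1.0.safetensors",                # check the repo's file list for the exact name
    local_dir="models/Stable-diffusion",                  # where A1111 looks for checkpoints (assumption)
)
print("downloaded to", path)
```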
Sounds interesting with the better handling of vague prompts, but I'll stick with SD 1.5 for the time being, as there are a buttload of checkpoints that a user can work with.
I've been throwing things at an H100, and yeah, it just makes models eat shrooms faster. You can stretch it to about 1.3x what the model was trained on before it just collapses.
Interesting, but what about those paid AI sites that offer that resolution for premium subs? I've seen those posted on Reddit and they were that resolution. What are they using to do it?
They're probably upscaling a 512x512 with Real-ESRGAN or Remacri or something. I don't have any issues getting a 2048 or 3000px image through upscaling, but yeah, there's no way I can get a raw generation that size with txt2img.
About 24 seconds per iteration. Basically, to make big images, I get the small image the way I want, upscale it, and then run image-to-image to clean it up.
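For anyone wanting to reproduce that workflow outside a web UI, here's a rough sketch with diffusers: generate small, upscale, then run a low-strength img2img pass to clean it up. The plain Lanczos resize stands in for Real-ESRGAN/Remacri, and the model id, prompt, and strength are just example values:

```python
# Rough sketch of the "generate small -> upscale -> img2img cleanup" workflow.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"   # example checkpoint
prompt = "a detailed oil painting of a harbor town"

# 1. Get the small image the way you want it.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
base = txt2img(prompt, height=512, width=512).images[0]

# 2. Upscale. A dedicated ESRGAN-family upscaler keeps far more detail than this plain resize.
big = base.resize((2048, 2048), Image.LANCZOS)

# 3. Low-strength img2img pass to sharpen and fix artifacts without repainting the composition.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)  # reuse the already-loaded weights
img2img.vae.enable_tiling()  # keeps VRAM manageable when decoding at 2048
cleaned = img2img(prompt=prompt, image=big, strength=0.3).images[0]
cleaned.save("harbor_2048.png")
```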
What's your setup again?
IIRC Leonardo uses the 768 and 1024 models, but yeah, it's just upscaling.