"What program did you use to achieve that effect?"

Stable Diffusion with the Dreamshaper 8 model, via the AUTOMATIC1111 web UI. I just used img2img mode and didn't do any manual editing because I didn't feel like it.

"How can AI extend THIS photograph?"

Well, the AI wants to do its own thing: add details that weren't there before and move things around. Basically, it makes everything more dramatic.

So I'm leaving the original image mostly intact, not even changing its dimensions, and using outpainting to extend both sides of the image. I might get better results with a photorealistic model instead of an artistic one, but on general principle I don't do photorealistic images.
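For anyone curious about the mechanics: outpainting boils down to padding the original photo onto a wider canvas and building a mask that tells the model which pixels are new and need to be generated. This is a minimal sketch of that geometry step using Pillow; the padding size and fill color are arbitrary choices here, and the actual generation step (the diffusion model call) is not shown.

```python
# Hypothetical sketch of the canvas-extension step behind outpainting.
# Only the geometry is shown; the resulting canvas and mask would be
# handed to an inpainting-capable pipeline afterwards.
from PIL import Image

def make_outpaint_canvas(img, pad_px):
    """Pad an image on the left and right, returning the padded canvas
    and a mask where white (255) marks regions for the model to fill."""
    w, h = img.size
    # Neutral gray fill in the new regions; the model overwrites it anyway.
    canvas = Image.new("RGB", (w + 2 * pad_px, h), (127, 127, 127))
    canvas.paste(img, (pad_px, 0))
    # Mask convention: white = generate, black = keep the original pixels.
    mask = Image.new("L", canvas.size, 255)
    mask.paste(Image.new("L", (w, h), 0), (pad_px, 0))
    return canvas, mask

# Example: extend a 512x512 image by 128 px on each side -> 768x512 canvas.
original = Image.new("RGB", (512, 512), (90, 140, 60))
canvas, mask = make_outpaint_canvas(original, 128)
```

In the AUTOMATIC1111 web UI this masking is handled for you by the outpainting scripts under img2img, but the underlying idea is the same: the model only "sees" blank strips on either side and tries to continue the scene into them.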



As you can see, this model really wants to make a much more detailed hillside with more interesting things going on, and you can fairly easily see where the original image ends and the AI-generated portion begins. It also doesn't match the cloud type very well. But it isn't doing a terrible job of extrapolating the continued slope of the hill, and sometimes it matches the original color gradients of the grass, sky, and clouds reasonably well.
But a model trained on realistic, boring landscapes instead of dramatic, busy ones might do a much better job. Some Photoshopping would likely still be required for the best results. Admittedly, this isn't something I do much at all, and I also haven't updated the software itself in some time.
TL;DR - outpainting is still kind of a crapshoot, or I don't know what I'm doing. It's best used on images that are also AI generated.