Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

I'm not certain if this is the right thread for my inquiry but it seems to be closest so here goes.
What is the best way to upscale and clean up a low-resolution video? I have a couple of animations from the late 2000s that were either completely ruined by YouTube's compression or were shitty to begin with. I'm interested in some sort of automatic AI restoration of these animations.
I know that Stable Diffusion uses BSRGAN to upscale generated images, and that I can use a tool like Cupscale to do the same with videos; I'm just wondering if there's something better out there. I'm interested solely in upscaling, I don't want to do any weird stuff like converting the animations to 60 FPS. The only requirement is that the tool can run locally.
I'd be thankful for any bit of information that will point me in the right direction.
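For what it's worth, most of the local tools (Cupscale included) are just frontends for the same loop: split the video into frames with ffmpeg, run an ESRGAN-family upscaler over the frames, then re-encode. A minimal sketch of that loop, assuming ffmpeg and the realesrgan-ncnn-vulkan binary are on PATH; the model name and fps are placeholders to adapt to your clip:

```python
# Sketch of the standard local video-upscale loop: extract frames with ffmpeg,
# upscale them with an ESRGAN-family model, then reassemble.
# Assumptions: ffmpeg + realesrgan-ncnn-vulkan on PATH; fps/model are examples.
import subprocess
from pathlib import Path

def build_pipeline(src: str, fps: int = 24, scale: int = 2) -> list[list[str]]:
    """Return the three commands: extract frames, upscale them, re-encode."""
    Path("frames").mkdir(exist_ok=True)
    Path("upscaled").mkdir(exist_ok=True)
    return [
        # 1. dump every frame as a numbered PNG
        ["ffmpeg", "-i", src, "frames/%06d.png"],
        # 2. batch-upscale the frame directory (-n picks the model, -s the scale)
        ["realesrgan-ncnn-vulkan", "-i", "frames", "-o", "upscaled",
         "-n", "realesr-animevideov3", "-s", str(scale)],
        # 3. re-encode the upscaled frames at the original frame rate
        ["ffmpeg", "-framerate", str(fps), "-i", "upscaled/%06d.png",
         "-c:v", "libx264", "-pix_mt".replace("mt", "fmt"), "yuv420p", "out.mp4"],
    ]

for cmd in build_pipeline("animation.mp4"):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run it
```

Since this skips frame interpolation entirely, the output keeps the original frame count and timing, which sounds like what you want.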
 
What is the best way to upscale and clean a low resolution video?
I've had some success with Topaz Video AI, but unless you sail the seven seas, you're going to have to shell out a substantial sum of money.
 
What is the best way to upscale and clean a low resolution video?
I use Waifu2X, which is a freemium upscaling program that runs locally and is highly configurable. It's always worked well for me when upscaling images, and I've never needed any of the paywalled features, but I've never used it for videos before.

I grabbed a random video from the videos thread for a test, and upscaled it on medium and high quality presets, to 2x resolution, with no frame interpolation. The starting video is 31s long, 550x720, 846KB. The upscales are 1100x1440. The medium quality video took 50s to convert and is 3.6MB. The high quality video is 4.4MB and took 25m 45s to convert, and my fans were super loud the whole time. I can't upload either of them because they're MKVs, which aren't supported on KF, and the option to change the output format is a premium setting. Here they are on catbox instead: medium, high.
Original, medium, high.
vlcsnap-2025-09-05-16h35m20s522.webpvlcsnap-2025-09-05-16h35m06s417.webpvlcsnap-2025-09-05-16h34m44s328.webp
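Side note on the MKV problem: if the forum won't take MKV, you usually don't need Waifu2X's premium output-format setting at all. ffmpeg can rewrap the streams into an MP4 container without re-encoding (no quality loss, takes seconds), provided the codecs inside are MP4-compatible. A hypothetical one-liner wrapper, assuming ffmpeg is on PATH:

```python
# Remux an MKV into an MP4 container without re-encoding.
# "-c copy" copies the audio/video streams as-is; this only works when the
# codecs inside the MKV (e.g. H.264/AAC) are allowed in MP4.
import subprocess

def remux_cmd(src: str, dst: str) -> list[str]:
    """Build the ffmpeg command that copies streams into a new container."""
    return ["ffmpeg", "-i", src, "-c", "copy", dst]

print(" ".join(remux_cmd("upscaled.mkv", "upscaled.mp4")))
# subprocess.run(remux_cmd("upscaled.mkv", "upscaled.mp4"), check=True)
```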
 
 
Feel me.

If someone pointed a gun at my head, one of the anti-AI dunces, and told me I'd be dead if I didn't besmirch AI, I'd probably besmirch AI, buuuuuuut it would be pretty fucking difficult!
 
All these great leaps forward and here I am just on Bing making PS2/GameCube era images for a current rp game
f.webp
b.webpc.webpd.webpe.webpg.webph.webpi.webpk.webpq.webpr.webps.webpt.webpv.webpx.webpy.webpz.webp
Super long prompt:
bad video game screenshot, gritty, low resolution, PlayStation 1 style graphics, 2000s video game, low poly, 3D video game, nostalgia, early 3D gaming, low graphics, low texture, smooth texture, 2001, low quality, poor texture, pixelation, old game, low detail, poor PS1 quality video game screen grab.
 
All these great leaps forward and here I am just on Bing making PS2/GameCube era images for a current rp game
I won't say same, but I get it. All the more realistic ones feel really dull. I don't want more realism or more detail; I just want something better at understanding complex prompts, and maybe slightly bigger too, so it actually has the space to handle them. Every realistic or semi-realistic model I've tried has been a case of "yeah, okay, cool, that's where the technology is at nowadays," and then I'm back off to the shit that actually looks good.
 
I just want something better able to understand complex prompts
Someone figured out a way to use an LLM as a text encoder. Apparently it greatly improves prompt adherence and lets you combine natural-language prompting with tags. Sounds like tech that'll only get better with time. It should work with any checkpoint you throw at it, but the authors are animefags, so their preferred content is probably what'll work best.
 
A sincere question, from someone pretty ignorant about this stuff: is there any AI tool that can take a video that cuts abruptly and extend the action shown by a couple of seconds? Or take a clip shot in daytime and color-grade it into night?
 
If you missed the eldritch amalgamates of early models, good news: Pony Diffusion v7 is out after getting thousands in donations to train it and it generates... this.
1760211417213.png1760211430808.png1760211443111.png1760211453549.png
Money well spent!

border collie gigachad.png
Ponysissies stay losing.
Noobchads keep winning.

And yes I used that one Gigachad image for ControlNet Depth guidance.
1760213677484.png
And yes I was so lazy I did everything with post-processing nodes.
 
If you missed the eldritch amalgamates of early models, good news: Pony Diffusion v7 is out after getting thousands in donations to train it and it generates... this.
Nah, based. The disgusting body horror from the original models was really good. I want creepy, unsettling shit at a resolution above what I can count on my hands, even if Pony is just always shit at everything, even in its current state.
 