This is the first time AI has kind of scared me. If it weren't for the people vanishing into the stalls or walking through the fence at the end, I wouldn't be able to tell.
> OpenAI introduced Sora today on their Twitter thread - archive. (Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.) You can visit it here.
> View attachment 5725089
> Here is an example. The prompt is: "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes."
> View attachment 5725088

Oh cool! Time for more man-made horrors beyond my comprehension!
> Just focus on the subtle issues with videos to identify whether they are AI-made or not (height issues, distortions, strange angling, clipping). The bad part is that this stuff will most likely be ironed out as time goes on, which will make everything even harder to recognize.

You know it'll only get better and better, right?
> Yeah, can't believe anything you haven't seen with your own eyes anymore.

We've only been able to prove things with photos/videos for about 100 years. We're just going back to how it used to be.
> Here is an example. The prompt is: "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes."

When this gets open-sourced we're all fucked. There is going to be so much weird porn the coomer brains will finally fry permanently.
> When this gets open-sourced we're all fucked. There is going to be so much weird porn the coomer brains will finally fry permanently.

I've always figured that's going to go the same way the inevitable sex bots will -- it'll just finally, permanently remove the coomers from society entirely, where they can't do any real harm. When you can just tell your computer to produce high-quality porn of any kind to your exact specifications, you won't need to piss away money on OnlyFans (so it'll help kill that parasocial relationship crap) or paid porn anymore. Sex bots will be for the more "adventurous" coomers who want to go beyond watching a screen.
> 1) Can I run this locally
> 2) How much VRAM and processing power is needed to run it compared to SDXL and SD1.5
> 3) Is it the same level of open to training to generate anything as SDXL and SD1.5

Yes, you can run it at home already now, but I'd give it a few days. On an A6000, generation takes a few seconds (not instant, but close to it) per picture at around 12 GB of VRAM. It also has such sights to show you.
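If you'd rather poke at it from Python than wait for a UI, here's a minimal sketch of local generation through the Hugging Face diffusers pipelines. I'm assuming the stabilityai/stable-cascade-prior and stabilityai/stable-cascade checkpoints and the two-stage prior/decoder API that diffusers support ships with; treat the exact dtypes and step counts as guesses, not gospel.

[CODE]
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

# Stage C (prior): turns the text prompt into image embeddings.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
).to("cuda")

# Stage B (decoder): turns those embeddings into the final image.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a snowy Tokyo street at dusk"  # placeholder prompt

prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=20,
)
image = decoder(
    image_embeddings=prior_output.image_embeddings.to(torch.float16),
    prompt=prompt,
    guidance_scale=0.0,
    num_inference_steps=10,
).images[0]
image.save("cascade.png")
[/CODE]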
> Here is an example. The prompt is: "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes."
> View attachment 5725088

A lot of stock footage repositories for YouTube videos are going to disappear very soon.
> Yes, you can run it at home already now, but I'd give it a few days. On an A6000, generation takes a few seconds (not instant, but close to it) per picture at around 12 GB of VRAM. It also has such sights to show you.

My main gripe is: can it offer me better results than SD1.5 with the same generation time as SD1.5 on my poor shitty 1060, or am I perpetually stuck in SD1.5 limbo until I get a modern GPU, around never?
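About the only lever I'd have is the usual diffusers offloading, assuming Cascade's pipelines support the standard accelerate hooks like everything else does. A sketch, using the prior/decoder objects from the snippet above; untested on an actual 1060:

[CODE]
# Use instead of .to("cuda"): keep weights in system RAM and shuttle
# each sub-model to the GPU only for its forward pass. Slower, but it
# trades speed for VRAM. Requires the accelerate package.
prior.enable_model_cpu_offload()
decoder.enable_model_cpu_offload()
[/CODE]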
View attachment 5725355 View attachment 5725356 View attachment 5725358 View attachment 5725359

It's a lot more coherent than when I tried XL last, which admittedly was a while ago. It's *almost* like it actually understands the prompts, instead of just throwing random things at you that are somewhat related to the words you wrote. It also trends a lot less towards *accidental* body horror; I even saw the odd plausible keyboard. According to Stability, this is a very early version of things to come.
> Am I right?

If you bring the VRAM. It might get optimized down the road, who knows at this point, but the trend will always be towards more high-bandwidth RAM. This is also how they will effectively, if indirectly, keep people from running powerful AI applications at home.
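And if you're not sure how much VRAM you're bringing, torch will tell you (assuming a CUDA build of PyTorch):

[CODE]
import torch

# Print the name and total VRAM of the first CUDA device.
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB VRAM")
[/CODE]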
> Guess I'll be waiting for another month before I consider Stable Cascade. Is it compatible with SD 1.5 LoRAs?

Apparently it is. There is already work being done to implement it into ComfyUI, but the bigger hurdle will be having people train new models for it.
> 1) Can I run this locally
> 2) How much VRAM and processing power is needed to run it compared to SDXL and SD1.5
> 3) Is it the same level of open to training to generate anything as SDXL and SD1.5

>OpenAI
>Open source
Pick one. Apart from Google, this company might be the shittiest of them all when it comes to model availability.
> >OpenAI
> >Open source
> Pick one. Apart from Google, this company might be the shittiest of them all when it comes to model availability.

Re-read the post you're responding to, and look at what I was referring to.