Is there a specific reason you are not running it locally? From my testing you can basically run it on GPU cards that are two generations old. With obvious speed setbacks, but manageable.

My last GPU got fried during a power surge, so I've been using an RX 570 while I wait for GPU availability to improve. I'm still waitlisted for a V100, so until that purchase goes through I'm using cloud computing.
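For what it's worth, here's roughly what the low-VRAM route looks like with Hugging Face diffusers, if anyone wants to try an older card. This is a sketch, not gospel: the model ID is just an example, and on AMD cards like my RX 570 you'd need a ROCm or DirectML build instead of "cuda".

```python
import torch
from diffusers import StableDiffusionPipeline

# fp16 weights roughly halve VRAM use; attention slicing trades some speed
# for a smaller peak memory footprint, which is what makes cards a couple
# of generations old viable.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID, swap in your own
    torch_dtype=torch.float16,
)
pipe.enable_attention_slicing()
pipe = pipe.to("cuda")  # AMD users: a ROCm/DirectML setup uses a different device

image = pipe("a lighthouse at dusk, oil painting").images[0]
image.save("out.png")
```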
If you are not planning on combining models to make your own, or on training on top of them, then getting fp16 models basically has no drawback, and it cuts the file size in half.

Yeah, that's a problem for me. I've used FP16 models in the past, but I often have cases where I want to pull characters from model X and put them into style Y (Nyxborn Pokemon, me as a Phyrexian, etc.).
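For anyone wondering what the fp16 conversion actually does, here's a minimal sketch in Python, assuming a local full-precision checkpoint and PyTorch installed (the filenames are placeholders). Casting each float32 tensor to float16 is what halves the file, and it's also why merges and further training prefer the full-precision original: the discarded low-order bits are gone for good.

```python
import torch

# Load a full-precision checkpoint (placeholder path).
ckpt = torch.load("model.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)

# Cast every floating-point tensor to fp16 and leave everything else alone.
# This is lossy, which is why fp16 checkpoints are a poor base for merging.
half = {
    k: v.half() if isinstance(v, torch.Tensor) and v.is_floating_point() else v
    for k, v in state_dict.items()
}

torch.save({"state_dict": half}, "model-fp16.ckpt")
```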
I've started downloading all the models and LoRAs I want to try because a lot are getting removed: TOS changes, legal issues, artists getting mad and demanding they be taken down, FUCKING EXCLUSIVITY DEALS where they sell the model to AI-generation websites and remove it everywhere else, like what happened recently with the Illuminati model (the cocksuckers), etc. So I download everything I think looks cool, save a PDF of the download page so I have the trigger words and best settings, and store it all away.
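If you'd rather script the hoarding than click through pages, something like this works as a rough sketch. Every URL and filename here is a placeholder, and saving the raw HTML stands in for my PDF step:

```python
import json
import requests

MODEL_URL = "https://example.com/models/some-model.safetensors"  # placeholder
PAGE_URL = "https://example.com/models/some-model"  # placeholder

# Stream the checkpoint to disk so multi-GB files don't sit in RAM.
with requests.get(MODEL_URL, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open("some-model.safetensors", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)

# Archive the download page too, since that's where the trigger words
# and recommended settings live.
page = requests.get(PAGE_URL, timeout=60)
page.raise_for_status()
with open("some-model-page.html", "w", encoding="utf-8") as f:
    f.write(page.text)

# Tiny manifest so the model and its docs stay linked in storage.
with open("some-model-manifest.json", "w", encoding="utf-8") as f:
    json.dump({"model": MODEL_URL, "page": PAGE_URL}, f, indent=2)
```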

Once enough phones on the market can run SD without connecting to servers, it's over for free high-end models. I can already see it in my mind's third eye: apps like Wombo buying up well-trained models and selling them as in-app purchases.
Voice.ai is releasing its speech-to-speech (STS) service on mobile this month, so I'm willing to bet Midjourney and the other vultures are already trying to get this tech running locally on mobile devices.