Thought: Generate a pile of dead furries for one layer, one or more furries running towards the center of the image on another, and then the Matt Ward OC on a final layer. I know Photoshop has subject selection, which could potentially make things easier, and then you can do multiple passes of img2img at low denoising strength to clean things up.

I have also gotten into ComfyUI. It is indeed comfy once you figure out how everything works. What I want to figure out now is a multiple-subject workflow. What I would like to do is a Warhammer epic: the lower portion would consist of Ultramarines killing furries, while the upper portion would be a space battle with lots of lasers.
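For what it's worth, the "multiple passes of img2img at low denoising strength" step maps onto a plain img2img call. Here is a minimal sketch using the diffusers library; the model ID, file names, prompt, and strength value are placeholders rather than anything from the posts above:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a Stable Diffusion checkpoint (placeholder model ID).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The hand-composited layers, flattened to one image (placeholder file name).
init = Image.open("composite.png").convert("RGB").resize((768, 512))

# Low strength keeps the composition and only repaints enough to blend the seams.
out = pipe(
    prompt="epic battle scene, dramatic lighting, highly detailed",
    image=init,
    strength=0.3,
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]
out.save("composite_cleaned.png")
```

Running that two or three times, lowering the strength each pass, is roughly what the "multiple passes" idea amounts to.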
Photo of cgyl person, beautiful, young, teenager, forest background

(forgot to note the prompt before)
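Side note, in case it helps with the "forgot to note the prompt" problem: if an image was saved by the AUTOMATIC1111 webui, the prompt and settings are usually embedded in the PNG itself, so they can be recovered later. A quick sketch, assuming a file named output.png:

```python
from PIL import Image

# The AUTOMATIC1111 webui stores the prompt and generation settings in a
# "parameters" text chunk of the PNG; other front ends may not write it.
img = Image.open("output.png")
print(img.info.get("parameters", "no embedded generation data found"))
```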
AMD has fumbled the ball on graphics compute for years, let's call it 15 years, and it will cost them big time.
> I want to get into trying my hand at some of this stuff. How much of a nightmare will it be given I have an AMD GPU? I've done a little playing around and found a website with a guide for using Stable Diffusion with AMD. Followed the instructions, was able to run it and get some results. Nothing like people are producing here, though. The instructions were quite specific to one particular model and a little old. Just to give me a general impression, what is the state of things doing this on AMD?

IIRC Stable Diffusion makes use of CUDA, which is an API/parallel computing platform created and maintained by NVIDIA. I know some sort of interpretation layer/code conversion process exists, but I have absolutely zero knowledge of how it performs.
And why the Hell aren't AMD the company doing more to facilitate this on their chips? Seems like a major failing on their part the way this is taking off.
There is at least one way of converting. As I mentioned in my post, I have done it, but it wasn't a universal solution; it was a mildly complex sequence of instructions for a specific model. I'm going to go back and see if I can figure out how to use the instructions in a more general way, but I was trying to get a feel for whether there is a reasonable proportion of people using AMD cards and how it's done, or if I was in some weird niche dead end.
It's roughly like running Windows software on a Mac. You can, given you've taken the extra steps necessary, but it won't be as efficient as just running the program on a Windows machine instead. I would assume someone somewhere is working on versions specifically designed for AMD hardware, but I've never gone looking for it as I've got an NVIDIA GPU.
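To make the analogy a bit more concrete: the usual "extra steps" on AMD are installing the ROCm build of PyTorch (on Linux), and from the user's side the translation is largely invisible because ROCm devices are exposed through the same torch.cuda API. A rough sanity-check sketch, not tied to any particular guide:

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs show up through the torch.cuda API,
# so the same check works on both NVIDIA and AMD cards.
print("PyTorch build:", torch.__version__)      # ROCm wheels carry a +rocm tag
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("ROCm/HIP:", torch.version.hip)        # None on CUDA builds
    print("CUDA:", torch.version.cuda)           # None on ROCm builds
```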
So I feel like there are three main reasons why AMD has been getting fucked in the whole AI race:
It's starting to look like buying an AMD card was a fundamental mistake. It's a very powerful card - 20GB VRAM, current gen. So I'm less worried about some performance hit than I am about my own time in figuring out how the fuck to make it work if I even can.
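If it does come down to fighting with the setup, a quick smoke test like the one below at least tells you whether the card and its 20GB are visible to PyTorch at all (a generic check, not something from the guide mentioned earlier):

```python
import torch

assert torch.cuda.is_available(), "PyTorch cannot see the GPU"
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB VRAM")

# A tiny matrix multiply on the GPU; if this runs, the basic stack works.
x = torch.randn(1024, 1024, device="cuda")
print("matmul ok:", torch.linalg.norm(x @ x).item() > 0)
```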
I'd say that if you really wanted to get into this stuff and spend a lot of time playing or messing around with it, then getting an AMD GPU was probably not the best choice you could have made (both for speed and for stability/just getting it working). I know I'll soon be stalking the used GPU market for something with a little more oomph than my card from 2016.
100% agree on getting a used Nvidia GPU. I'm planning on using my tax return to snag a secondhand V100 for some more high-end oomph.