> It's likely just shitting the bed, I've heard other people say the 1080ti will crash on SD

Do you read? It's not crashing. Everything works fine but the previews, and you can carry on without restarting the program.
> So what's this I have been hearing about Stable Diffusion 2.0 neutering itself for some reason? Hard to find decent talk about such things.

Anything past 2.0 removed celebrities, I think. There was some other stuff about changing the dataset, but I don't remember. There are tons of independent and derivative checkpoints to use, so it doesn't really matter at this point. Base SD isn't as good as a more tailored checkpoint for most things.
> [image attachment]
> trying for consistency in character design with different facial expressions, initial attempt went above and beyond my expectations

Either inpaint or using the same seed?

> Either inpaint or using the same seed?

Same seed, just different prompt. That's why the earring, clothing, and tattoos are different in all three.
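If anyone wants to reproduce the same-seed trick outside a webui, here's a minimal sketch using the Hugging Face diffusers library. The checkpoint name and prompts are placeholders, not what was used for the image above; the point is that a fixed seed pins the starting noise, so each prompt edit perturbs the same base image:

```python
# Minimal same-seed sketch with Hugging Face diffusers (names are placeholders)
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

seed = 1234  # any fixed integer; same seed -> same starting noise
prompts = [
    "portrait of a red-haired woman, neutral expression",
    "portrait of a red-haired woman, smiling",
    "portrait of a red-haired woman, angry",
]

for i, prompt in enumerate(prompts):
    # recreate the generator each time, otherwise its state advances between prompts
    generator = torch.Generator("cuda").manual_seed(seed)
    pipe(prompt, generator=generator).images[0].save(f"expression_{i}.png")
```

As the post says, anything the changed tokens touch (earrings, clothing, tattoos) can still drift; inpainting is how you lock those down.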
> How close are we to a net that can generate 1080p images on demand, maybe even up to 4K? In the same time window it takes current shit to make a 1000x1000 image, specifically. I know it's technically possible with some insane hardware; I mean for a few dollars a month or on a desktop GPU. Is the current resolution "cap" for consumer-level image gen a hardware issue, a software limit, or a bit of both? I also don't mean upscaling to that res, rather raw output.
> Edit: any suggestions as to an uncensored service that can dump higher than 1080p, even if it's slow, would work for me in the meantime.

Right now the go-to is generating an image at ~512x512 or 768x768 and then upscaling it, and given how the models are trained and the GPU requirements, you should just plan on a two-part process until the next gen of GPUs comes out. Basically it's a consumer-level hardware issue AND a model issue, because even the top models are trained on sub-800p images.
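A sketch of that two-part process with diffusers, assuming the publicly posted base and 4x-upscaler checkpoints (swap in whatever you actually run):

```python
# Two-step workflow sketch: generate small, then 4x upscale (checkpoints assumed)
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionUpscalePipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
upscaler.enable_attention_slicing()  # the upscaler itself is VRAM-hungry

prompt = "a lighthouse on a cliff at sunset, detailed oil painting"
low_res = base(prompt, height=512, width=512).images[0]      # near the trained size
high_res = upscaler(prompt=prompt, image=low_res).images[0]  # 512x512 -> 2048x2048
high_res.save("lighthouse_2048.png")
```

That gets past 1080p in one pass, though it's still upscaling rather than the raw native output the question asked about.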
> Right now the go-to is generating an image at ~512x512 or 768x768 and then upscaling it, and given how the models are trained and the GPU requirements, you should just plan on a two-part process until the next gen of GPUs comes out. Basically it's a consumer-level hardware issue AND a model issue, because even the top models are trained on sub-800p images.

Why are the top models being trained on such relatively low resolutions? Would there actually be an issue trying to generate smaller images out of it at the same speed as one trained like that?
> Why are the top models being trained on such relatively low resolutions?

As far as I understand, while producing images takes a decent amount of VRAM, training models takes far more, and as you increase image size you're dealing with quadratic scaling - i.e. doubling the image's side length means 4x the pixels and roughly 4x the VRAM.
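The arithmetic, for the skeptical (SD's autoencoder downsamples 8x into a 4-channel latent, so latent size tracks pixel count):

```python
# Pixel and latent-element counts grow with the square of the side length
for side in (512, 768, 1024, 2048):
    pixels = side * side
    latent = 4 * (side // 8) ** 2  # 4 channels after the 8x spatial downsample
    print(f"{side}x{side}: {pixels:>9,} pixels, {latent:>7,} latent elements")
# 1024x1024 = 1,048,576 pixels, exactly 4x the 262,144 of 512x512
```

Activation memory in the U-Net scales with that element count (worse in the attention layers), and training has to hold gradients and optimizer state on top of it, which is why training budgets bite long before inference does.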
> Would there actually be an issue trying to generate smaller images out of it at the same speed as one trained like that?

Not too sure I understand the question. Making smaller images is super easy: you could generate at the regular size and then scale it down, or inpaint (or even straight-up generate) an image at the desired scale.
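For example, both routes are one-liners with diffusers (reusing the pipe from the sketch above; 384x384 is an arbitrary target, and sizes should stay divisible by 8):

```python
from PIL import Image

# Option 1: just ask for the smaller size directly
small = pipe(prompt, height=384, width=384).images[0]

# Option 2: generate at the regular size, then downscale
full = pipe(prompt).images[0]                   # SD 1.5 defaults to 512x512
small = full.resize((384, 384), Image.LANCZOS)
```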
> Why are the top models being trained on such relatively low resolutions? Would there actually be an issue trying to generate smaller images out of it at the same speed as one trained like that?

My user-level understanding is that it's a VRAM limitation more than processing speed. Ask for too big an image and you run out. The released models are the ones correctly sized to run on home computers.
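Which is exactly how it fails in practice: too big doesn't mean slow, it means a hard out-of-memory error. A rough sketch of the usual fallback pattern (sizes illustrative; torch.cuda.OutOfMemoryError needs PyTorch 1.13+):

```python
import torch

try:
    # 1536x1536 will OOM on most 8-12 GB consumer cards
    image = pipe(prompt, height=1536, width=1536).images[0]
except torch.cuda.OutOfMemoryError:
    torch.cuda.empty_cache()  # release the failed allocation before retrying
    image = pipe(prompt, height=512, width=512).images[0]  # back to the trained size
```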
Lol. I played with that retarded shit for a while. It's definitely going to be the basis for frivolous lawsuits in the near future.
TL;DR:
Losers create a "bot" that supposedly detects AI art.
Bot doesn't detect AI work. Literally, it cannot tell whether an image was AI generated. What's the fucking point, then?
The bot scans the image and does a similar-image search, close to what Google and TinEye use, and then declares that almost every image inserted into it was stolen ("attributed to," in its words) from another artist who had nothing to do with it. The hilarious part is, if you have link previews turned on, you can see the "connecting to deviantart.com" etc. in the loading bar at the bottom left of your screen.
I inserted my dream diary image from a few pages back and, wouldn't you know?
Absolute AI forgery! Witchcraft! No, it's just retarded.
> One thing I've just now realized: AI was put into man's reach because of gaming and crypto. Innovation in GPUs was necessary to get the critical mass of processing power to make AI systems viable.

And absurd, largely unnecessary amounts of VRAM in consumer cards (24 GB, probably 32 GB soon) directly benefit the off-the-shelf AI user.
The knowledge that it's AI-generated makes it more disposable to me. All these artcels freaking out is schizo, because I'm never going to stop appreciating and consooming their shit.