Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Cybertranny concept art looking things and a funny. I tried making cyberpunk "I caint breeve" shit but this is also good.

(Attached: cybertranny shit.png, download (4).png, i ant sneed.png)
 
i have bought a 3060 and hopefully more vram will be quite helpful in doing the ai arts
might even consider training the ai off of certain artstyles cause its really bad at some
card arrived yesterday. there is a vast improvement from the 1060 as i can now get images generated in like 30 seconds at 90 steps in 512x512 and i cumb
 
Can you post pics of stuff generated on it? I'm too lazy to download and try myself. I've spent all day upscaling all the photos I use for wallpapers.
google it; it's not for lewds and that's all I generate so I have nothing for you
the depth transfer mode is extremely cool (I can tell it's a lot more versatile than you might think from the name) but there are already other projects implementing it (like https://github.com/Extraltodeus/multi-subject-render which I haven't tried yet) so it'll probably make its way in as standard soon and then we can have a good time
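If you want to poke at the depth mode outside webUI, a minimal sketch with diffusers' StableDiffusionDepth2ImgPipeline looks roughly like this (the input filename, prompt, and strength are just placeholder values I'd reach for, not anything canonical):

```python
# Rough sketch of SD 2.0's depth-conditioned img2img via diffusers.
# "source.png" and the strength value are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init = Image.open("source.png").convert("RGB")

# The depth map is estimated internally, so composition and geometry are kept
# while the prompt repaints the surfaces.
out = pipe(
    prompt="cyberpunk alleyway, neon, rain",
    image=init,
    strength=0.7,
).images[0]
out.save("depth2img.png")
```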
 
Here's something that might be useful that I've been meaning to post btw: https://huggingface.co/datasets/Nerfgun3/bad_prompt
The idea is to turn standard neg prompts into an embedding so that you can just drop a single tag in there. From my experimentation the effect is extremely strong and gives a distinctly clean look to things, but it fucks up fragile prompts that nail e.g. a specific character, so ymmv a lot. Unfortunately the author doesn't list what exactly went into it prompt-wise, which makes it hard to work around, but I can see it being very handy if you're just interested in aesthetic, nonspecific gens
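If you're on diffusers instead of webUI, dropping the embedding in looks roughly like this. Sketch only; the exact weight filename in the repo (I'm assuming bad_prompt_version2.pt) and the base model are assumptions, so check the files tab yourself:

```python
# Hedged sketch: load a negative-prompt textual inversion embedding and use
# its single token as the whole negative prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Bind the embedding to one token; filename is an assumption, check the repo.
pipe.load_textual_inversion(
    "Nerfgun3/bad_prompt", weight_name="bad_prompt_version2.pt", token="bad_prompt"
)

image = pipe(
    prompt="1girl, detailed background, soft lighting",
    negative_prompt="bad_prompt",  # the whole embedding fires from this one tag
    num_inference_steps=30,
).images[0]
image.save("out.png")
```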
 
google it; it's not for lewds and that's all I generate so I have nothing for you
the depth transfer mode is extremely cool (I can tell it's a lot more versatile than you might think from the name) but there are already other projects implementing it (like https://github.com/Extraltodeus/multi-subject-render which I haven't tried yet) so it'll probably make its way in as standard soon and then we can have a good time
Yea I just finished reading about it on 4chan. I'll wait for other models that are compatible with it to come out so I can play the autism game of merging models to make beautiful waifus in higher resolution. Possibly unlock NSFW to be a degenerate coomer with it.
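For reference, the merging autism game is usually just a weighted sum over the two checkpoints' state dicts, roughly what webUI's checkpoint merger does. A rough sketch (filenames and the blend ratio are placeholders), which also shows why a model with extra or mismatched weights can't just be naively averaged in:

```python
# Hedged sketch of a plain weighted-sum checkpoint merge.
# modelA.ckpt / modelB.ckpt / alpha are placeholders.
import torch

alpha = 0.35  # how much of model B to blend in

a = torch.load("modelA.ckpt", map_location="cpu")["state_dict"]
b = torch.load("modelB.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, wa in a.items():
    if key in b and b[key].shape == wa.shape:
        merged[key] = (1 - alpha) * wa + alpha * b[key]
    else:
        # Keys that only exist in one model (or changed shape) can't be averaged,
        # which is exactly the problem with architecturally different models.
        merged[key] = wa

torch.save({"state_dict": merged}, "merged.ckpt")
```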
 
I assume it won't merge with existing stuff due to the differences (although I don't fully grok CNNs, and know nothing about U-net architecture, although I can also see how it might based on limited knowledge? So this is a non-opinion basically and I'm talking out my ass) but I could see it at least being useful for e.g. backgrounds or doing an img2img pass on your illegal boobies if you're going for some particular intricate look. But yeah I'm just gonna wait and see what develops too.
Old SD already sucked at NSFW so while a lot of models were based off it (the multi-level perceptual training is still useful, which I don't feel like explaining due to currently being wasted) that particular aspect can't have been all that valuable, so maybe it won't matter.

Regarding higher res, you can already get great results with webUI's hires fix feature, which first generates a standard-res image and then treats it as an img2img noise basis for higher resolutions. I haven't gone below 800px since getting my new card. I'm legit not sure how much the upgrade would really matter except in very specific cases.
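Outside webUI you can approximate the same two-pass trick with diffusers; a sketch where the model ID, target size, and denoising strength are just example values, not the webUI defaults:

```python
# Hedged two-pass sketch of what "hires fix" roughly does:
# generate low-res, upscale the image, then re-denoise it with img2img.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a knight, intricate armor, dramatic lighting"

# Pass 1: normal 512x512 generation.
base = txt2img(prompt, width=512, height=512).images[0]

# Pass 2: upscale, then let img2img re-denoise the details at the higher res.
big = base.resize((1024, 1024))
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to("cuda")
final = img2img(prompt=prompt, image=big, strength=0.55).images[0]
final.save("hires.png")
```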
 
There's something deeply enjoyable about the cat being out of the bag on this one and the tech pearl clutchers not being able to gatekeep, neither in the infrastructure sense nor in the knowledge sense. You just know they'd love to. Even if they outlawed it tomorrow, it's out there. Can't put the toothpaste back into the tube!

I think the only way for AI stuff not to be abused to hell is for it to be in the hands of everyone, period. If every five year old can generate a photorealistic picture of Trump/Biden eating a baby, it loses all its power for these "fake news" purposes they're so concerned about. If only a select few have that kind of access, guess what they're going to use it for. This is also what it's really about, not "safety" and "ethics". They're worried about the great equalizer that makes every news source equally unbelievable: you just have to stop believing all of them and do your own thinking. Does it make *sense* for Trump/Biden to be eating a baby? You know, probably not.
 
So I take it the way this tech works is that the AI is a "weak AI"* that's shown a bunch of images, and then imitates the techniques used by NI** to draw something?

*(Weak artificial intelligence - Wikipedia)

**(natural intelligence)
If "strong AI" exists right now, it's being kept a secret, and it's a partly biological man-made HORROR.

I think "imitates the techniques" is inaccurate. It's just finding patterns in random noise by default, but is heavily influenced by user input and parameters, and other constraints such as inpainting and negative prompts. Those help guide it into producing results that are what you want and not random. "Imitating the techniques" sounds like if it was painting a Van Gogh one brush stroke at a time.

This is the CSI Enhance meme, except instead of reconstructing what it "thinks" is a faithful representation of a low-resolution image, it puts artists out of a job. But you'll also likely use the CSI Enhance step immediately afterwards to make a lower-resolution result look much better.
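That "CSI Enhance afterwards" step can literally be a diffusion upscaler. A hedged sketch with diffusers' x4 upscale pipeline; the input filename is a placeholder, and it's VRAM-hungry, hence the small input:

```python
# Hedged sketch: x4 latent upscaler pass on an already-generated image.
# "gen.png" is a placeholder; a 256px input yields a 1024px output.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("gen.png").convert("RGB").resize((256, 256))
upscaled = pipe(prompt="detailed illustration", image=low_res).images[0]
upscaled.save("gen_x4.png")
```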

 