Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

I've been playing around with NovelAI's hosted version because I had some free credits, and I have to say I'm not that impressed. It seems to struggle a lot with anything that hasn't (I'm guessing) previously appeared in one of the Chinese cartoons it was trained on. Anything involving an action, or requiring multiple objects related to each other in some way, seems very difficult. For example, getting X in Y, or having Z put X in Y. Is this a case of prompt magic or what? Training the model specifically on pre-existing images like the one you want would presumably do much better, of course.
Prompt magic, but that particular thing is notoriously challenging because it's not something captured prepositionally by tags in the danbooru dataset.
It's not something I'd waste online credits on. If you have the leak and webUI (naifu has been left in the dust by this point) then it's possible if you want to spend time exploring prompt tricks using big ass grids with some of the slower solvers, but more practically I'd just use inpainting for this. Just gen something more vanilla then mask the needful for your sick fetish shit.
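If you go the inpainting route, the mask is just a greyscale image where white marks the region to regenerate. A minimal sketch with Pillow (the box coordinates are made up for illustration):

```python
from PIL import Image, ImageDraw

def make_mask(size, box):
    """Build an inpainting mask: white = repaint this region, black = keep.
    box is (left, top, right, bottom) in pixels."""
    mask = Image.new("L", size, 0)                    # start fully black (keep everything)
    ImageDraw.Draw(mask).rectangle(box, fill=255)     # paint the region to regenerate
    return mask

# Repaint a patch in the middle of a 512x512 gen
mask = make_mask((512, 512), (128, 200, 384, 460))
```

Feed the original image plus a mask like this into the inpainting tab (or an inpainting pipeline), with a prompt describing only what goes in the masked area.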
 
Fake signatures are usually mechanically made. A good test is holding the signed check upside down and comparing it to a verified signature.

For robotic signatures, though, the main tell is that there's going to be an ink dot at the start and end of each line, and a more uniform consistency to the strokes across the signature.

Not to mention fingerprints, unique handwriting, varying intensity of the ink.

Checks use other methods to prevent fraud altogether.
Isn't the whole point of Stable Diffusion that it's virtually indistinguishable from the real thing, or is that not really true?

Newgrounds' entire design ethos is based around competition, complete with cash prizes on occasion. They treat art, animation and games as a sport. Why should a track runner be forced to compete with a motorcycle?

You can still post your Stable Diffusion pictures here in this thread all you want, it's not like anybody could or would stop you.

Good thing you can't tell that it's AI art if a human adds his own hand and treats AIs like tools.

Survival of the fittest situation. Adapt or die.

Or stop being retarded and stream art competitions. So far AI is so fast that the torching process is faster than the human eye. Right now I have my GPUs doing multiple frames and layers per second.

Even now, tracers who are very human are admitted to competitions alongside plagiarists. Which is a human problem to which people have found solutions and workarounds.

I just mentioned a few workarounds to tell the authenticity of real art vs. AI-generated art. At the time of writing, current AI does not save, record or keep WIP pictures.

Adapt or get culled. I have been teaching my AI to correct its mistakes, work process and code. Flaws are glaringly obvious.

Only thing that is at fault is human stupidity and Tom Fulp's dicksuckers do not lack that.
 
Maybe don't. I just did because all the comparisons I'm seeing are cherry-picked waifu shit, but then I re-ran a test I did on sdv14 yesterday and got dogshit. Possibly because I'd nigger rigged it with a stack of prompts to force a result I wanted, so maybe it'd be fine with more tailored prompts, but I'm probably gonna re-download 1.4 just to have on hand.

Yesterday's test on 1.4:
[Attachments 3753899, 3753901, 3753902]
Exact same settings/seeds on 1.5:
[Attachments 3753900, 3753903, 3753909]
Thanks for the wallpaper. I used the second one on the top. Hope you don't mind if I use your machine learning OC. If I wasn't a retard, I'd make my own, but I'm too stupid to run Stable Diffusion. Hell, I couldn't even resize the wallpaper to properly fit my shitbox laptop for some reason.
 
No problem.

Regarding those tests: I thought I'd use them to try upscaling and maybe gen some more tentacle hentai world wallpapers, but I've discovered I can't get the same result from the seed even with the original 1.4 checkpoint I was using.
I checked the model hashes and everything I could think of, even using 11.0 rather than 11 CFG in case it was a float thing, but even with perfectly matching parameters I can't reproduce the original gens.

So I'm thinking there's a webUI bugfix at play here, because I know they patched stuff relating to hires reproducibility and shit on different cards around then.

Basically, everyone should disregard that 1.4/1.5 comparison I posted.

(Although the basic point of hanging onto your 1.4 checkpoint still stands because at least I can get some tentacles appearing with the old model while the new one just gives dirt, but it's not night and day like I thought going by that test.)
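For anyone wondering where the seed actually enters: it only fixes the starting noise. Everything downstream is a deterministic code path, so a webUI or sampler patch can change the output even with identical parameters. A rough illustration (NumPy as a stand-in for the torch generator the webUI actually uses):

```python
import numpy as np

def initial_noise(seed, shape=(4, 64, 64)):
    """Starting latent noise for a generation. Same seed => same noise,
    but the final image also depends on every step of the denoising
    loop, so a code change downstream breaks reproducibility anyway."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape).astype(np.float32)

a = initial_noise(2119)
b = initial_noise(2119)
# identical arrays; any divergence in the final image has to come from
# changes to the sampler/denoising code, not the seed
```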
 
All I'm hearing is if I want to make deviant shit I should stay on 1.4 for the time being. 🤔
No you should absolutely use the NAI leak for that.

I tried investigating some more and found weird shit like renaming the model and reloading getting me closer to the original image. That shouldn't happen, and renaming it to the original filename did nothing, so I don't know what the fuck. Either it's really a webUI bug or my card is melting.
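For what it's worth, the short model hash older webUI builds print is (going from memory here, so treat the exact offsets as an assumption) a sha256 over a small slice of the file. The filename never enters the calculation, which is why a rename changing your gens really shouldn't happen:

```python
import hashlib

def model_hash(path):
    """Short checkpoint hash in the style of older webUI builds: sha256
    over a 64 KiB slice starting at offset 0x100000 (offsets are my
    recollection, not gospel), truncated to 8 hex chars."""
    with open(path, "rb") as f:
        f.seek(0x100000)
        return hashlib.sha256(f.read(0x10000)).hexdigest()[:8]
```

Two copies of the same .ckpt under different names hash identically, so if renaming changes your output, suspect caching in the UI rather than the file itself.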

Anyway, here's some sorta hydralisk in a sombrero
 
Has anyone else had trouble getting inpainting to work? Each time I feed in an image, it just returns the same image back, no matter what settings I change or how I draw the mask. Seems to happen on each model I try, too. When using "inpaint at full resolution", I also get: ValueError: Coordinate 'right' is less than 'left'. Not sure if it's an issue with my installation or a module or what.
Just in case somebody else stumbles into the same issue (and I hate when people just post that they fixed something without explaining why): it looks like it's a known issue caused by browser extensions. I don't know which one exactly caused it, but if you've got any scripts going on, try clearing them all off and see if it works.
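For the curious, that "'right' is less than 'left'" error smells like the full-resolution inpaint code cropping to the mask's bounding box when the mask never actually reached the backend (which is what a meddling browser extension would cause): an empty mask has no box to crop to. A sketch of the kind of guard involved, using Pillow (the padding value is illustrative):

```python
from PIL import Image

def mask_crop_box(mask, pad=32):
    """Padded bounding box of the painted (nonzero) region of a mask.
    Returns None for an empty mask -- the case that otherwise produces
    an inverted/degenerate crop and a "'right' is less than 'left'"
    style ValueError downstream."""
    box = mask.getbbox()          # None when no pixel is nonzero
    if box is None:
        return None
    left, top, right, bottom = box
    return (max(left - pad, 0), max(top - pad, 0),
            min(right + pad, mask.width), min(bottom + pad, mask.height))
```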
 
It could be prompts, mate. Try this: you can enter what you want and it'll translate it into something more understandable for the AI. Pretend the AI is completely fuckin' retarded and you gotta spell out every itty bitty detail.
I abuse the fuck out of it.
Edit: Don't be afraid to fuck around with the sampling, steps, denoising, yadda yadda.
This advice helped a lot, though it still struggles with getting human anatomy right. Aside from stuff like hands, which are understandably hard for the AI, it just doesn't seem to understand how humans work yet. I've had a couple of amazing images ruined by a limb being contorted in some strange way.

More abstract images, landscapes, etc are all amazing.

All in all, it's still way better than the average DeviantArt coomposter, so I see why they're worried.
 
I've heard it's adblockers. Which I think I read ITT. You never really want any of that shit on for localhost anyway; I dunno why it's not the default for a lot of plugins.


Have some meat world evolution:

That's supposed to be the same seed as the hydralisk one so idk what the fuck is going on; my seeds seem to become unpredictable (with that specific model at least? which I realised isn't actually the full 1.4 but that hasn't been relevant yet) every time I reload checkpoints.
...but I just now updated webUI and now it's acting even weirder with that one, so I think that's the last I'll be able to play with this prompt. I give up, it was probably luck that it was working at all.
Edit: I think I might have figured it out but yeah, these'll always be non-reproducible so my comparison was broken.
 
Some amateur animator is shitting their pants after watching Tyranid-infested Mars.

Don't sell yourself short, fren.
 
So I installed everything and I have to say, while the technology is impressive, I don't see much use for this as a generator of hentai.

The anime-based model packs produce images that are all literally just one female character. If you're looking for actual interaction between two characters, it's not really going to work. I also found that about 90% of the images generated have anatomical issues or other uncanny-valley issues, like really dead-looking faces. Very few actually look like some of the good examples found in this thread. If you want to make really generic-looking images of anime girls standing around looking pretty, then sure, knock yourself out, but it's pretty limited and requires a lot of trial and error.
 
Got any examples? It doesn't have to be coomer shit, just generally what you described in being limited.
 
Null raised a future issue with the trustworthiness of pictures: faking pictures of individuals in the act of committing crimes, nasty shit, etc.
There's a potential stopgap solution here I want to raise that could prevent this for a while, potentially letting us keep verifiably real images after this develops further, using a technology that already exists:
Photographic film.
Film is still produced today, and as a medium it won't be going anywhere for many more decades. An artist wants a choice between oil on canvas and a Wacom tablet.
Why film? The negatives. You have physical evidence of your image. You could feasibly generate an image and take a picture of a screen, but the resolving power of even low-end consumer-grade film will show the pixels of the screen in a high-quality scan, or with a loupe on the negative itself. My own experiments making enlargements of images from various digital screens, even ones with high pixel density, show this.

In the dark future we might see a serious photojournalist running around with a 40+ year old SLR for a verifiable image. Or a sperg with a point and shoot.
Maybe Cyberpunk 2020 wasn't that far off with microfilm cameras inside your cybereyes. I'm curious about your thoughts on this.
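The screen-pixel claim is actually checkable: a regular pixel lattice shows up as sharp spikes at its spatial frequency in a Fourier transform of the scan, whereas film grain is broadband. A toy sketch of the idea in NumPy (the images and the period-4 "pixel pitch" are synthetic stand-ins, not real scans):

```python
import numpy as np

def grid_score(img, period):
    """Spectral magnitude at the pixel-grid frequency relative to the
    median magnitude (the broadband floor). A large score means a
    periodic lattice -- i.e. a re-photographed screen -- is present."""
    f = np.abs(np.fft.fft2(img - img.mean()))
    k = img.shape[0] // period                 # frequency bin of the lattice
    return max(f[0, k], f[k, 0]) / np.median(f)

rng = np.random.default_rng(0)
n = 256
grain = rng.normal(0.0, 0.05, (n, n))          # stand-in for film grain
yy, xx = np.mgrid[0:n, 0:n]
lattice = 0.05 * ((xx % 4 == 0) | (yy % 4 == 0)).astype(float)

straight_shot = grain              # scene photographed directly
screen_shot = grain + lattice      # photo of a display: pixel grid leaks in
```

The screen shot scores far higher than the straight shot at the lattice frequency, which is the same structure you'd see with a loupe on the negative.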
 
So I installed everything and I have to say, while the technology is impressive, I don't see much use for this as a generator of hentai.

The anime-based model packs produce images that are all literally just one female character. If you're looking for actual interaction between two characters, it's not really going to work. I also found that about 90% of the images generated have anatomical issues or other uncanny-valley issues, like really dead-looking faces. Very few actually look like some of the good examples found in this thread. If you want to make really generic-looking images of anime girls standing around looking pretty, then sure, knock yourself out, but it's pretty limited and requires a lot of trial and error.
Give it time; the technology has only been in circulation for a month or two. Assuming hosting sites don't start cracking down on unsanctioned models, in a year or two's time we're gonna see some terrifyingly convincing shit. I remember barely a month ago when Midjourney was at the forefront of the technology, and Midjourney couldn't even do faces.
 
In the dark future we might see a serious photojournalist running around with a 40+ year old SLR
That shit still happens every now and then. There are still those types that don't consider you to be a real photographer unless you shoot film. You'll pry the 35mm film from their cold dead hands.
 
Anyone know if there's a way to queue up a list of seeds? I'm doing a bunch of low-iteration images and picking good ones to iterate on. Currently I'm pulling them in one at a time from file names.
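The stock webUI has an X/Y plot script where (if memory serves) you can paste a comma-separated seed list into a Seed axis, which is effectively a queue. For getting the list out of the file names automatically, something like this works (assuming the usual index-seed naming like 02119-1234567890.png; adjust the regex if your pattern differs):

```python
import re
from pathlib import Path

def seeds_from_filenames(folder):
    """Pull seeds out of webUI-style output names ('00042-1234567890.png').
    Returns them in filename order, ready to join as '1234567890, ...'."""
    pat = re.compile(r"^\d+-(\d+)\.(?:png|jpg|jpeg)$")
    seeds = []
    for p in sorted(Path(folder).iterdir()):
        m = pat.match(p.name)
        if m:
            seeds.append(int(m.group(1)))
    return seeds

# print(", ".join(map(str, seeds_from_filenames("outputs/txt2img-images"))))
```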
 