Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

I added a /g/ friend's tips for the NovelAI model, if anyone needs them.
https://rentry.org/robs-novel-ai-tips

Also added to OP.

Thoughts on the invisible Stable Diffusion watermark? I'm unsure which string in the code it is, so that I could remove it and contribute the fix to the OP. I read that Stable Diffusion has a variety of safety checks along with said watermark.

With a little coding, an uncucked, reverse-engineered Stable Diffusion repo would be possible.
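For what it's worth, the stock CompVis txt2img script embeds its watermark with the invisible-watermark package. Here's a rough sketch of checking an output image for it, assuming the "StableDiffusionV1" payload string and the 'dwtDct' method that repo uses (verify against your own checkout):

```python
# Hedged sketch: decode the invisible watermark the stock CompVis scripts embed.
# Assumes the payload "StableDiffusionV1" (17 bytes = 136 bits) and the 'dwtDct'
# method; pip install invisible-watermark opencv-python.
import cv2
from imwatermark import WatermarkDecoder

img = cv2.imread("output.png")  # BGR array, as the decoder expects
decoder = WatermarkDecoder('bytes', 136)
payload = decoder.decode(img, 'dwtDct')
print(payload.decode('utf-8', errors='replace'))
```

If that's really all it is, "removing" it should be as simple as skipping the encoder call in the generation script.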
 
Here's another illegal use case for Stable Diffusion I just thought of: forging signatures on checks. All kinds of potential bank fuckery, actually. You could probably use it to print money too.

Wait, I remember hearing about how this can be easily defeated, but I don't remember the specifics. Like with AI art, where any trained artist can see that something is AI generated, I think people who handle signatures can find little imperfections in the signatures that tell them a robot did it.
 
There are reports of full power usage causing damage to GPUs; it's recommended you add "--medvram" to your command arguments for Stable Diffusion if you haven't already.
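If you're on the AUTOMATIC1111 webui (an assumption; other forks take flags differently), the flag goes in the launcher script, e.g.:

```bat
:: webui-user.bat -- hedged example, assuming the AUTOMATIC1111 launcher.
:: --medvram trades some speed for lower VRAM use.
set COMMANDLINE_ARGS=--medvram
call webui.bat
```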

I did what the guide mentioned, slightly tweaked. Personally I don't mind a 10% speed loss; my RTX 2080 has 22 GB of memory to work with, and I've personally never peaked above 80%.

The induced damage comes from insufficient cooling and using consumer cards.

Also, baseless fearmongering is a very real possibility, since Jewgrounds, Futfagginty and Getty banned AI art due to MUH ETHICS.
 
Wait, I remember hearing about how this can be easily defeated, but I don't remember the specifics. Like with AI art, where any trained artist can see that something is AI generated, I think people who handle signatures can find little imperfections in the signatures that tell them a robot did it.
Fake signatures are usually mechanically made. A good test is holding the signed check upside down and comparing it to a verified signature.

For robotic signatures, though, the main sign is there's going to be an ink dot at the start and end of each line, and a more uniform consistency with the strokes across the signature.
 
Fake signatures are usually mechanically made. A good test is holding the signed check upside down and comparing it to a verified signature.

For robotic signatures, though, the main sign is there's going to be an ink dot at the start and end of each line, and a more uniform consistency with the strokes across the signature.
Isn't the whole point of Stable Diffusion that it's virtually indistinguishable from the real thing, or is that not really true?
Also, baseless fearmongering is a very real possibility, since Jewgrounds, Futfagginty and Getty banned AI art due to MUH ETHICS.
Newgrounds' entire design ethos is based around competition, complete with cash prizes on occasion. They treat art, animation and games as a sport. Why should a track runner be forced to compete with a motorcycle?

You can still post your Stable Diffusion pictures here in this thread all you want; it's not like anybody could or would stop you.
 
Isn't the whole point of Stable Diffusion that it's virtually indistinguishable from the real thing, or is that not really true?
Are you assuming people would print out fake checks, or that people would upload photos of fake checks? Because the latter could be avoided by only allowing live photos of the check, which I believe is something most banks already do.

Newgrounds' entire design ethos is based around competition, complete with cash prizes on occasion. They treat art, animation and games as a sport. Why should a track runner be forced to compete with a motorcycle?

You can still post your Stable Diffusion pictures here in this thread all you want; it's not like anybody could or would stop you.

Speedrunners solved this issue years ago: have all participants livestream their progress during the competition to show they're not just using AI to generate their work.

And let's not pretend that this is too demanding a task. If teenagers can livestream a Minecraft speedrun using a laptop with Intel graphics, an artist can easily record their Photoshop window and put it on Twitch or YouTube.
 
There are reports of full power usage causing damage to GPUs; it's recommended you add "--medvram" to your command arguments for Stable Diffusion if you haven't already.
Undervolt your GPU. Underclocking a bit also works along with it; be mindful of temperatures. Get MSI Afterburner.

Watercoolinglets stay losing.
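If you'd rather not run Afterburner, capping the board power from the command line does a similar job; a hedged example (the wattage is made up, check your card's supported range first, and it needs admin/root):

```
# query the card's current/default/max power limits
nvidia-smi -q -d POWER
# cap board power to 180 W (example value only)
nvidia-smi --power-limit=180
```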
 
Has anyone else had trouble getting inpainting to work? Each time I feed in an image, it just returns the same image back, no matter what settings I change or how I draw the mask. Seems to happen on each model I try, too. When using "inpaint at full resolution", I also get: ValueError: Coordinate 'right' is less than 'left'. Not sure if it's an issue with my installation or a module or what.
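Haven't hit that one, but as a sanity check you can drive inpainting straight through the diffusers library instead of the webui; a rough sketch, assuming the runwayml inpainting checkpoint (swap in whatever you actually have):

```python
# Hedged sketch: inpainting via Hugging Face diffusers, bypassing the webui,
# to test whether the model itself inpaints. Checkpoint name is an assumption.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("input.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

out = pipe(prompt="a red scarf", image=init, mask_image=mask).images[0]
out.save("inpainted.png")
```

If that repaints the masked area fine, the problem is likely in the webui's mask handling rather than the model or your install.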
 
I think in terms of actual artistic use, from how I've seen artists draw, they usually open up some sort of layered program like Krita and then do different things in different layers.

A potential marketable quality of this would be to have the program replicate the same thing. Like, integrate it directly into Krita, then have it generate the prompt layer-by-layer. This would allow a human artist to go in and clean up the weird shit.

I'm running blank prompts (masterpiece, highest quality, absurdres, (sharp focus:1.2), by __artist-surreal__), which rendered this. Notice that the stick or whatever on the right blends into the hair. That's something a human could correct with layers.

View attachment 3747298
I've been using this for a few days in Krita
 
Just a few more wallpapers I've been working on. All made with SD 1.5 and just text2img. Pretty impressed with what can be made without even getting into img2img or inpainting.

00019-4213978399-(ultra-realistic), ((masterpiece)),(quality),(highres), ((4k)), ((DLSR)), ((H...png
00027-1775726770-(((oil painting on canvas, masterpiece, best quality, 4k))), ((cabin in snowy...png
00031-745513512-(((painting, masterpiece, best quality, 4k, village))), ((field, clouds, trees...png
 
Speedrunners solved this issue years ago: have all participants livestream their progress during the competition to show they're not just using AI to generate their work.

And let's not pretend that this is too demanding a task. If teenagers can livestream a Minecraft speedrun using a laptop with Intel graphics, an artist can easily record their Photoshop window and put it on Twitch or YouTube.
My point is that there is no reason to get mad about Stable Diffusion being banned from a site that treats art as a competition, since autonomously generated images aren't a demonstration of the skill or ability of the author. Why would you care if you're not invested in that?
 
Most people think Kiwifarms should be shut down for "LiTteRaLlY K͕͚̭͎͕̯̰̦̳̮̙̘I̠̳̘̖̝͉̟̺̹̹͕̹̪̙͎̺̱L̞̱̼̮͎̟͚̙̩͓L͚̘͕̜͇͕̩͖̺̻̜͚̱͕͚͔̺ͅͅI̱͍͈̗̱̻̱N͉̗̤͇̙̭̠̦̬̯̭̻͇͈G̼͓̥̣̤̩͇̫̩̮̩̬͈ͅ TRANSFOLK!!!".



I think Kiwifarms should be shut down because it's been overrun by coombrained anime degenerates and vtuber simps.
 
Most people think Kiwifarms should be shut down for "LiTteRaLlY K͕͚̭͎͕̯̰̦̳̮̙̘I̠̳̘̖̝͉̟̺̹̹͕̹̪̙͎̺̱L̞̱̼̮͎̟͚̙̩͓L͚̘͕̜͇͕̩͖̺̻̜͚̱͕͚͔̺ͅͅI̱͍͈̗̱̻̱N͉̗̤͇̙̭̠̦̬̯̭̻͇͈G̼͓̥̣̤̩͇̫̩̮̩̬͈ͅ TRANSFOLK!!!".



I think Kiwifarms should be shut down because it's been overrun by coombrained anime degenerates and vtuber simps.
You ever notice how none of the buildings in the backgrounds of these have doors?
 
There are reports of full power usage causing damage to GPUs; it's recommended you add "--medvram" to your command arguments for Stable Diffusion if you haven't already.
As a poorfag, is there a decent chance this will still cook my GPU? I can't afford the insane card prices these days if I George Bush this whole thing.
 

Attachments

  • 1666409778089.png (213.6 KB)
I've been using this for a few days in Krita
I downloaded that, and I already have a base understanding of drawing, yet I'm simply too lazy to learn Krita to make use of this. I wish they'd release a plugin like this for Clip Studio Paint. Actually, I did some googling, and it looks like they have, but only the Japanese copies of CSP can use plugins? IDK, I'll just draw it in CSP and then put it in Krita.
 
This is 100% being done already for instrumental music. IIRC some lawyer was trying to use an algorithm to copyright-claim every melody conceivable as a PR stunt to end copyright sharks. Some guy on YouTube has trained a machine to generate djent songs and didn't even need a neural net. Music is all patterns with pretty clear-cut rules, and it's all highly iterative/branching, which is the stuff AI is actually good at. If anyone knows of any open-source stuff for doing this I would be interested in checking it out. Or even just good sound libraries/software; I've been thinking of getting into programmatic jam-track creation so I can work on composition more and then get baked and solo over top.

Using AI to make fucking djent is maybe the first and foremost reason to ban AI technology. That and the Skynet future we're careening towards.

This shit won't end well.
 
Any recommendations for the sampling steps or seed?
Steps are something you need to experiment with; there are usually diminishing or no returns beyond a certain point, but that point depends on the sampler. Euler a/Euler is usually good around 30 but might improve up to 50 or 60. DPM adaptive is slow but can get great results with just 10 or 20 steps, and usually doesn't improve beyond 40 or so.

Seeds are something you gotta mine for; nobody can guess what seeds will work well with a given prompt. It's the RNG seed used for the noise generation, etc., from which your image is resolved. Do batch jobs to get a bunch of outputs (probably with low-ish step counts, for speed), and if you like one in particular, copy its seed and try more steps/tweaks on that.
I highly recommend turning on "save txt files with full parameters" (or whatever it's called) in the settings tab so that you can easily grab old seeds/prompts, although I think that's less necessary now that the History tab has been implemented.

CFG scale is more what you should be thinking about fiddling with. 5.5 or 11 are good loose values; I also usually try 20, 25, 28 and even 30 with carefully composed prompts. Results can vary a lot by sampler as well; often there are a couple of key points where different CFG ranges will give you distinctly different images, and these can be similar between samplers but converge at different scales (Euler a is fairly linear, Euler gets stuck at the shallow end, and DPM adaptive speedruns convergence while also being great at sorting out anatomy, to use the examples I gave; DPM2 Karras and others can also give great results depending on style).

To find these sweet spots you can try X/Y plots with whatever combination of steps, CFG scales, or samplers you want to compare. Enter comma-separated values or ranges you want to try; for example, CFG = "5,5.5,6,6.5,7" is the same as "5-7(+0.5)". For testing things like samplers or model checkpoints it's simply a list of sampler names (copy/paste from the radio list) or model ckpt filenames minus the ".ckpt".
If you have a seed entered it'll use that for all tests; if you have -1 entered it'll pick a random one and use that for all of them. If you check "keep -1 for seeds" it'll make them all random. That's less good for direct comparisons, but I actually use it a lot because I prefer X/Y grids over batch jobs when experimenting.
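If you'd rather script the sweep than click through the webui, the same seed-pinned CFG comparison can be done with the diffusers library; a rough sketch, where the model ID, prompt, and values are all placeholder assumptions:

```python
# Hedged sketch: pin one seed and sweep CFG, like a row of an X/Y plot.
# Model ID, prompt, step count, and CFG values are examples, not recommendations.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a cabin in a snowy field, oil painting, masterpiece"
seed = 1234  # reusing one seed pins the initial noise, like the webui seed box

for cfg in (5.5, 11.0, 20.0):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, num_inference_steps=30, guidance_scale=cfg,
                 generator=generator).images[0]
    image.save(f"cfg_{cfg}.png")
```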
 
I've been playing around with NovelAI's hosted version because I had some free credits, and I have to say I'm not that impressed. It seems to struggle a lot with anything that hasn't (I'm guessing) previously appeared in one of the Chinese cartoons it was trained on. Anything involving an action, or requiring multiple objects related to each other in some way, seems very difficult: for example, to get X in Y, or for Z to put X in Y. Is this a case of prompt magic, or what? Training the model specifically on pre-existing images like the one you want would presumably do much better, of course.
 