Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

I've been trying to run Stable Diffusion myself, but when I run webui-user it always ends with error code 1, telling me that Torch is not able to use the GPU. This doesn't make sense to me, as I have an Nvidia GPU, a GeForce GTX 1060. I must admit to being coding illiterate, so if someone here could tell me what's going on I'd be very grateful.
 
I know I called myself lazy, but there are a lot of people who are lazy or just not paying attention in this thread. The number of people who use the wrong bot for the wrong thing and then blame the bot is exasperating. From basic research I've determined:

Dall-E is for generally anything, but it is extremely limited access and highly censored.
Midjourney is for fantasy, horror, and psychedelic things, mostly in a traditional or digital painting style. It is also limited access.
Craiyon is Dall-E but worse at everything except cartoons and vidya, where it strangely excels but has an absolute inability to draw faces. It is less censored than Dall-E and free.
Stable Diffusion is for photorealism and paintings, and it is very bad at drawn mediums like anime unless you put in specific modifiers like artist names and styles. It is currently uncensored and free.
NovelAI is for weebs and was built off of Danbooru, so it works off Danbooru's tags with a specific skew toward being women-focused, but it is worse at mimicking copyrighted works than the others. It also seems to be uncensored, but it is not free unless you get the leak and run it on your own hardware.

Perhaps with this guide people will start using the correct bot for the right thing and stop fighting over 'it's only good for animu' / 'oh yeah, but it's bad at animu'. Also: it doesn't hurt to take some time to read up on tutorials and figure out a bot's language before you judge the quality of the bot. It's not the bot's fault if you can't speak its language and it gives you an image of gibberish because you spoke gibberish to it.
 
Newgrounds' entire design ethos is based around competition, complete with cash prizes on occasion. They treat art, animation and games as a sport. Why should a track runner be forced to compete with a motorcycle?
I actually agree with this wholeheartedly. I feel like AI should be used in the art process, but I think it's legitimately cheating to synthesize a picture and then put it up against art made naturally. It'd be like if I had a robot leg and ran track with normies. By all means, ban AI art in competitions (unless it's an AI art competition thing); I just think that's the correct choice.
 
If you have the leak and webUI (naifu has been left in the dust by this point)
In the naifu front-end there is a quality tags field to fill in. Is that the same as negative prompts? I had trouble getting that torrent to work correctly; the front-end loads, but it wants an older version of CUDA than I have installed (or something). Is there a reason to believe it could be stealth pickled, or is that just a general warning of the possibility?
Screenshot from 2022-10-22 13-21-57.png
Here is the first prompt (tiedye, wikihow dog) I used before I tried to recreate the benchmark image, realized there was something wrong with my setup, and had to fine-tune. Even when it's broken, it's still pretty cool.
tiedye_dog.png
I like this space horse. A horse wearing an astronaut suit is not something I ever considered before. I saw someone else post a picture of an astronaut riding a horse, so I tried to make one too.
smrqixq5gm.png
In relation to my earlier post, this is the closest to Cartman I could manage. It's just in time for the Halloween special.
cartman_nah02.png
I'll figure it out eventually.
 
I've been thinking, will the ease of making inauthentic/faked digital information eventually turn people toward analog media, since it's harder to edit convincingly and will be seen as more trustworthy? Maybe the ML development will be kind of a blessing in disguise, if people assume that everything on the Internet is fake and become more interested in real-life experiences, handmade objects, and human interaction?
I don't know; 10 years ago it would have been unbelievable to hear what's happening nowadays, and it's pretty scary to think what the world will be like in 2030. No prediction is ridiculous enough anymore.
 
I like this space horse. A horse wearing an astronaut suit is not something I ever considered before. I saw someone else post a picture of an astronaut riding a horse, so I tried to make one too.
A horse in a full astronaut suit would look ridiculous and hilarious.
 
In the naifu front-end there is a quality tags field to fill in. Is that the same as negative prompts? I had trouble getting that torrent to work correctly; the front-end loads, but it wants an older version of CUDA than I have installed (or something). Is there a reason to believe it could be stealth pickled, or is that just a general warning of the possibility?
I mean, you're replying to me telling you not to use naifu, so go back a step.
I've been trying to run Stable Diffusion myself, but when I run webui-user it always ends with error code 1, telling me that Torch is not able to use the GPU. This doesn't make sense to me, as I have an Nvidia GPU, a GeForce GTX 1060. I must admit to being coding illiterate, so if someone here could tell me what's going on I'd be very grateful.
So both of you should make sure you've followed this: https://rentry.org/voldy#-guide- (Edit: and the troubleshooting section.)
A 1060 is kinda old, so make sure you have "set COMMANDLINE_ARGS=--medvram --precision full --no-half" in your webui-user.bat. A batch file is just a text file, so the answer is Notepad if you don't know how to edit a .bat. You should know this, but I've had to tell people.
If you're still getting Torch/CUDA issues, go to the PyTorch website and follow the installation instructions there. Those will probably include CUDA toolkit setup instructions. webUI should probably do that for you if you've installed it right, but I dunno, I already had everything set up for ML shit.
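For anyone unsure what they're supposed to be editing, the stock webui-user.bat is only a few lines; with those low-VRAM flags added it looks roughly like this (the surrounding lines are the file's defaults as I remember them, so treat it as a sketch, not gospel):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem Low-VRAM flags for older cards like the GTX 1060:
set COMMANDLINE_ARGS=--medvram --precision full --no-half

call webui.bat
```

Save it and launch the same way as before; the flags get passed through to the launcher.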
 
I've been thinking, will the ease of making inauthentic/faked digital information eventually turn people toward analog media, since it's harder to edit convincingly and will be seen as more trustworthy? Maybe the ML development will be kind of a blessing in disguise, if people assume that everything on the Internet is fake and become more interested in real-life experiences, handmade objects, and human interaction?
I don't know; 10 years ago it would have been unbelievable to hear what's happening nowadays, and it's pretty scary to think what the world will be like in 2030. No prediction is ridiculous enough anymore.
Absolutely. In a world where distributed manufacturing is advanced enough to let anyone create and assemble, say, high-quality furniture, coveted will be the man who learns a skill and makes something entirely by hand.
I feel this would be especially true in a post-scarcity scenario in which anything could be created by machines. There would be an intrinsic value placed upon people who learn a skill "the old-fashioned way".
I feel the same would go for human experiences: traveling somewhere with close friends instead of entering a convincing simulation with them on the internet. Forgoing experiences that could only be shared via virtual reality, of course.
 
Anyone know if there's a way to queue up a list of seeds? I'm doing a bunch of low-iteration images and picking good ones to iterate on. Currently I'm pulling them in one at a time from file names.
Yeah, use Seed on one axis of an X/Y plot (the other axis can be 'nothing', and you can ignore the resulting grid itself) with a comma-separated list of seeds.
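If you're grabbing those seeds out of webUI's default output filenames (they look like 00006-1006638788.png, i.e. index-seed), something like this will build the comma-separated list for you. Just a sketch: it assumes the default filename pattern, which is configurable in settings.

```python
import re
from pathlib import Path

def seeds_from_filenames(folder):
    """Collect seeds from filenames shaped like '00006-1006638788.png'
    (the webUI default, index-seed) and return them comma-separated,
    ready to paste into the Seed axis of an X/Y plot."""
    seeds = []
    for path in sorted(Path(folder).glob("*.png")):
        match = re.match(r"\d+-(\d+)$", path.stem)
        if match:
            seeds.append(match.group(1))
    return ", ".join(seeds)
```

Point it at your outputs folder and paste the result straight into the Seed field of the X/Y plot script.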
 
I don't like where this is going.
Most of you nerds are using this to generate females and obvious masturbatory material.
This shit should be haram, and you should be pushed by society and everyone around you to chase real females, in real life, in clubs, on the streets, at the job, and everywhere else, instead of using your GPUs for fucking anime waifus.
Stop being such fucking weebs and make an effort to wake up.
 
I don't like where this is going.
Most of you nerds are using this to generate females and obvious masturbatory material.
This shit should be haram, and you should be pushed by society and everyone around you to chase real females, in real life, in clubs, on the streets, at the job, and everywhere else, instead of using your GPUs for fucking anime waifus.
Stop being such fucking weebs and make an effort to wake up.
Read the OP, kiwifarms is the PREMIER place to discuss use of AI imaging for non-coomers.
 
I've been thinking, will the ease of making inauthentic/faked digital information eventually turn people toward analog media, since it's harder to edit convincingly and will be seen as more trustworthy?
I think it'll force people to find the truth themselves, and motivate people to use AI to defeat AI. This would be considered the "good ending" IMO since it doesn't take much money to find your own information in America, and AI being seen as a tool (and only a tool) can only help humanity. I just hope they don't do some dumb shit and make AI robots that can turn against us.
 
The anime-based model packs produce images that are all literally just one female character. If you are looking for actual interaction between two characters, it's not really going to work.
I don't know if it helps with interaction (I'm still having a bit of trouble getting fight scenes), but generating multiple characters seems to be helped a lot by adjusting the size of the generated image. The basic 512x512 can occasionally put in two, but I've seen it happen more frequently if you adjust the image into a landscape, which the NAI model supports well enough without using the hires fix. I think it's a legitimate criticism that it mostly defaults to just anime women standing around, though. That's just the nature of the beast due to the training data. Hopefully more advanced models in the future will be capable of more variety through more varied data.
 
I know I called myself lazy, but there are a lot of people who are lazy or just not paying attention in this thread. The number of people who use the wrong bot for the wrong thing and then blame the bot is exasperating. From basic research I've determined:

Dall-E is for generally anything, but it is extremely limited access and highly censored.
Midjourney is for fantasy, horror, and psychedelic things, mostly in a traditional or digital painting style. It is also limited access.
Craiyon is Dall-E but worse at everything except cartoons and vidya, where it strangely excels but has an absolute inability to draw faces. It is less censored than Dall-E and free.
Stable Diffusion is for photorealism and paintings, and it is very bad at drawn mediums like anime unless you put in specific modifiers like artist names and styles. It is currently uncensored and free.
NovelAI is for weebs and was built off of Danbooru, so it works off Danbooru's tags with a specific skew toward being women-focused, but it is worse at mimicking copyrighted works than the others. It also seems to be uncensored, but it is not free unless you get the leak and run it on your own hardware.

Perhaps with this guide people will start using the correct bot for the right thing and stop fighting over 'it's only good for animu' / 'oh yeah, but it's bad at animu'. Also: it doesn't hurt to take some time to read up on tutorials and figure out a bot's language before you judge the quality of the bot. It's not the bot's fault if you can't speak its language and it gives you an image of gibberish because you spoke gibberish to it.
Just say fuck it and merge all the models. Except the hardcore porn ones. Used solo, they're harmless; merged, several three-letter agencies would want to have a word with you.

I don't like where this is going.
Most of you nerds are using this to generate females and obvious masturbatory material.
This shit should be haram, and you should be pushed by society and everyone around you to chase real females, in real life, in clubs, on the streets, at the job, and everywhere else, instead of using your GPUs for fucking anime waifus.
Stop being such fucking weebs and make an effort to wake up.
Generating fap bait got boring; I've moved on to improving my own art and cheating concept ideas for my clients (lmao). This fucking thing has saved me hours of actual work, and ironically it has increased my effort, since now I have to merge my ideas with the AI's ideas.

Honestly, out of the 40 threads across various websites I'm in, this is the only one that hasn't devolved into OOOOOOOOOOHHH HOW DID YOU DO THAT and endless cooming. The future of AI on Kiwifarms needs to be split into two or three threads.

-Technical thread, for people who want to do it / troubleshooting / how to generate XYZ consistently and so on.

-A lone sharing/request thread, to keep it contained. People get egotistical when they don't share prompts, and you can see that on a lot of the porn sites going nuts over AI generation.

-Actual discussion thread where people air their grievances and talk about the current drama (the CEO talking shit, and censorship), community-watch style.
 
After working for a few hours I made this. The issue I'm running into atm with img2img is that it softens the image each time you run it, so eventually you end up with something too soft, unless that's your end goal.

Start;
View attachment 3757076


Finish;
View attachment 3757077
There was/is a bug where, if you enable xformers (default on, if supported), it will degrade the output each time. As of a couple of days ago they were saying it could not be fixed.

00006-1006638788.png
They say AI cannot make art; how do they explain this?
 
I had to uninstall and reinstall a couple of times to get through the WinErrors, but the guide did mention that in the troubleshooting steps, so that was no real issue. 1060 3GB here; I had a go at getting it to generate some fantasy forests with creepy monsters, and I love it. The creepy Lovecraftian horror in the sky was supposed to be a dragon made of plants like Mordremoth, but I wanted something otherworldly and it delivered that. This will be a great tool for concept art for storytelling and game jams.
 

Attachments

  • tmppqrjpaqy.png