Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

But these things are basically photo-mashing other people's work to get these results
That's really not quite how it works, and besides, all artwork is derivative. No artist drawing big-boobed anime girls came up with the art style and the concept of big boobs in a vacuum all by himself. He saw it somewhere else. Can you draw a sunset without ever having seen a sunset? We don't work all that differently. Everything an artist draws is influenced by the experiences that mushy thing in his brainbox collected over his lifetime, including looking at the art of other people. It'd be highly dishonest to claim that all human art is inherently original, especially with such strongly and clearly defined art styles as anime. And art is also highly evolutionary; it carried through all of human history and got where it is by artists being influenced by what they saw their predecessors do.

You might even eventually make an argument for the AI generating art with more originality and diversity than a human ever could, because of the sheer number of collected lifetimes of experiences, perspectives and knowledge it draws on. These things go into philosophical territory, and I think a lot of the intuitive revulsion some humans feel toward the creations of machines still comes from the mindset that humans are something inherently special, unlike anything else. What if that simply isn't the case?
 
The only people I'd say are in danger of losing work to AI are concept artists for big companies, which will be more willing to use an AI to create their concept art for movies and games, or to hire a prompt wrangler to tell the AI what to do, much cheaper and faster than an actual artist could work. People who use AI instead of commissioned art were never part of the commissioning market in the first place, either because they don't care about working with and supporting artists or because they couldn't afford commissions. The consumer loss is negligible. Besides, most of the people I see using AI art are doing it for fun or are brainless coomers who need to see 600 AI-generated eldritch holes on anime girls in 3 seconds in order to get off.
I think there are many different tiers of commissioned art and it is a mistake to lump it all together.
Skilled or recognized artists have nothing to fear from this; they sell not just their art but also their signature on it.
Someone selling their original drawings for their Batman cover is not going to be affected.
Neither are real artists who do commissions, because I think people buying those are not just interested in a pretty picture to hang on the wall but also in knowing that it was drawn by their favorite artist, and that it is signed.

But there are also other, lower tiers of commissions. And these guys are gonna get fucked.
These are the unknown "artists" who just do cheap commissions of "pretty picture" or "dog furry fucking rabbit furry in oval office".
For those, why should someone pay them $50 when they can just pay $1 to some guy in India to have an AI create the picture for them?

TL;DR: If you do commissions that sell based on your name and fame, you are going to be fine. If you are a no-name that just does furry porn commissions, you just got replaced by a shell script.
 
How did you run my user pic through it? Is it not just text prompts, or can you submit actual images to change style?

Also fashionable af ngl
You can submit images, but I'll warn they ain't all winners.
Look at the weird-ass muppet hand.
 
I have no idea what docker is or how to use it. Is it easy to install this thing through that?
 
Trying this out and the art's a little surreal. Is my prompt too generic or does NovelAI need more training?

Positive prompt:
Code:
Schoolgirl running down a street wearing backpack with a loaf of bread in mouth

Negative prompt:
Code:
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry

Sampling steps: 20
Sampling method: Euler a
CFG scale: 12
Seed: -1
Here's advice so simple it's dumb: try adding keywords like "high quality", "highly detailed", "finely detailed", "masterwork" etc. to your prompt. It'll pull in vectors containing more high-quality sources. "portrait", "bust portrait" etc. also work surprisingly well for people. Also don't be shy about adding artist names, which are incredibly strong vectors in SD and will change the whole mood of the picture. "oil on canvas" etc. works well too. Capitalization doesn't matter; the AI has no concept of it. This also works in the inverse for the negative prompt, which, by the way, has no real limit/overhead by the nature of how it works. Be careful though, as everything with AI is subtle; sometimes you might pull in or block things you didn't really mean to, just because the AI made a tentative connection. Photography keywords like "zeiss", "35mm" etc. also work pretty well.
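
For example, something along these lines (just an illustration; the artist and style keywords are interchangeable with whatever fits your subject):
Code:
bust portrait of a grizzled old fisherman, weathered face, highly detailed, finely detailed, masterwork, oil on canvas, by Rembrandt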


The face is AI generated, the movement was made with DeepFaceLive. (Not my work)
Do commas have any effect on the prompt? I don't know if there's a difference between a long sentence versus strings demarcated by commas (tags?).

I think I've found three promising potential avatars for Corona-chan after Spooktober.

I also found what Jeffrey Dahmer would look like in anime:
 
Do commas have any effect on the prompt? I don't know if there's a difference between a long sentence versus strings demarcated by commas (tags?).
The part that actually converts the text into what the AI "understands" is really, really "dumb" - that's why the whole thing is efficient enough to run on midrange GPUs to begin with. (SD is first and foremost very efficient; it could certainly be better even with current tech.) NAI, AFAIK (I don't have an account), did some work on it, and as a result it understands simple grammar in short sentences better than the current version of the open-source SD model does, but similar changes will probably find their way into the next SD version. Short sentences definitely work with both and can sometimes work better than a list of tags separated by commas. The commas do have a noticeable effect in giving the AI better context to work with and should be added.

So <short description of scene>, <list of tags separated by commas> is a pretty good combo. The way this gets translated, the words at the beginning of the prompt will actually have a bigger influence, so it might be worth it to start with the tags and then add the sentence. A list of only tags without any grammar can work too, though, and what actually works best depends on the subject matter, so some experimentation is necessary.
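
As an illustration of that structure (the scene and tags here are just placeholders, swap in your own):
Code:
a schoolgirl sprinting down a rainy city street, 1girl, school uniform, backpack, rain, night, city lights, highly detailed, best quality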
 
Also, don't separate your lines with empty space; hit enter only when you're at the end of a prompt, then delete the blank line it generates. Otherwise it'll waste your time by making random shit. Leave the empty lines in only if you want to see what it spits out.
 
Do commas have any effect on the prompt? I don't know if there's a difference between a long sentence versus strings demarcated by commas (tags?).
I personally had better results using just tags to get a "finished" image where I'm specifying certain details, but sentences or phrases can help when you're roughly sketching out concepts or ideas. This is just my impression, though; I can't speak to how much of a genuine effect it has on the model's inner workings.

Sometimes things are just a crapshoot, too. Maybe it thought that having "loaf of bread" near "backpack" meant a bread backpack. Even a good set of tags or precise settings can spit out some bizarre things. So you just have to keep generating until it works.
Here are my first and second generations with a couple of tags, though my settings are slightly different (28 steps, CFG scale 11). The first looks more like a pickle than bread; AI only knows why.
(Attached: two generations with the tags "schoolgirl, backpack, loaf of bread in mouth, running, road, street, city", seeds 823071583 and 4247626261.)
 
Not sure about docker; this is what I used to install Stable Diffusion, and it doesn't require you to build it yourself.
I have no idea what docker is or how to use it. Is it easy to install this thing through that?
Here. It's retard proof

 
>be me
>hear about the AI art leak
>won’t this just lead to coomers making increasingly bizarre porn until a singularity is reached?
>check out /d/ to see if it’s already happening
>it is
This actually happens because the AI was trained on 512x512 pictures, so other formats can work but don't necessarily, as the AI basically "forgets" what it was doing midway through the picture. The current webui has a workaround for this.
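
If you want to see where that 512x512 actually lives, here's roughly what a bare-bones generation looks like with the open-source model through the huggingface diffusers library (just a sketch, obviously not what NAI runs on their end; the model name and settings are the public defaults and you'd need the usual torch/CUDA setup):
Code:
# Sketch only: the public SD 1.4 checkpoint via the diffusers library,
# not NovelAI's model or backend.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="schoolgirl running down a street, backpack, loaf of bread in mouth",
    negative_prompt="lowres, bad anatomy, bad hands, blurry",
    num_inference_steps=20,   # "Sampling steps"
    guidance_scale=12,        # "CFG scale"
    height=512, width=512,    # the resolution the model was trained at
    generator=torch.Generator("cuda").manual_seed(1234),  # fixed seed; -1 in the webui means random
).images[0]
image.save("out.png")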
 
Trying this out and the art's a little surreal. Is my prompt too generic or does NovelAI need more training?

Do commas have any effect on the prompt? I don't know if there's a difference between a long sentence versus strings demarcated by commas (tags?).

You need to use Danbooru tags demarcated with commas to use NovelAI effectively. Go to any SD general on 4chan for examples of how tagging should work.

This is a general template for how tag order should be arranged for solo focus images (like porno)
masterpiece, best quality.
<POSITION PROMPT>
<DESCRIBE SUBJECT>.
<DESCRIBE EMOTION>.
<DESCRIBE OUTFIT>.
<OTHER THEMES+CUM PROMPT>.
<LOCATION>.
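
Filled in, that might look something like this (purely an illustrative, SFW example; the tags are placeholders showing the order, not magic words):
Code:
masterpiece, best quality.
standing, looking at viewer.
1girl, long silver hair, red eyes.
shy smile, blushing.
school uniform, pleated skirt, thighhighs.
cherry blossoms, wind.
classroom, window.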
 