Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

I am kinda sick of the generic anime art style no.42069
We'll see how this plays out.
Animation will mostly stay human-made due to the stress and teamwork that it would take

Fun fact: you can make frames for animations. Use the same picture seed, then up the variation strength, and you get batches of hundreds of cel-shaded pictures of the same character walking, much like the flip books used in old animation styles. Right now I'm testing and training my AI to copy big names, and training it out of drawing useless appendages and then some. With very specific keywords, the time it takes my AI to make batches of cel-shaded pictures improved from 1 hour to 4 minutes.

Like any good abomination of nature, you have to train yours well; it will take some time.

I'm training on a booru with 640k pictures, and that's limiting. Private boorus reserved just for the AI's use would help your AI develop even faster.

With enough time, AI will do animation with little training. We are witnessing only the dawn of AI, the first version of it.

And it is beautiful when you see it all without the jank.
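The seed-plus-variation trick above can be sketched in miniature. This is a toy stand-in, not real diffusion, and every name in it is made up for illustration; it just shows why keeping one seed holds the frames coherent while the variation strength controls how far each frame drifts:

```python
import random

def make_frame(base_seed, variation_strength, size=8):
    """Toy stand-in for img2img, NOT real diffusion: blend a fixed
    seed-derived pattern with fresh per-frame noise. The two knobs
    mirror what the UI exposes: one seed, one variation strength."""
    base_rng = random.Random(base_seed)
    base = [base_rng.random() for _ in range(size)]
    noise = [random.random() for _ in range(size)]
    s = variation_strength
    # s = 0 reproduces the seed exactly; s = 1 is pure noise.
    return [(1 - s) * b + s * n for b, n in zip(base, noise)]

# A crude "walk cycle": same seed, ramping the variation per frame.
frames = [make_frame(42069, i / 10) for i in range(4)]
```

With strength 0 you get the seed image back exactly; small strengths give you near-duplicates that drift frame by frame, which is the whole flip-book effect.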
 
Some anon made this a while back on 4chan

 
Watching shitty artists seethe over AI art is very fun. Being an artist does not make you a living unless you're really good and willing to draw ANYTHING your commissioner tells you.

is there a public model/mirror of this running anywhere? i have poopoo pc :(
 
It's reassuring to know that not even AI can draw the eldritch abomination known as hands. And quite amusing to see the sheer amount of copium wafting from this thread about the 'true meaning' of art, and how infant-stage technology can never advance past where it is now. And that one weirdo who thinks AI-generated art is going to come to life and rape their nephews.

Sheer laziness and incompetence about how to use img2img is really all that's saving the world from a waifu'd or Halloween Allanon avatar now. Perhaps that's for the best. He cannot be more waifu than he already is.
 
This is not gonna be good for kids. Very slight editing in Paint 3D and I have this after about 30 minutes:

1666340799995.png
 
Watching shitty artists seethe over AI art is very fun. Being an artist does not make you a living unless you're really good and willing to draw ANYTHING your commissioner tells you.

is there a public model/mirror of this running anywhere? i have poopoo pc :(
Your best bet, if you want to run it on a cloud system, is probably using Paperspace, which /sdg/ recommends over just using Google's Colab. Here's a guide they have for it, though I haven't tried it myself.
 
I am kinda sick of the generic anime art style no.42069
We'll see how this plays out.
Animation will mostly stay human-made due to the stress and teamwork that it would take
Remember to use "by (studio/name here)" to get less generic shit. Also experiment with different types of shading methods; some artists never did shading. So here's a Kiwi bird drawn in the style of Ken Sugimori, for example. Be VERY descriptive: I forgot to type in brown feet.
So here's content for Game Freak to steal in their next two games, which are shat out yearly now.
IMG_20221021_113421_064.jpg
Angry Kiwi bird attacking you boob-thirsting degenerates, and he glows
IMG_20221021_145851_123.jpg
Halloween Avatar
Using the following prompt:
a cell shaded green glow with a smiley face and a thumbs up sign reading GLOWING INTENSIFIES under it
IMG_20221021_145810_882.jpg

The AI can't process English, just literal moonrunes and kanji.
 
I just wanted to share a few of my outputs after having spent a few hours with several models. It seems like the AI can compose complex scenes relatively well, but if you want to get the details right you still have to get hands-on with the images, while lower-detail styles like anime seem to be much more consistent in their results.

1.png
2.png
3.png
4.png
5.png
6.png
 
I'm not sure if this has already been mentioned in this thread or elsewhere, but I could absolutely see this opening up a market for people with the starting cash for the required hardware: making custom models for clients, specialized in whatever very specific subject they want, be it legal or illegal. Hell, it probably already exists. That's just part of new tech developing, though. People are going to use new technology for whatever they can get away with. The only way to even attempt to prevent that would be to heavily restrict it for everyone, which we've seen time and time again does not work in the slightest. It's just how the cookie crumbles. Just some thoughts on the subject.
 
This is a very good reason to keep your social media presence to an absolute minimum and never upload pictures of yourself, lest you risk people making fake pictures of you. I wouldn't be mad if this killed social media for good, but it's still kind of worrying that this kind of technology could easily be used to frame people for crimes they didn't commit.
 
Stable Diffusion cucks out and plans to follow in "Open"AI's footsteps, censoring models to prevent them from being used to "hurt people":

Why the Future of Open Source AI is So Much Bigger Than Stable Diffusion 1.5 and Why It Matters to You​

Daniel Jeffries
15 hr ago

1666368082018.png

The release of Stable Diffusion unleashed a tremendous amount of innovation in an incredibly short period of time.

But there is a reason we've taken a step back at Stability AI and chose not to release version 1.5 as quickly as we released earlier checkpoints. We also won't stand by quietly when other groups leak the model in order to draw some quick press to themselves while trying to wash their hands of responsibility.

We’ve heard from regulators and the general public that we need to focus more strongly on security to ensure that we’re taking all the steps possible to make sure people don't use Stable Diffusion for illegal purposes or hurting people. But this isn't something that matters just to outside folks, it matters deeply to many people inside Stability and inside our community of open source collaborators. Their voices matter to us. At Stability, we see ourselves more as a classical democracy, where every vote and voice counts, rather than just a company.

In the absence of news from us, rumors started swirling about why we didn't release the next version yet. Some folks in the community worry that Stability AI has gone closed source and that we'll never release a model again. It's simply not true. We are committed to open source at our very core.

Open source is where innovation comes from and there is a long history with software like Linux that shows it's the best way to deliver value to society as a whole. While closed AI systems have seen very limited use cases, innovators and entrepreneurs have already woven Stable Diffusion into an amazing array of potentially game-changing applications for American business such as prototype synthetic brain scan images that can drive medical research, on demand interior design, incredibly powerful Hollywood style film effects, seamless textures for video games, new kinds of rapid animation that can drive tremendous new streaming content, on-the-fly animated videos and books, concept art, plugins for Figma and Photoshop, and much more. Openness works because no single company or person can imagine all the possible ways to use a brilliant new technology.

Others worry that we plan to neuter the models to the point of uselessness and try to chase every edge case. We understand that it's impossible to solve for every edge case but we don't need to do that. What we do need to do is listen to society as a whole, listen to regulators, listen to the community.

We are forming an open source committee to decide on major issues like cleaning data, NSFW policies and formal guidelines for model release. This framework has a long history in open source and it's worked tremendously well. Open source AI needs to be guided by the same democratic principles. We also announced a prize of $200,000 for deep fake detection and we will release that software open source and free of charge to help society combat this abuse of machine learning.

So when Stability AI says we have to slow down just a little it's because if we don't deal with very reasonable feedback from society and our own communities then there is a chance open source AI simply won't exist and nobody will be able to release powerful models. That's not a world we want to live in.

We believe in open source AI. Help us set the groundwork to make sure we have a firm foundation to build on, so we and everyone else can release models that matter now and in the future.

Help us make AI truly open, rather than open in name only.
Source (Archive)

The blog post makes me think they got threatened, specifically:
Others worry that we plan to neuter the models to the point of uselessness and try to chase every edge case. We understand that it's impossible to solve for every edge case but we don't need to do that. What we do need to do is listen to society as a whole, listen to regulators, listen to the community.
So when Stability AI says we have to slow down just a little it's because if we don't deal with very reasonable feedback from society and our own communities then there is a chance open source AI simply won't exist and nobody will be able to release powerful models. That's not a world we want to live in.

The README for Stable Diffusion 1.5 also contains a lot of woke crap:

Misuse, Malicious Use, and Out-of-Scope Use​

Note: This section is taken from the DALLE-MINI model card, but applies in the same way to Stable Diffusion v1.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
That bit is literally copied from OpenAI.

Misuse and Malicious Use​

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
  • Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
  • Intentionally promoting or propagating discriminatory content or harmful stereotypes.
  • Impersonating individuals without their consent.
  • Sexual content without consent of the people who might see it.
  • Mis- and disinformation
  • Representations of egregious violence and gore
  • Sharing of copyrighted or licensed material in violation of its terms of use.
  • Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

Bias​

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of LAION-2B(en), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

Safety Module​

The intended use of this model is with the Safety Checker in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the CLIPTextModel after generation of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.

Environmental Impact​

Stable Diffusion v1 Estimated Emissions Based on that information, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.

  • Hardware Type: A100 PCIe 40GB
  • Hours used: 150000
  • Cloud Provider: AWS
  • Compute Region: US-east
  • Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): 11250 kg CO2 eq.
Source (Archive)
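As a sanity check, the card's emissions figure is at least internally consistent: power times time times grid intensity reproduces it. The 250 W board power and ~0.3 kg CO2/kWh for US-east below are my assumptions, not the card's:

```python
# Back-of-envelope check of the model card's CO2 figure.
# Assumed numbers (NOT from the card): an A100 PCIe 40GB draws about
# 0.25 kW under load, and the US-east grid emits ~0.3 kg CO2 per kWh.
hours = 150_000           # stated in the card
power_kw = 0.25           # assumption
kg_co2_per_kwh = 0.3      # assumption

energy_kwh = hours * power_kw              # 37,500 kWh
emitted_kg = energy_kwh * kg_co2_per_kwh   # ~11,250 kg CO2 eq.
print(f"{emitted_kg:.0f} kg CO2 eq.")
```

That lands right on the 11,250 kg the card reports, so whatever calculator they used, the arithmetic is the plain product of those three numbers.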
From the license (Archive):
Attachment A

Use Restrictions

You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national, federal, state, local or international law or regulation;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate personal identifiable information that can be used to harm an individual;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
- To provide medical advice and medical results interpretation;
- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
 
Last edited:
My wife gave me some prompt suggestions.
image 20.png
image.png


After an hour, I think it's trying to communicate.

I messed up and put ALL the negative prompts like this [[[uglymishapen_twistedeuropean_features_asian_features_6_fingers_three_arms_four_arms]]] and well..
2022-10-21-12-35-41-13-uglymishapen_twistedeuropean_features_asian_features_6_fingers_three_ar...png
2022-10-21-12-36-56-16-uglymishapen_twistedeuropean_features_asian_features_6_fingers_three_ar...png
2022-10-21-12-34-50-11-uglymishapen_twistedeuropean_features_asian_features_6_fingers_three_ar...png
2022-10-21-12-35-15-12-uglymishapen_twistedeuropean_features_asian_features_6_fingers_three_ar...png
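For reference: in the AUTOMATIC1111 webui syntax (an assumption on my part, the post never names the UI), each layer of square brackets divides a token's attention weight by 1.1, so even triple brackets are fairly mild. A toy calculator:

```python
def bracket_weight(prompt_fragment: str) -> float:
    """Toy calculator for AUTOMATIC1111-style bracket syntax
    (assumed UI): each surrounding [ ] level divides the token's
    attention weight by 1.1."""
    depth = 0
    while prompt_fragment.startswith("[") and prompt_fragment.endswith("]"):
        prompt_fragment = prompt_fragment[1:-1]
        depth += 1
    return round(1.1 ** -depth, 3)

print(bracket_weight("[[[ugly]]]"))  # 0.751
```

So [[[...]]] only cuts attention to about 75%, and gluing every negative into one underscore-joined blob means the model likely never saw the individual words at all.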
 
Stable Diffusion cucks out and plans to follow in "Open"AI's footsteps, censoring models to prevent them from being used to "hurt people":
They're putting restrictions on their AI outputting porn.

This AI that was supposed to put all the degenerate furfags out of business is now actively protecting them.
 
Who cares? Within minutes there'll be 30 YouTube tutorials on how to disable this censorship anyway.
They might just do what GIMP does and encrypt the code that restricts it.

This'll be interesting.

After all, how do they expect to control what anybody does with this software if they don't? Wouldn't want any mean wHiTe SuPrEmAcIsts making lynch mob photos for a laugh.
 