Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Is there a name for this type of animation yet? If not, I dub it "noiseimation". It's where you use img2img on an image with denoising turned down low, generate multiple outputs with different seeds, then put them all together in an animation. This one I call "In the Multiverse of Chadness", made from 12 images with denoising strength at 0.15.

(click thumbnail for full size. Tiny thumbnail doesn't show the effect very well.)
View attachment 5066351
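For anyone who wants to try it, here's a minimal sketch of the loop using the diffusers img2img pipeline; the model ID, source image, and prompt are all placeholders for whatever you're actually running:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Placeholder model; swap in whatever checkpoint you use.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("source.png").convert("RGB").resize((512, 512))

frames = []
for seed in range(12):  # one frame per seed
    generator = torch.Generator("cuda").manual_seed(seed)
    frame = pipe(
        prompt="portrait of a man",  # hypothetical prompt
        image=init,
        strength=0.15,  # the low denoising strength doing the work
        generator=generator,
    ).images[0]
    frames.append(frame)

# Stitch the variations into a looping GIF.
frames[0].save(
    "noiseimation.gif", save_all=True,
    append_images=frames[1:], duration=100, loop=0,
)
```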
A few others have already explained it, but the original term is Boiling Lines, which refers to the way some traditional animations would redraw the outlines for stationary characters to make them feel more "lively", as opposed to feeling like cardboard cutouts when they stop moving.

Ed, Edd n Eddy took full advantage of this technique back when they used cel animation, and it really helped give the show its charm.
View attachment 5067669
 
I'm so glad the creator of Ed, Edd n Eddy decided not to allow anyone to make a reboot or a successor to the show. He made a great ending and finished with that. No one can corrupt it after the fact.
 
This theory is entirely half-baked and I'm genuinely unsure of how the results will turn out, but fuck it.

-63 source images, 3150 class images (50 class images/source image)
-"[keyword] aesthetic" for source tags, "painting aesthetic" for class tags.
-Run for 150 epochs.
-Gen 1025 images using the output model.
-Retrain the model using the above generated images and the original prompting, at 15 epochs. (Note this step only uses 2 class images/source image)

Main reason I'm taking this route is that I can only find 63 hi-res scans of Douglas Schuler's early MTG art. This has led to a lot of overfitting on certain keywords, such as most female renders having the pose of the Alpha Serra Angel painting. Most of the other shit I've found has been low-res scans off the cards, which contaminates the final model with a visible rosette pattern.

sample_630-0.png sample_5670-0.png sample_12600-0.png
Results so far of "dgsh painting, blonde woman with glasses" at seed 1337

If this method works at replicating Douglas' style, first thing I'm doing is HD upscaling the rest of his portfolio.
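For anyone following along, step 4 would look something like this with diffusers; the checkpoint path and the exact prompt are assumptions on my part:

```python
import os
import torch
from diffusers import StableDiffusionPipeline

os.makedirs("stage2_dataset", exist_ok=True)

# "./dgsh-stage1" stands in for the checkpoint produced by the
# 150-epoch run above, converted to diffusers format.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dgsh-stage1", torch_dtype=torch.float16
).to("cuda")

for i in range(1025):
    generator = torch.Generator("cuda").manual_seed(i)
    image = pipe(
        "dgsh aesthetic",  # hypothetical; the "[keyword] aesthetic" tag above
        generator=generator,
    ).images[0]
    image.save(f"stage2_dataset/{i:04d}.png")
```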
I'm so glad the creator of Ed, Edd n Eddy decided not to allow anyone to make a reboot or a successor to the show. He made a great ending and finished with that. No one can corrupt it after the fact.
Danny Antonucci is a fucking legend, and he chose to end his show on a high note instead of beating a dead horse like the rest of the Cartoon Cartoons block.

While I have to assume he'd be against AI given his distaste for digital animation, a lot of his style and animation decisions in Ed, Edd n Eddy are gonna be really useful for people trying to get into animation with SD.
 
A few others have already explained it, but the original term is Boiling Lines, which refers to the way some traditional animations would redraw the outlines for stationary characters to make them feel more "lively", as opposed to feeling like cardboard cutouts when they stop moving.

Ed, Edd n Eddy took full advantage of this technique back when they used cel animation, and it really helped give the show its charm.
View attachment 5067669
I think people might be misunderstanding what I meant. Yes, it superficially resembles boiling lines in that example, but that technique is done to make it look like the character is still subtly moving, while what I'm talking about is each frame coming from a slightly different version of the subject matter altogether: not implied motion, but reality warping. I think my mistake may be that in the example I posted I used too low a denoising value.
 
This theory is entirely half-baked and I'm genuinely unsure of how the results will turn out, but fuck it.

-63 source images, 3150 class images (50 class images/source image)

What are “Class images”?
 
Images generated by Stable Diffusion for the purpose of aiding training. Dreambooth training replaces data in the model instead of adding to it, so class images are used to wedge the new info into the existing model without diluting the rest of the model data.

Dreambooth generates these automatically per the training config, so you don’t need to worry TOO much about this.
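For the curious, the mechanism is just an extra prior-preservation term in the training loss. Here's a rough sketch of how the diffusers Dreambooth script combines the two halves of each batch; names are illustrative, not the script's exact API:

```python
import torch
import torch.nn.functional as F

def dreambooth_step_loss(noise_pred, noise, prior_loss_weight=1.0):
    """Sketch: each batch is half instance images (your new
    concept) and half class images (generic "painting" renders
    the base model generated for itself)."""
    pred_instance, pred_class = noise_pred.chunk(2, dim=0)
    target_instance, target_class = noise.chunk(2, dim=0)

    # Pulls the model toward the new concept.
    instance_loss = F.mse_loss(pred_instance, target_instance)
    # Anchors the broader class to what the model already knew,
    # so the new info doesn't overwrite everything else.
    prior_loss = F.mse_loss(pred_class, target_class)
    return instance_loss + prior_loss_weight * prior_loss
```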
 
Is there a limit to the amount of information you could wedge into an SD model? Or could you technically teach a model to contain every other model?

I saw some retard on Civitai making a Mega model where he just merges all models he can get his hands on, without understanding anything. Dude even added the ControlNet models. He keeps commenting on people’s models something like “Great model I’m adding this to my mega model”.
Guy even added Kandinsky… which isn’t even a Stable Diffusion format.

Here is a link if you wanna laugh:

Edit: just saw this in his description “Currently sitting at 1390 Model Merges, Im happy for other model users to use this model for mergers, it is a very powerful beast that gets results”

CLICK 2.4 TAB For PHOTOREALISM MODEL - IT WAS ASKED FOR AND IVE DELIVERED (HOPEFULLY) - 2.5 (FULL SIZE ALSO - DREAMBOOTH1.CKPT ADDED) 16.5GB DONT COMPLAIN ABOUT THE SIZE OF 2.5 - ONLY FOR THOSE THAT REALLY WANT A POWERFUL MODEL AND DONT CARE ABOUT FILE SIZE

VERSION 2.3 HAS ARRIVED, PLEASE ENJOY (CONTROLNET BAKED IN FOR IMPROVED IMAGE GENERATION + KADINSKY MODEL ADDED FOR QUALITY BOOST). For 2.3 you will need to make sure both the safesensor and yaml config file are downloaded and installed in the same directory or you wont get the proper results. this is because of multi language support built into the model based on the alt-diffusion model - mostly for chinesse, french, spanish and a few other language specific models that can work with english and vise versa.

Easy Diffusion Can we downloaded here, they are currently looking for developers who can code up contributions to merge LORA files directly into main models in an easy way, something that should be possible but automatic1111 doesnt have yet. U can add a comment on the pull reques there cmdr2/stable-diffusion-ui: Easiest 1-click way to install and use Stable Diffusion on your own computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. (github.com) and download it too. Its easier to get into for newbies than other stable diffusion programs

Currently sitting at 1390 Model Merges, Im happy for other model users to use this model for mergers, it is a very powerful beast that gets results

Both fp16 versions can be found across the left tabs where versions are now listed on civitai.com after their recent website update. A 32 and 16fp sized file (yes they are big please do not clutter up comments about size , it is what it is if its to much then just pass on the model

works fine with easy diffusion UI as well - which is the program i use to run it, automatic1111 and stable diffusion WebUI are fine with the config file placed alongside the saftensor - config file will be in a light blue message on the upper right side of this page on the download screen.

To find other models search for "mega model" or search my username info.

Credit to all the model makers, merges and the community in general without which this wouldnt be possible. Hope u all enjoy it and feel free to merge it into your own models as well - im interested to see what people do with this (this is a general acknowledgement to all model producers here, because if i listed 1700 models that have been merged there wouldnt be enough space and there would be complaints about clutter) so the above is a general acknowledgement to all of civitai and huggingface model producers

My username is u/ollobrains on reddit if u have any questions u can drop me a line there - Donations can be made here Buy asmrgaming a Coffee. ko-fi.com/asmrgaming - Ko-fi ❤️ Where creators get support from fans through donations, memberships, shop sales and more! The original 'Buy Me a Coffee' Page.

He is also confused as to why the model suddenly went all “Real” with a new version, not understanding that he is just changing the content and not really adding much.
1681935763224.png
 
I’m not too sure about the specific limitations, but I will point out that things start breaking fast once you’re merging multiple models that weren’t trained from the same base model.

This isn’t even a 1.4 vs 2.1 compatibility issue; models need to be trained from the same parent model to get good results. So if one model is trained off of WD while the other is built off Copper, you’re not gonna get the desired results by merging them. Usually for a proper merge you need two models that came from the same parent checkpoint (SD 1.5, WD, CopperMix, etc.), plus that parent checkpoint itself.
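To make the "plus the parent" part concrete, here's a bare-bones sketch of the two common merge modes (weighted sum vs. add-difference, as in A1111's checkpoint merger); assume the three state dicts were already loaded with torch.load:

```python
import torch

def weighted_sum(a, b, alpha=0.5):
    # Blind blend: averages weights whether or not the models are related.
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

def add_difference(a, b, parent, alpha=1.0):
    # Grafts "what b learned relative to the parent" onto a.
    # Only meaningful if a and b really branched off that parent.
    return {k: a[k] + alpha * (b[k] - parent[k]) for k in a}
```

If a and b don't actually share the parent, the (b - parent) delta is mostly noise relative to a, which is why those merges fall apart so fast.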
 
"I'm shoving 1700 different models together so I can generate art to use as thumbnails on my years-old Youtube ASMR Stellaris videos" is quite an interesting line of decision making.
Might be an ancient comparison, but this reminds me of Eloraam from way back in the Minecraft Beta days.

Chick would basically take features from all the other mods being developed and incorporate them into her RedPower mod. What started as an advanced redstone mod turned into "I'm adding volcanoes and wind turbines because they worked for Flowerchild."

Then again, these people mashing together dozens of AI models were probably the same kids that dumped dozens of repetitive Minecraft mods into their jar file back in the day.
 
"I'm shoving 1700 different models together so I can generate art to use as thumbnails on my years-old Youtube ASMR Stellaris videos" is quite an interesting line of decision making.
Never actually bothered to check his YouTube channel, so when you linked it I went to look, and I laughed so hard and loud that I scared my cat away.

05690ECA-58D1-4B9E-8636-DACAE6F0F788.jpeg
D0CD20FB-34AA-4205-B65F-E684AD9A73F2.jpeg
76F22312-6417-4F7C-A68B-D816F00D219E.jpeg

I’m not too sure about the specific limitations, but I will point out that things start breaking fast once you’re merging multiple models that weren’t trained from the same base model.

This isn’t even a 1.4 vs 2.1 compatibility issue; models need to be trained from the same parent model to get good results. So if one model is trained off of WD while the other is built off Copper, you’re not gonna get the desired results by merging them. Usually for a proper merge you need two models that came from the same parent checkpoint (SD 1.5, WD, CopperMix, etc.), plus that parent checkpoint itself.
Didn’t consider that, but now that you spell it out for me it makes a lot of sense. And then you have the issue of models trained with different VAEs. How do you even start mixing those together?

I checked his preview images and it legit looks like trash. I’m excited to watch this train wreck and see where its trajectory leads.
 
God I love this site. Even when casually discussing technology, I'm finding more retards to laugh at.
Jesus Christ. If that model was any more overtrained it'd be gunning down Sergeant Hartman in the latrine.

Didn’t consider that, but now that you spell it out for me it makes a lot of sense. And then you have the issue of models trained with different VAEs. How do you even start mixing those together?
Ideally you don't, but a lot of the publicly released models were trained with ft-mse-840000 anyway, so it's kind of a moot point.
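For reference, since most of them shipped against that same VAE, you can usually just swap it in at load time. A quick diffusers sketch; the base model ID is a placeholder:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# The widely shared ft-mse VAE lives on the Hub as
# stabilityai/sd-vae-ft-mse.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
```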

Also, update on my retard training theory from yesterday:
00004-572375848.png

I'll be doing some research into Native Training before I train anything off the new style dataset, since I'd like to add more fantasy environments and concepts without diluting the base model.

Like, I'm a simple girl. If all I want is the ability to generate Orks going on a WAAAGH against Eldrazi and Phyrexian Kiwis in the style of Erika Moen, I should be able to do so.
 
I unfortunately haven't had much spare time to devote to experimenting with and testing the numerous models I've been hoarding, but an idea struck me while designing a presentation: I needed various bits of art mostly just to take up space. So instead of grabbing random shit from the internet, I just fired up SD and made filler.
00076-2684529233.jpg 00144-2781396083.jpg 00146-2781396087.jpg
I ended up going with an entirely different theme, but it was much more fun than just using image search. Models have some pretty unique takes on what chessboards are.
 
If you ever need to come up with a nice tile pattern for a 3D mall or shopping center, just ask SD to try generating chessboards.

That said, I'm more surprised that it's starting to understand reflections. It's not even close to accurate, but it's making the effort.
 
“A happy Jack o Lantern”
95D658C2-6325-46E5-BBC9-9E656585935B.png

Holy fucking nightmare fuel; that would actually look good on a Goosebumps book cover.

To be clear, the model at this stage is so fucking overtrained it’s not even funny. This was intentional, as I only have a couple dozen images to work with. The output images will be used to train the final model, which should lead to a more flexible product, as mentioned earlier.
 
Civitai is gonna be having some issues today and probably this weekend. I'll try to update this post when there is any news on it, for those that aren't in their Discord.

1682104072574.png

In other Civitai drama, the model of Pipkin Pippa has apparently run into some trouble because of degen preview images of a loli character.

1682104722362.png

Archive

Full image:
1682104774248.png

Creator responds to Pippa post:
1682104828208.png
Archive

Links:
Archive: Archive of the Pippa LoRA page on Civitai
Backup of the LoRA: archive of the safetensors version of the Pipkin Pippa Stable Diffusion LoRA on Anonfiles (In case it gets removed) ((I strongly object to this being used in any NSFW way))
 