Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

I've been using Artbreeder for like four years now, long before the AI craze started. I can highly recommend it.
The problem with any non-copilot AI is that they're retarded and don't go anywhere near what I want. For example:
Prompt said:
VIKING, humonculus, cartoonish, thin long arms and legs, big nose, thick outlines, cartoonish vectorized, pronounced contour, in the style of German cartoons, colored, white background
1714991291892.png
1714991352489.png
 
man this looks like shit
Look, I get people may disagree with me, but consider the following: I've seen what the AI can do now. It isn't completely perfect, but compared to the video above, it's way better. You think this looks impressive, but it pales in comparison to the Sora stuff; that is actually impressive compared to heads moving slightly and maybe some mouth movements. Get me something like that when Sora comes out and then I might be impressed.
Also, I had nothing better to do today, so I may as well explain my reasoning.
 
You definitely want to find a more specialized model or LoRA as most of the more common ones are trained more on anime or photorealistic images than comic/cartoon styles.
 
After a year of training, tagging, and experimentation, I finally completed a LoRA for one of my favorite artists. Unfortunately, it's NSFW, so nobody will ever see the fruits of my labor, and releasing it wouldn't be ethical to the artist.

The worst part is that the art it generates lacks that spark I find attractive; it makes things that look like what the artist draws, but somehow it lacks the essence that makes their art desirable.

I know now more than ever that truly distinct and talented artists who approach their craft and trade from a workmanlike perspective, serving clients, will never be made obsolete by this technology.
 
Dammit. Sometime in the past week or so my Python version updated and removed the 3.10.6 that it needed. Because Python programs 41% if they don't have the exact version, I'm now trying to figure out how to run parallel versions. They don't make it easy. You'd think it would be simple to have multiple versions side by side, but I guess it's not, so I'm looking at how to make pyenv work. We should have never gone beyond the abacus.
 
Fixed it. Tried a few things and was pulling my hair out.
Tried using pyenv at first; couldn't figure it out or get it to work.
Installed Python 3.10 via the AUR but couldn't figure out how to make it the primary install.
Confirmed it was there in /usr/lib/python3.10

Took a break, searched some more, and found that I should edit webui.sh. In there was this:

Bash:
# python3 executable
if [[ -z "${python_cmd}" ]]
then
    #python_cmd="python3"
    python_cmd="python3.10"
fi

Added in that last line there and re-ran. It failed with a different error, so I added that Python version in webui-user.sh as well, and then it seemed to run.
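For anyone who wants the pyenv route I bailed on, the usual incantation is roughly this (a sketch, assuming pyenv and its build dependencies are already installed, and that your webui folder is named stable-diffusion-webui):

```shell
# Build and install the exact interpreter the webui wants
pyenv install 3.10.6

# Pin it for just this project; this writes a .python-version file
cd stable-diffusion-webui
pyenv local 3.10.6

# With the pyenv shims on PATH, "python3" in this directory now resolves to 3.10.6
python3 --version
```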
In any case here is the representation of my frustrations today
00006-1520653298.png 00007-1520653299.png 00008-1520653300.png 00009-1520653301.png
 
OpenAI’s flawed plan to flag deepfakes ahead of 2024 elections (archive)
Of course, this solution is not comprehensive because that metadata could always be removed, and "people can still create deceptive content without this information (or can remove it)," OpenAI said, "but they cannot easily fake or alter this information, making it an important resource to build trust."

Because OpenAI is all in on C2PA, the AI leader announced today that it would join the C2PA steering committee to help drive broader adoption of the standard. OpenAI will also launch a $2 million fund with Microsoft to support broader "AI education and understanding," seemingly partly in the hopes that the more people understand about the importance of AI detection, the less likely they will be to remove this metadata.
Imagine being "educated" by M$ about AI and proudly metadatacucked. Yikes!
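To see how flimsy metadata-based provenance is, here's a stdlib-only sketch that strips the ancillary text chunks from a PNG. (Real C2PA manifests live in their own chunk type, but the principle is identical: anything not needed to decode the pixels can be dropped wholesale.)

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def strip_text_chunks(png: bytes) -> bytes:
    """Return the PNG with tEXt/iTXt/zTXt chunks removed.

    These ancillary chunks are where tools stash authorship/provenance
    metadata; decoders ignore them, so the image itself survives untouched.
    """
    assert png[:8] == PNG_SIG, "not a PNG"
    out = bytearray(PNG_SIG)
    pos = 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out += png[pos:end]
        pos = end
    return bytes(out)
```

Re-encoding through basically any image editor does the same thing implicitly, which is why even OpenAI concedes the metadata "could always be removed."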
 
So I don't like those Panavision trailers that were posted earlier. I thought the first one I saw was interesting, but they're aesthetically ugly, and the joke wore off before the end of the first trailer, let alone repeated across multiple.

Here is a trailer someone else did that I find visually much more appealing: Miyazaki's Spirited Away.
 
Been playing more with this Meshy.ai thing. I had an edible and the urge to try to recreate the Wind Waker description from Tuesday's Mati.


Meshy.ai only likes making 3D models from pictures of things that are already 3D; flat drawings, not so much.

I pulled a picture of The Sneeder into Stable Diffusion and prompted "3d Model of a head" with all the ControlNets dialed in to make it look right. Then I just popped that into Meshy.ai's image-to-3D and it gave me this.
I don't think it's good enough to go forward with, but you could clean up the low-poly model in Blender, redo the UVs, and sort of have something that's almost easier than doing it manually.


Here's a download link if you want a deformed one: https://app.meshy.ai/preview?web=018f84d5-0503-7d3b-b933-3bf71dc1a322


I think the big issue is that the model is trained on heads with eyes that share the same zipcode.


1715921428674.png
1715921452402.png
1715921778752.png
1715921988122.png
 
I made the switch over to Forge (created by the guy who pioneered ControlNet) from A1111 a few months ago. Performance is pretty great across the board. Noticed recently (about a month ago) that Forge was no longer receiving regular updates. Turns out there is this huge autistic slapfight between the Forge/ControlNet developer and the ComfyUI dev, alleging “code theft”. Worth noting that ComfyUI has direct ties with Stability. Rate me late if this is old news here.

Forge dev has been radio silent for a while, but recently came out to announce a new project he is working on. Similar to IP-Adapter, it allows the user to control the position of lighting. Pretty neat.

I have learned the ins and outs of ComfyUI over the past several months and now use it as my primary source of SD tinkering, but Forge has been great to play around with, especially via mobile device. Hopefully he goes back to it soon or there are some forks.
 
I'm another ComfyUI user and prefer it over A1111. So what does Forge offer me over ComfyUI?

And in non-news, still waiting on them to release SD3. Via the API I've been impressed, but it won't come close to its potential until it's let out into the wild and other people can start tweaking it.
 
I cannot recommend Forge or any fork of A1111 over ComfyUI unless you like to tinker around with things on your phone or iPad while being connected to your local network.

Paired with LobeHub, which is a front-end overhaul for A1111/Forge, it's just fun to play with. Forge has lots of backend tweaks that come out of the box, which make it much more efficient and streamlined than a typical A1111 install. Things like the Kohya HiRes fix, ControlNet, HyperTile, etc. are all included natively (without any real explanation to newcomers as to what they do).

I digress, if you’re using Comfy, you probably like to have that granular control over your generations and Forge will offer you very little unless you just want to play around with SD while lounging around.
 