Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Hopefully people will get better at critical thinking because of all the AI stuff. There are certain details the models still can't replicate perfectly, like the stuff in the background of pictures.
Maybe in the very long term, if current dysgenic effects are weakened.
In the short term, it just means that people who are worse at critical thinking will be worse off than people who are good at it.
 
Boomers are the most dysgenic because they're falling the hardest for AI art on Facebook.
 
What would you all use to train Flux.1, especially if it's the Dev version and you're running Ubuntu Linux? And what steps do you use to train or fine-tune your Flux models?

I've tried using simpletuner, x-flux and ai_toolkit, but I'm having difficulties fine-tuning Flux.1 with those programs.
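For context, the underlying setup all of those trainers wrap looks roughly like this in plain diffusers + peft. This is just a sketch; the target modules, rank and learning rate are assumptions on my part, not values pulled from any of those tools' configs:

```python
# Rough sketch of what a Flux LoRA trainer sets up: load FLUX.1-dev, freeze the
# base transformer, attach LoRA adapters to the attention projections, and
# optimise only those. Rank, learning rate and target modules are illustrative
# assumptions, not values taken from simpletuner/x-flux/ai_toolkit.
import torch
from diffusers import FluxPipeline
from peft import LoraConfig

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")  # needs a card with a lot of VRAM, or offloading/quantisation

pipe.transformer.requires_grad_(False)  # base weights stay frozen

lora_config = LoraConfig(
    r=16,                    # LoRA rank
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
pipe.transformer.add_adapter(lora_config)  # only the LoRA matrices are trainable now

trainable = [p for p in pipe.transformer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# A real trainer loops over your captioned dataset here, computes the
# flow-matching loss on noised latents, then:
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```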
 
I've heard good things about Joy Caption if you're training Flux Dev based on natural-language prompting. You need something to run it on, though.

Funny story: it's made by the guy who came out with the first e621 model for 1.5 that was any good. He nuked it and the Discord a couple of months later because he pretty much had a "what have I done?" moment after some rando uploaded a couple GB of AI-generated inflation porn as his first few posts after joining. This was before Civitai was mainstream and right after FA came out with their AI policy. You essentially had to find hypernetworks and checkpoints through Pastebins on 4chan back in the day; it's amazing how far we've come in a year or two.
 
People already couldn't psychologically deal with the vastness and sheer amount of information on modern social media pre-AI. I doubt they will ever catch up. The insidious thing about newer social media is that people don't really have direct, prolonged exchanges with a smaller group of people like on this forum; instead they broadcast their opinions at a potentially huge and ever-changing faceless audience, a void where usually no clear exchange takes place. Even a current LLM can easily slip into that without getting detected or making its weaknesses known, *if* you set it up well. If the social media platform were in on it, you could surround individual people completely with LLMs and influence their opinions, and they would probably never even know. This is not because people are so dumb but because platforms like Twitter are so fundamentally unsuited for proper, meaningful exchanges. This is by design.

So I finally got around to playing a bit with Flux dev, and it seems to me like the "meta" is to make LoRAs. So far I haven't even gotten the strong impression that it's *a lot* better than Schnell, which is interesting. Will experiment some more.
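For anyone following along, applying a LoRA at inference time in diffusers is only a couple of lines. The LoRA filename below is a placeholder, and the step count and guidance scale are just starting points, not tested recommendations:

```python
# Minimal sketch: Flux dev with a LoRA applied at inference time.
# "my_flux_lora.safetensors" is a placeholder for whatever you trained or downloaded.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keeps VRAM use down at the cost of speed
pipe.load_lora_weights("my_flux_lora.safetensors")

image = pipe(
    "a hamster news anchor reading a newspaper, studio photo",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```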
 
News hams (they got into @Null's mustard stash)
News hams (ladies edition)
[attached images]
edit: Why didn't I think of this first?
[more attached images]
 
This C++/C implementation will let you run Stable Diffusion and Flux on any GPU that supports Vulkan (most GPUs from roughly the last decade), as well as on CPU: https://github.com/leejet/stable-diffusion.cpp
This is because it uses ggml, the same AI library that llama.cpp uses. So now poorfags with older AMD GPUs have an option.
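If you just want to script it, calling the built binary from Python is straightforward. A hedged sketch: the binary name and the flags below are from memory of the project's README, so check `--help` on your own build before relying on them:

```python
# Hedged sketch: driving a locally built stable-diffusion.cpp binary from Python.
# The binary name ("sd") and the -m/-p/-o/--steps flags are assumptions from
# memory of the project README; verify them against your own build.
import subprocess

def txt2img(prompt: str, model_path: str, out_path: str = "output.png", steps: int = 20) -> None:
    subprocess.run(
        [
            "./sd",             # binary produced by building stable-diffusion.cpp
            "-m", model_path,   # GGUF / safetensors checkpoint
            "-p", prompt,
            "-o", out_path,
            "--steps", str(steps),
        ],
        check=True,
    )

txt2img("a photo of a cat in a teacup", "sd-v1-5-q8_0.gguf")
```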
Does it allow for multi-GPU setups like llama.cpp?
 
I believe it does, though I haven't tried it. I have a multi-GPU rig that I can't use very often because it's extremely loud (server GPUs require powerful fans), so I'll try to test it at the next available opportunity.

I will also check whether it has a software-level FP16 implementation, since ggml supports 8-bit and lower quantizations of weights. If it does, that would make inference with FP16 checkpoints possible on hardware that doesn't natively support FP16.

For Windows and Mac users, there is a UI for it called Jellybox: https://jellybox.com/
There is no UI for Linux yet, but I may try my hand at making one if I can find the time.
 
(have to double-post because of the stupid edit time limit)
stable-diffusion.cpp doesn't support multi-GPU inference yet, sadly: https://github.com/leejet/stable-diffusion.cpp/issues/377
Hypothetically, it would be possible to run multiple subprocesses, one for each GPU in your system, to speed up generation. This obviously doesn't solve the problem of splitting one model's weights across multiple GPUs, though.
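Roughly what I mean by the subprocess idea, sketched for a CUDA build (a Vulkan build would need whatever device-selection mechanism it exposes, which I haven't checked). The binary name, flags and model filename are assumptions:

```python
# Sketch of the "one subprocess per GPU" idea: pin each worker to a card via
# CUDA_VISIBLE_DEVICES and split the prompt list between them. This parallelises
# whole images only; it does not split one model's weights across GPUs.
# Binary name and flags are assumptions; check your stable-diffusion.cpp build.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

GPUS = [0, 1]
MODEL = "flux1-dev-q4_0.gguf"  # placeholder checkpoint name
PROMPTS = [
    "a hamster anchor reading the evening news",
    "a corgi astronaut on the moon",
    "a watercolour painting of a lighthouse",
    "a cyberpunk street market at night",
]

def run_on_gpu(gpu_id: int, prompts: list[str]) -> None:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))  # pin to one card
    for i, prompt in enumerate(prompts):
        subprocess.run(
            ["./sd", "-m", MODEL, "-p", prompt, "-o", f"gpu{gpu_id}_{i}.png"],
            env=env,
            check=True,
        )

# Round-robin the prompts across the GPUs and run the workers concurrently.
with ThreadPoolExecutor(max_workers=len(GPUS)) as pool:
    for idx, gpu in enumerate(GPUS):
        pool.submit(run_on_gpu, gpu, PROMPTS[idx::len(GPUS)])
```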
 
A new uncensored Chinese AI video generator dropped, and /pol/ is going crazy with it.

Instructions:
1. Enter your prompt.
2. Click “Let’s Go!”
3. Tip: Reports suggest using moonrunes (Chinese/Japanese characters) might improve accuracy.

Some amazing ones I found:
[embedded videos]
 
How do I get ComfyUI to use two of my NVIDIA RTX GPUs instead of just one?

Both of my GPUs worked before when I was training Stable Diffusion 1.5.

Also, I found this .json workflow to drag into ComfyUI to train Flux.1 dev, if anyone is interested:
https://civitai.com/models/713258/flux-lora-trainer-on-comfyui

and articles on how to use the workflow below:

It's less of a hassle to get started with, though it may take more time with larger (1024 x 1024) images, of course.
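A workaround, rather than true multi-GPU inference, is to run two separate ComfyUI instances, one pinned to each card, and queue different jobs to each. A rough sketch; the install path is a placeholder, and you should check `python main.py --help` for the exact flags your version supports:

```python
# Sketch: launch two independent ComfyUI instances, each pinned to one GPU and
# listening on its own port, so two workflows can run in parallel. This does NOT
# make a single generation use both cards. COMFY_DIR is a placeholder path; the
# --port flag should exist, but verify against `python main.py --help`.
import os
import subprocess

COMFY_DIR = "/home/me/ComfyUI"  # placeholder: wherever you cloned ComfyUI

procs = []
for gpu_id, port in [(0, 8188), (1, 8189)]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))  # pin to one card
    procs.append(
        subprocess.Popen(
            ["python", "main.py", "--port", str(port)],
            cwd=COMFY_DIR,
            env=env,
        )
    )

# Each instance is now reachable at http://127.0.0.1:8188 and :8189.
for p in procs:
    p.wait()
```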
 
this time, it's personal
Since LLMs took off, I've seen several posts in various places that basically amounted to "I have a ton of chat logs, how can I resurrect my dead friend/spouse/family member with this?" It's kind of ghastly and probably unhealthy, and as soon as this stuff works better, people are really going to do it. Hollywood's probably already taking notes. James Earl Jones, the voice of Darth Vader, died recently, and AFAIK he gave the rights to reproduce his voice with tech like this to, I think, Disney, forever (I don't remember the details, but something like that).
 