Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

If you need to spend $200 a month, I would just build a GPU machine and throw it behind a reverse proxy with authentication. On used Broadwell hardware and a used RTX 3090, you could have something decent up and running for under $1.2k. That's only a six-month ROI. Sure, for the Sora-style stuff you would need another $700 to add a second RTX 3090, but it's still doable, and let's be honest, running an uncensored model is worth more anyway.
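To put rough numbers on that (a back-of-the-envelope sketch; the prices are the ballpark figures above, not actual quotes):

```python
# Rough break-even estimate for a local build vs. a $200/month subscription.
# Prices are the ballpark figures from the post above, not real quotes.
subscription_per_month = 200   # USD, hosted service
initial_build = 1200           # used Broadwell box + used RTX 3090
second_gpu = 700               # optional second RTX 3090 for video work

months_to_break_even = initial_build / subscription_per_month
months_with_second_gpu = (initial_build + second_gpu) / subscription_per_month

print(f"Base build pays for itself in {months_to_break_even:.1f} months")
print(f"With a second 3090: {months_with_second_gpu:.1f} months")
```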
 
It looks like Sora is a huge shit show because the censorship went overboard. You can't do anything that even vaguely implies nudity (like, say, legs or a bathing suit). Keep in mind that Sora has a $200-per-month subscription.
Another warning for would-be Sora users:

Ten months after first tease, OpenAI launches Sora video generation publicly (archive)
OpenAI says it maintains an active monitoring system and conducted testing to identify potential misuse scenarios before release.
OpenAI implemented several safety measures in the release. The platform embeds C2PA metadata in all generated videos for identification and origin verification. Videos display visible watermarks by default, and OpenAI developed an internal search tool to verify Sora-generated content.
You should assume that any video you make can eventually be associated with the name on the credit card you used, by the company at least.
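If you want to see what identifying metadata a downloaded clip actually carries, a quick dump is a decent first pass (a minimal sketch; it assumes exiftool is installed and on your PATH, and it only lists embedded metadata rather than cryptographically verifying a C2PA manifest):

```python
# Quick-and-dirty metadata dump for a generated video.
# Assumes exiftool is installed; this only lists embedded metadata,
# it does not verify a C2PA manifest.
import subprocess
import sys

def dump_metadata(path: str) -> None:
    result = subprocess.run(["exiftool", path], capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    dump_metadata(sys.argv[1])  # e.g. python check_meta.py sora_clip.mp4
```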

 
 

Attachments

  • ComfyUI_00365_.png (1.4 MB)
  • ComfyUI_00359_.png (1.5 MB)
  • ComfyUI_02523_.png (1.2 MB)
  • ComfyUI_01689_.png (1.1 MB)
  • ComfyUI_01690_.png (1.6 MB)
  • ComfyUI_01513_.png (1.1 MB)
  • ComfyUI_01302_.png (1.9 MB)
  • ComfyUI_00212_.png (1.4 MB)
  • ComfyUI_00181_.png (1 MB)
  • ComfyUI_00148_.png (611.8 KB)
  • ComfyUI_00080_.png (1.7 MB)
  • ComfyUI_00008_.png (1.2 MB)
 

Attachments

  • ComfyUI_01328_.png (1.6 MB)
  • ComfyUI_01307_.png (1.8 MB)
  • ComfyUI_01283_.png (1.7 MB)
  • ComfyUI_01223_.png (1.5 MB)
  • ComfyUI_00309_.png (1.5 MB)
  • ComfyUI_00287_.png (1.5 MB)
  • ComfyUI_01341_.png (1.7 MB)
  • ComfyUI_02395_.png (1.2 MB)
  • ComfyUI_00197_.png (939.3 KB)
  • ComfyUI_00837_.png (2 MB)
  • ComfyUI_01677_.png (1.9 MB)
  • ComfyUI_FaceDetailer_00002_.png (1.3 MB)
  • ComfyUI_00088_.png (1.3 MB)
  • ComfyUI_00307_.png (1.6 MB)
  • ComfyUI_00268_.png (1.8 MB)
For all of you working in ComfyUI, do you find it generates images faster than Forge or AUTOMATIC1111?
 
For all of you working in ComfyUI, do you find it generates images faster than Forge or AUTOMATIC1111?
AUTOMATIC1111 hasn't been updated in four months or so. It's deader than disco.
I haven't used Forge, but from what I see from other users, it takes roughly the same amount of time as Comfy to generate images (assuming the same checkpoints, settings, etc.).
 
For all of you working in ComfyUI, do you find it generates images faster than Forge or AUTOMATIC1111?
I (begrudgingly) jumped ship to Comfy not that long ago because A1111 went MIA, and generation times are greatly improved. Forge should have the same speeds as Comfy, since I believe they use the same techniques to speed up generation. You're not missing out by skipping Comfy unless you want the greater degree of control it provides, since generation times are the same; but if you're just looking to drop A1111 for something faster but similar, just use Forge. It's basically A1111 but better now. Not long ago it was faster at the expense of less plugin support, and now it's ahead, since A1111 is seemingly on hiatus.
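If you want to compare generation speed yourself rather than take anyone's word for it, you can time the A1111/Forge API directly (a rough sketch; it assumes the WebUI was launched with the --api flag on the default port and that the requests package is installed):

```python
# Time a batch of txt2img generations against a local A1111/Forge instance.
# Assumes the WebUI was started with --api and listens on 127.0.0.1:7860.
import time
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"
payload = {"prompt": "a lighthouse at dusk, oil painting", "steps": 25,
           "width": 512, "height": 512}

runs = 5
start = time.time()
for _ in range(runs):
    requests.post(URL, json=payload, timeout=600).raise_for_status()
elapsed = time.time() - start
print(f"{runs} images in {elapsed:.1f}s ({elapsed / runs:.1f}s per image)")
```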
 
but if you're just looking to drop A1111 for something faster but similar, just use Forge. It's basically A1111 but better now. Not long ago it was faster at the expense of less plugin support, and now it's ahead, since A1111 is seemingly on hiatus.
This. Just today I swapped my A1111 setup for Forge, and it's already a notable improvement in generation speed, image sizes, and memory usage, even though I'm using an older GPU (GTX 1080). As an added bonus, all of the XL/Turbo-based models that either couldn't be loaded or performed so terribly with A1111 that I saw no point in trying them now work just fine. I'm not going to delete my A1111 setup just in case shit breaks, but it's not likely to see any further use.
 
I still like Automatic because of the extensions. Region-based prompting is something I never got working in Comfy, but I was able to figure it out in Automatic with a plugin.
 
'Shockingly real' AI Santa is free to use, will put Mall Santas everywhere out of jobs
Tavus has created a new AI-powered Santa at santa.tavus.io so you can have what it says is a 'shockingly real' conversation with Santa without going to see a mall Santa. The AI Santa is backed by Y Combinator, Sequoia, and Peter Thiel. While the call is free, there's bound to be a monetization scheme somewhere behind this computerized Santa, and it does require your email address. However, AI Santa uses the Conversational Video Interface (CVI) from Tavus, and it is incredibly observant — it even commented on items in my room.
 
So now that I'm using Forge instead of A1111 and can actually load and use XL models, can anyone recommend good general-purpose models? Photorealism is not required or really desired, as I try to avoid making anything that could be mistaken for actually existing, and Flux just isn't happening since I don't have the hardware for it.
 
So now that I'm using Forge instead of A1111 and can actually load and use XL models, can anyone recommend good general-purpose models? Photorealism is not required or really desired, as I try to avoid making anything that could be mistaken for actually existing, and Flux just isn't happening since I don't have the hardware for it.
Pony's pretty good for anime/cartoon styles if you don't mind that it has the potential to produce genitalia and titties (which, of course, can be negatively prompted). If you happen to be using v6, just make sure you append score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up in the prompt (there's an article about why).
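If you get tired of typing those tags out, a tiny helper does the job (a sketch; the tag list is just the v6 quality tags mentioned above, placed at the front of the prompt as is the usual convention):

```python
# Prepend the Pony v6 quality tags so you don't have to type them every time.
# Tag list is the one mentioned above; placing them first is the common convention.
PONY_V6_QUALITY = "score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up"

def pony_prompt(prompt: str) -> str:
    return f"{PONY_V6_QUALITY}, {prompt}"

print(pony_prompt("1girl, knight, castle background, watercolor"))
```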

Still struggling to make LoRAs for SDXL/Pony locally. Anyone know of any good Google Colabs I can use? CivitAI trainer also seems to be helpful, but that costs Buzz.
 
Is there a way to make '80s dark fantasy AI edits without paying for stable diffusion? I've wanted to make some of Fishtank season 2.5's cast for a while now because it would be fucking funny.
This is what inspired me.
 
Is there a way to make '80s dark fantasy AI edits without paying for stable diffusion?
What do you mean by "paying for stable diffusion"? You can get the checkpoints for free on the Internet (HuggingFace.co should still have them), and the web UIs Forge, ComfyUI, and AUTOMATIC1111 are also available free of charge.
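For example, grabbing a checkpoint from Hugging Face takes a couple of lines (a sketch; the repo and filename here are just one well-known example, and it assumes the huggingface_hub package is installed):

```python
# Download a Stable Diffusion checkpoint from Hugging Face.
# Repo/filename are an example (SDXL base); swap in whatever model you want.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-xl-base-1.0",
    filename="sd_xl_base_1.0.safetensors",
)
print(f"Checkpoint saved to {path}")
```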
 
Pony's pretty good for anime/cartoon styles if you don't mind that it has the potential to produce genitalia and titties (which, of course, can be negatively prompted). If you happen to be using v6, just make sure you append score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up in the prompt (there's an article about why).

Still struggling to make LoRAs for SDXL/Pony locally. Anyone know of any good Google Colabs I can use? CivitAI trainer also seems to be helpful, but that costs Buzz.
ASCAS recommended NoobXL over Pony and I see why. Good shit. And it's compatible with Pony and SDXL LoRAs.

Illustrious is pretty good as well. No idea what Pony V7 is gonna look like, but that's slated for mid-2025 and most machines won't even be able to run it.
 
What do you mean by "paying for stable diffusion"? You can get the checkpoints for free on the Internet (HuggingFace.co should still have them), and the web UIs Forge, ComfyUI, and AUTOMATIC1111 are also available free of charge.
Shit, I actually meant Midjourney. I couldn't find an image-to-image model on HuggingFace.
 
I couldn't find an image-to-image model on HuggingFace.
All models/checkpoints for Stable Diffusion are capable of both text-to-image and image-to-image generation, though if you're running it off some online service instead of your own machine, you generally don't get access to img2img.
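For what it's worth, running img2img locally is only a few lines with the diffusers library (a minimal sketch; the checkpoint path and input image are placeholders, and it assumes a CUDA-capable GPU):

```python
# Minimal img2img run with diffusers against a local SD 1.5-class checkpoint.
# The checkpoint path and input image below are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "models/your_checkpoint.safetensors", torch_dtype=torch.float16
).to("cuda")

init = Image.open("input.png").convert("RGB").resize((512, 512))
out = pipe(
    prompt="80s dark fantasy movie still, painted look",
    image=init,
    strength=0.6,          # how far to move away from the input image
    guidance_scale=7.0,
).images[0]
out.save("output.png")
```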
 
All models/checkpoints for Stable Diffusion are capable of both text-to-image and image-to-image generation, though if you're running it off some online service instead of your own machine, you generally don't get access to img2img.
Ok this is the part where I act like a boomer and say "how do I install it on my own machine?"
 
Ok this is the part where I act like a boomer and say "how do I install it on my own machine?"
I recommend running Windows 10 or newer, having at least 16 GB of RAM, and an Nvidia GPU with a minimum of 8 GB of VRAM, though depending on the models used you can get away with less. If your system meets those specs, the Forge WebUI can be found here, and a combined package of everything you need can be downloaded here. Update.bat will update the Forge WebUI, and run.bat will launch Stable Diffusion. Any checkpoints/models you wish to use go in webui -> models -> Stable-diffusion.
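If you're not sure whether your card clears the 8 GB bar, here's a quick check (a sketch; it assumes you already have a Python environment with a CUDA build of PyTorch, which is only for this sanity check, since the Forge package bundles its own):

```python
# Quick check of GPU name and VRAM before installing a WebUI.
# Assumes a Python environment with a CUDA build of PyTorch.
import torch

if not torch.cuda.is_available():
    print("No CUDA-capable GPU detected.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    print("Meets the 8 GB suggestion" if vram_gb >= 8
          else "Below 8 GB; expect to stick to smaller models")
```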
 