Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Cows :)
[attached: six generated images, 1.jpg–6.jpg]
 
The legend of Half-dude lives on.
Ladies and gentlemen. I must apologize. I find myself unable to produce anthropomorphic automobiles to an acceptable standard.

This is not due to lack of effort, and in theory it is possible to do, but the technology isn't there yet aside from creating vehicles with realistic gaping assholes.
 
I almost find that hard to believe after seeing all the horrors people make with the pony diffusion models. Something in that cesspit can do it with some taggotry autism.
 
I stated it could be done; it just isn't up to what I believe is an acceptable standard of fuckable ground vehicles.

With an image set of 30 or so pictures, a LoRA could be trained; however, it appears nobody can find this type of art at acceptable quality. Or they have, and the one guy who cornered the niche asked them to take it down. The latter happens quite a lot.
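
For anyone who wants to take a crack at it, below is roughly what a bare-bones diffusers + peft LoRA run over a folder like that looks like. The base model id, trigger phrase, folder path, rank, and learning rate are all placeholders I haven't validated for this concept, so treat it as a sketch rather than a recipe.

```python
# Minimal single-GPU LoRA training sketch over a small (~30 image) folder,
# using diffusers + peft. Model id, trigger phrase, paths and hyperparameters
# below are illustrative placeholders, not tested settings.
import os
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from diffusers import StableDiffusionPipeline, DDPMScheduler
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5"   # placeholder base model
).to(device)

# Freeze everything, then inject trainable low-rank adapters into the UNet
# attention projections only.
pipe.vae.requires_grad_(False)
pipe.text_encoder.requires_grad_(False)
pipe.unet.requires_grad_(False)
pipe.unet.add_adapter(LoraConfig(
    r=16, lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    init_lora_weights="gaussian",
))
optimizer = torch.optim.AdamW(
    [p for p in pipe.unet.parameters() if p.requires_grad], lr=1e-4
)

noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)
caption = "a photo of sks anthro car"   # hypothetical trigger phrase
image_dir = "./dataset"                 # ~30 images of the concept
preprocess = transforms.Compose([
    transforms.Resize(512), transforms.CenterCrop(512),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
])

# Every image shares one caption here, so encode the text once.
with torch.no_grad():
    ids = pipe.tokenizer(caption, padding="max_length", truncation=True,
                         max_length=pipe.tokenizer.model_max_length,
                         return_tensors="pt").input_ids.to(device)
    text_emb = pipe.text_encoder(ids)[0]

files = [os.path.join(image_dir, f) for f in os.listdir(image_dir)]
for epoch in range(20):
    for path in files:
        pixels = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        with torch.no_grad():
            latents = pipe.vae.encode(pixels).latent_dist.sample()
            latents = latents * pipe.vae.config.scaling_factor
        noise = torch.randn_like(latents)
        t = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                          (1,), device=device)
        noisy = noise_scheduler.add_noise(latents, noise, t)
        # Standard epsilon-prediction objective: predict the added noise.
        pred = pipe.unet(noisy, t, encoder_hidden_states=text_emb).sample
        loss = F.mse_loss(pred.float(), noise.float())
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Save just the adapter weights; they can be loaded on top of the base model later.
torch.save(get_peft_model_state_dict(pipe.unet), "anthro_car_lora.pt")
```

The dedicated trainers (kohya and friends) do the same thing with better dataloading, captions per image, and mixed precision; this is just the shape of it.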
 
Can someone do me a solid? I am without my usual image generators and don't feel like signing up for Midjourney for just one image. I've tried Grok and DALL-E 3 via ChatGPT and had no luck. Grok? More like crock. A crock of shit!

Anywho, I need an AI image recreating the poster for The Brain That Wouldn't Die:

The Brain that Wouldn't Die JPEG.jpg

I plan to paste another face over the head but that would ruin what I'm working on. I can't give out money but I'll give a sticker.
 
Hunyuan image-to-video got released.
Any info about whether or not it's possible to run it on consumer GPUs? They recommend 80GB GPUs, which only exist in the enterprise hardware sphere, or the home-lunatic sphere where you NVLink four 3090s together.
 
You can run Hunyuan on as little as 8–12 GB with specific workflows. It seems like you need at least 24 GB for the img2video model at the moment, but I expect that to go down as people do their magic.
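
For the curious, this is roughly what the low-VRAM route looks like in diffusers for the text-to-video checkpoint; the community model id and settings below are from memory and may need adjusting, and I'd expect the img2video checkpoint to take the same offload/tiling knobs once a pipeline points at it.

```python
# Rough low-VRAM sketch for HunyuanVideo text-to-video in diffusers.
# The model id, resolution and frame count are placeholders, not a verified recipe.
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"   # diffusers-format repack
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)

pipe.vae.enable_tiling()          # decode the video in tiles instead of all at once
pipe.enable_model_cpu_offload()   # keep only the active submodule on the GPU
# pipe.enable_sequential_cpu_offload()  # even lower VRAM, much slower

frames = pipe(
    prompt="a cow walking through a meadow, photorealistic",
    height=320, width=512,        # small output to keep VRAM use down
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "cow.mp4", fps=15)
```

The ComfyUI workflows doing 8–12 GB add quantized weights and block swapping on top of the same idea.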
 
With DirectML support you could run it on CPU and system memory, but it's not going to be fast.
 
Excuse me, that's three 3090s and a 3070.
The largest-capacity PCIe GPU that Nvidia makes is the H100 96GB, which would match the capacity of four 3090s, but not the TDP (a single H100 draws 700W, four 3090s draw 1400W) or the price: the H100 will cost you an arm and a leg, while you may be able to get four 3090s for the MSRP of a single 5090. There's also the A100 80GB and the H100 80GB, but again, enterprise-tier GPUs.

It's worth noting those have NVLink, which Nvidia removed from consumer GPUs with the 40 series. Enterprise clients get a handful of those 80/96GB GPUs and chain them together into massive VRAM pools, yet Nvidia refuses to give more than 32GB to its most expensive consumer models, which cannot be chained, and everything below that is 16GB or less. In case someone thought Nvidia withholds VRAM for fear of cutting into its enterprise offerings: it wouldn't. They could give you 96GB in a 5090 and it wouldn't mean much when you can't chain it together. They're just greedy.
 
Most applications that pool memory from multiple GPUs just communicate over the PCIe lanes anyway. I use llama.cpp to get 36GB out of a 3090 + 3080, running a 70B at an IQ2 quant.
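
In llama-cpp-python terms that setup looks roughly like the following; the model filename and the split ratio are placeholders for a 24GB 3090 + 12GB 3080 pair, so tune them to your cards.

```python
# Rough sketch: a 70B GGUF at an IQ2 quant split across two mismatched cards
# over plain PCIe (no NVLink involved). Filename and ratio are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-70b-instruct.IQ2_XS.gguf",  # hypothetical model file
    n_gpu_layers=-1,        # offload all layers to the GPUs
    tensor_split=[24, 12],  # weight the split by each card's VRAM
    n_ctx=4096,
)

out = llm("Q: Does multi-GPU inference need NVLink?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```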
 
Are there any image generation models that run locally and work on AMD GPUs? I've always been Team Red, and everything that's happened this year hasn't persuaded me to change that. Hell, I'll even give Intel a chance, seeing how their GPUs are better than their CPUs now.
 