Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

I've been playing a lot of GURPS games with friends on Tabletop Simulator, and AI slop is perfect for making character images to use as tokens. I made my account here using one from a game we recently concluded, a campaign set during the First Crusade. I wanted to go for a Tyrion Lannister archetype, but horribly inbred. It only took a few Photoshop alterations to get what I had in mind. It must really suck for people who would normally take commissions for this kind of thing. What would something like this have even cost, and how long would it have taken to get back?


You wouldn't have paid money for this in the first place, so it doesn't actually affect the art community.
 
Ars Technica: Midjourney introduces first new image generation model in over a year (archive)
AI image generator Midjourney released its first new model in quite some time today; dubbed V7, it's a ground-up rework that is available in alpha to users now.
V7 works with most parameters from previous versions (--ar, --seed, etc), including users' existing --sref codes from 6.1, as well as the recently introduced personalization feature. In fact, V7 is the first Midjourney model that has personalization enabled by default, meaning users will have to train it by picking at least 200 images to build their aesthetic profile.
 
Well, I've pretty much moved to Illustrious now. The way it handles prompt language is much better than 1.5, though finding a photorealistic checkpoint for it is still hard.
Have you seriously still been using SD1.5 up until now? Just use SDXL or Flux if you want photoreal.
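If anyone wants to try this outside a WebUI, here's a minimal diffusers sketch for SDXL text-to-image. It assumes a CUDA card with enough VRAM for the fp16 weights; the checkpoint is just the stock SDXL base and the prompt is made up, so swap in whatever photoreal fine-tune you actually use.
```python
# Minimal SDXL text-to-image with diffusers. Assumes a CUDA GPU with
# enough VRAM to hold the fp16 weights; checkpoint and prompt are
# placeholders, not a specific photoreal fine-tune.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

image = pipe(
    prompt="photo of a medieval knight, natural window light, 35mm film grain",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("knight.png")
```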
 
Have you seriously still been using SD1.5 up until now? Just use SDXL or Flux if you want photoreal.
I was concerned about VRAM issues, but with Forge I feel more comfortable using such models, even if it causes my computer to lock up for a bit. Might even think about ReForge at some point.
 
Might even think about ReForge at some point.
Speaking of, it's been discontinued because the dev's personal life interfered with the project. If you're VRAM-restricted, consider learning ComfyUI, since it handles memory constraints well. I was able to run SDXL models on a 1060 with 16GB of system RAM. It would offload most of the model to system RAM, run like shit, and make the machine unusable, but it would work instead of shitting itself.
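ComfyUI does that offloading for you, but if you ever script it yourself, the same idea in diffusers looks roughly like this. A sketch only, assuming the stock SDXL base checkpoint and a small CUDA card; the prompt and filename are placeholders.
```python
# Rough diffusers equivalent of "offload most of it to system RAM":
# weights stay in CPU memory and submodules move to the GPU only while
# they run. Slower, but it keeps a 6-8 GB card from hitting OOM.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)

# Coarse offload: whole components (text encoders, UNet, VAE) swap in and out.
# Note: do NOT also call pipe.to("cuda") when using this.
pipe.enable_model_cpu_offload()

# If that still runs out of memory, sequential offload moves individual
# layers instead -- far slower, far smaller VRAM footprint:
# pipe.enable_sequential_cpu_offload()

image = pipe("portrait photo, soft lighting", num_inference_steps=25).images[0]
image.save("offload_test.png")
```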
 
AI-generated videos now possible on gaming GPUs with just 6GB of VRAM
Lvmin Zhang at GitHub, in collaboration with Maneesh Agrawala at Stanford University, has introduced FramePack this week. FramePack offers a practical implementation of video diffusion using fixed-length temporal context for more efficient processing, enabling longer and higher-quality videos. A 13-billion parameter model built using the FramePack architecture can generate a 60-second clip with just 6GB of video memory.
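The reason that works: the generator never conditions on the whole video, only on a fixed-size window of the most recent frames, so memory use stays flat no matter how long the clip gets. The toy Python sketch below is not FramePack's actual code; generate_chunk() and all the numbers are made up purely to illustrate the fixed-length context idea.
```python
# Toy illustration of a fixed-length temporal context, NOT FramePack's real API.
# However long the video gets, generation only conditions on the last
# CONTEXT_FRAMES frames, so peak memory does not grow with clip length.
from collections import deque

CONTEXT_FRAMES = 16      # fixed context window (made-up number)
CHUNK_FRAMES = 8         # frames produced per sampling step (made-up number)
TOTAL_FRAMES = 60 * 30   # e.g. a 60-second clip at 30 fps

def generate_chunk(prompt, context):
    """Hypothetical stand-in for one video-diffusion sampling step."""
    # A real model would denoise CHUNK_FRAMES new frames conditioned on `context`.
    return [f"frame(ctx={len(context)})"] * CHUNK_FRAMES

context = deque(maxlen=CONTEXT_FRAMES)  # older frames fall off automatically
video = []
while len(video) < TOTAL_FRAMES:
    new_frames = generate_chunk("a dog running on a beach", list(context))
    video.extend(new_frames)
    context.extend(new_frames)  # only the newest CONTEXT_FRAMES are retained

print(len(video), "frames generated with a constant-size context window")
```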
 