Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Impressive, but it doesn't show the whole process. This is on wood, but much smaller, so you can appreciate the detail.
Yeah, technically robotic arms can carve things, but the result is pretty rough up close and very obviously milled. Not quite hammer and chisel. Sure, you could sand it down, but it still wouldn't be the same.
99.99% of humans can't come close to anything similar, so it's hardly a "test of humanity" or whatever. And the reason you use a chisel is that you don't have enough strength to carve something as hard as marble, or you're trying to take off big chunks of wood, not because it's some kind of measure of being. If you could carve marble with your bare hands and not die of old age before being done, you would do it; that's why we carve wood, and pretty much all modern sculptors use some kind of rotary tool to carve hard materials nowadays.
 
99.99% of humans can't come close to anything similar, so it's hardly a "test of humanity" or whatever. And the reason you use a chisel is that you don't have enough strength to carve something as hard as marble, or you're trying to take off big chunks of wood, not because it's some kind of measure of being. If you could carve marble with your bare hands and not die of old age before being done, you would do it; that's why we carve wood, and pretty much all modern sculptors use some kind of rotary tool to carve hard materials nowadays.
Wasn't your point whether a robot could currently carve a statue as convincingly as a human potentially could? I wasn't acting like it's a test of humanity. I was saying a robot can't do it like someone who has trained their whole life could. Maybe in ten or twenty years, but in the meantime we have 3d printing to do most of that job.
 
Ladies and guntlemen, I present to you Ethan "Gunt" Ralph, host of the Killstream:
tmpgqs0n0wd.png
it even got the quad tits right
 
I love this series. The quality of shitposts has dramatically improved thanks to AI-generated art in this very thread. Who would've made such an effort without it? I loled irl several times reading this thread. And people tell me this isn't art? It's the purest expression of human emotions. We enjoy looking at these creations. They make us happy. What's not to love?
I concur, and I also think it would be a great way to fight censorship and make fun of politicians in the future. Sure, they'll try to regulate and stop it, but the cat is already out of the bag. I'm going to see if I can open a gallery in the hipster parts of New York or pretentious Santa Fe. Can't figure out how to tweak it to fix the spoons.

Hitler eating ice-cream at Disney World/Disney Land

tmp32exd7ug.png tmpdiwwhxo4.png tmps1njmiyg.png tmpwa5m3zhg.png tmpsqersm_c.png tmpif3kbx33.png
 
Wasn't your point whether a robot could currently carve a statue as convincingly as a human potentially could? I wasn't acting like it's a test of humanity. I was saying a robot can't do it like someone who has trained their whole life could. Maybe in ten or twenty years, but in the meantime we have 3d printing to do most of that job.
My point is that all you need is to train an AI to create 3d models instead of 2d drawings, and everything else is pretty much already done.

I'm sure people are already doing that; they just haven't published it yet, and 2d drawings are more immediately accessible for the masses. I'd give it a year or two, maybe even less if someone manages to repurpose existing AI models for this.
 
My point is that all you need is to train an AI to create 3d models instead of 2d drawings, and everything else is pretty much already done.

I'm sure people are already doing that; they just haven't published it yet, and 2d drawings are more immediately accessible for the masses. I'd give it a year or two, maybe even less if someone manages to repurpose existing AI models for this.
Training a model to make actual 3d models is a hell of a lot more difficult than making a flat image from noise. The closest we have is faking a 3d model by generating multiple 2d angles of the same subject, which ends up fairly rough and would be unusable in professional work. You could probably reconstruct a real model from those frames with a photogrammetry application, but it would be even rougher after that. Currently the best thing to do is just to use a 2d image as a base and make the model yourself from there.
 
Alright, downloaded and set this up following the Voldy guide, and using the default Huggingface model with the following text prompts:

Prompt: desert mesa canyon landscape at sunset, masterpiece, best quality
Negative Prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry

I got this:
1666125565044.png
As somebody who has lived in Arizona, I can say this looks 100% accurate.

I wonder if there are ways I can fill this out to a widescreen resolution (or even ultra-widescreen) just to see how far I can push landscape generation...
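(Side note for anyone who'd rather script this than use the webui: roughly the same generation can be reproduced with the Hugging Face diffusers library. A minimal sketch follows; the checkpoint id, step count, and output size are my assumptions, not part of the Voldy guide itself.)

[CODE=python]
# Minimal txt2img sketch using the diffusers library rather than the webui.
# Checkpoint id, step count, and resolution are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # swap in whatever checkpoint you actually downloaded
    torch_dtype=torch.float16,
).to("cuda")

prompt = "desert mesa canyon landscape at sunset, masterpiece, best quality"
negative_prompt = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, normal quality, "
    "jpeg artifacts, signature, watermark, username, blurry"
)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,   # sampler steps
    guidance_scale=7.5,       # how strongly the prompt is enforced
    height=512,
    width=512,
).images[0]
image.save("desert_mesa.png")
[/CODE]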
 
I concur, and I also think it would be a great way to fight censorship and make fun of politicians in the future. Sure, they'll try to regulate and stop it, but the cat is already out of the bag. I'm going to see if I can open a gallery in the hipster parts of New York or pretentious Santa Fe. Can't figure out how to tweak it to fix the spoons.

Hitler eating ice-cream at Disney World/Disney Land
View attachment 3747889 View attachment 3747902 View attachment 3747909
View attachment 3747887 View attachment 3747890 View attachment 3747894
I love how most of them are reasonable, and the first one on the bottom even looks historical, like Hitler visiting Disneyland to make Germany think he's a good guy. And then the middle one on the top is monster Hitler psychically lifting ice cream into his huge gaping maw.
 
Alright, downloaded and set this up following the Voldy guide, and using the default Huggingface model with the following text prompts:

Prompt: desert mesa canyon landscape at sunset, masterpiece, best quality
Negative Prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry

I got this:
View attachment 3747922
As somebody who has lived in Arizona, I can say this looks 100% accurate.

I wonder if there are ways I can fill this out to a widescreen resolution (or even ultra-widescreen) just to see how far I can push landscape generation...
I believe what you want is called outpainting, the inverse of inpainting.
This tutorial does go over it, but what comes out is pretty rough.
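For reference, outpainting can also be approximated by hand: paste the finished image onto a wider canvas, mask the empty strip, and let an inpainting model fill it in. A minimal diffusers sketch follows; the inpainting checkpoint id, canvas size, and file names are my assumptions rather than anything from the tutorial, and as noted above you should expect a visible seam without further passes.

[CODE=python]
# Rough outpainting sketch: pad the 512x512 result onto a 768x512 canvas and let an
# inpainting checkpoint generate the new right-hand strip. Ids and sizes are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

src = Image.open("desert_mesa.png").convert("RGB")   # the original 512x512 render
canvas = Image.new("RGB", (768, 512), "black")
canvas.paste(src, (0, 0))                            # keep the original on the left

# In the mask, white = area to generate, black = area to keep.
mask = Image.new("L", (768, 512), 255)
mask.paste(Image.new("L", src.size, 0), (0, 0))

wide = pipe(
    prompt="desert mesa canyon landscape at sunset, masterpiece, best quality",
    image=canvas,
    mask_image=mask,
    height=512,
    width=768,
).images[0]
wide.save("desert_mesa_wide.png")
[/CODE]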
 
You know, the more I think about it, I'm actually kind of interested to see what people are going to be able to create when they don't have to be weighed down by endless hours of preparation, editing, and all the logistics that slow down the creative process. What will be the product of spontaneous ideas, created much quicker? Could this be used for improvements in procedural generation? What if someone could literally think up a film and use these programs to essentially "dream" it into existence? There would still be plenty of ideas to adjust and tweak, but an author could essentially have the movie play out as they are writing it.

Kind of like the holophonor from Futurama
 
I concur, and I also think it would be a great way to fight censorship and make fun of politicians in the future. Sure, they'll try to regulate and stop it, but the cat is already out of the bag. I'm going to see if I can open a gallery in the hipster parts of New York or pretentious Santa Fe. Can't figure out how to tweak it to fix the spoons.

Hitler eating ice-cream at Disney World/Disney Land
View attachment 3747889 View attachment 3747902 View attachment 3747909
View attachment 3747887 View attachment 3747890 View attachment 3747894
This is the best thread. Thank you for doing these; they convey a message that everyone understands - AI might have helped to align the actual pixels, but the creative spark came from a human mind and connects with our human minds. It's both hilarious and offensive on all the right levels. I love it.
 
You know, the more I think about it, I'm actually kind of interested to see what people are going to be able to create when they don't have to be weighed down by endless hours of preparation, editing, and all the logistics that slow down the creative process. What will be the product of spontaneous ideas, created much quicker? Could this be used for improvements in procedural generation? What if someone could literally think up a film and use these programs to essentially "dream" it into existence? There would still be plenty of ideas to adjust and tweak, but an author could essentially have the movie play out as they are writing it.
We're part of the way there, but imo text AI is going to have to be pushed along quite a bit further before anything coherent comes out of that.
 
Training a model to make actual 3d models is a hell of a lot more difficult than making a flat image from noise. The closest we have is faking a 3d model by generating multiple 2d angles of the same subject, which ends up fairly rough and would be unusable in professional work. You could probably reconstruct a real model from those frames with a photogrammetry application, but it would be even rougher after that. Currently the best thing to do is just to use a 2d image as a base and make the model yourself from there.
No it isn't; it's mostly just different training and a different output.

 
No it isn't; it's mostly just different training and a different output.

Color me surprised we're already at this stage, but what gets produced still isn't nearly as coherent as a flat image. A lot of work will have to go in before the output is usable for most prompts. It generates a decent model of the basic pumpkin here, but it's covered in bumps, it has a million eyes, and I'd hazard a guess there are quite a few reasons you can't use what it spits out as-is if you want it to mesh with existing models.
 
I've decided to explore some models beyond the official Stable Diffusion one, so I've added a few more, though I haven't tried them yet. I'm sure I'll be sick of this in an hour or two, but it's not a bad way to fill a rainy afternoon.

Now that I've started exploring the various GUI controls, I'm seeing more of the real-world potential for AI illustration. Creating custom models and knowing which positive and negative prompts to use could be a sideline/hobby all in itself.


Anyway, here are some tributes to one of my pet cows, the late Fat Jen:

This is a donut cheese angel with Jen's massive face used as the source image.
Er, too damn cute.
00009-1345483140-cheese donut  angel.png
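(The "source image" workflow here is img2img: you start from a photo instead of pure noise and let the sampler pull it toward the text prompt. A minimal diffusers sketch for the same idea; the file names, checkpoint id, and strength value are placeholders of mine, not the actual settings used above.)

[CODE=python]
# Hedged img2img sketch: initialise from a photo instead of pure noise.
# File names, checkpoint id, and strength are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("jen_face.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="cheese donut angel",
    image=init,              # older diffusers versions called this argument init_image
    strength=0.6,            # lower values stay closer to the source photo
    guidance_scale=7.5,
).images[0]
result.save("cheese_donut_angel.png")
[/CODE]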

So I brought in a cat and copied an artistic style. Still not right.
00044-3048881987-cat, cheese, angel, Anton Mauve.png

Nope and nope.
00021-1675012291-cheese angel.png 00022-3229105473-cheese angel.png

Dead Cheese plus Jen's face. That's better!
00023-3800136631-cheese, angel,dead.png



Since sweet Luci the loveable schitzocow (if you know who she is, you know - yes I'm a Beauty Parlor kiwi) has reappeared online, I decided to see if AI could make her unicorn dreams come true. It can!
unicorn, magical, sparkly plus Lucinda's face
00064-957081916-unicorn, magical, sparkly,, Eric Peterson.png 00063-2056195922-unicorn, magical, sparkly,, Eric Peterson.png 00058-2226446703-unicorn, magical, sparkly, Aubrey Beardsley.png


Oh, this is what I got by asking for a scary Halloween kiwi monster:
00019-2755665189-halloween, kiwi, scary, anime, monster_, Carl Walter Liner.png
 