AI Art Seething General

If you spend enough time looking at LOLcows, you will notice that they have a certain way of typing. Though its usage is normally confined to the Internet, this is the first time I have actually heard anyone speak in the LOLcow way.

This is Colin speaking for the channel.

We've been getting a couple of comments from people complaining about how we made this video, mostly about how we positioned our cameras, what we wore, and how we expressed our frustrations about AI hurting the industry and people's jobs.

We know they were TRYING to be constructive, but they DIDN'T have to call us "screaming, poorly-dressed children".

So, PLEASE try and be respectful in the comments. We work REALLY HARD making these videos every week.
 
1743903138525.png
no u lol

 
The first painting ever Glazed is possibly the only thing with enough talent behind it to be worth stealing:
karla.png
Interesting.
1743931898073.png
Stable Diffusion seems perfectly able to interpret this, or at least, as able as usual. Now, I'm not training an AI, but I have a theory on an easy way to defeat Glaze.

1743931792188.png 1743931807355.png

Left is the original image. Right is the same image, sent through a slight blur filter. You can barely see the difference, but you can barely see Glaze, either. If anyone can/wants to test this, be my guest.
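For anyone who wants to script that test, here is a minimal sketch with Pillow and numpy. The "glazed" image here is a hypothetical stand-in (flat gray plus high-frequency noise, loosely mimicking Glaze's subtle perturbation), since I can't bundle a real Glazed file; swap in an actual glazed PNG to test it properly.

```python
import numpy as np
from PIL import Image, ImageFilter

# Synthetic stand-in for a glazed image: flat gray plus high-frequency
# noise, roughly mimicking Glaze's subtle per-pixel perturbation.
# (Hypothetical stand-in; use a real glazed PNG for a proper test.)
rng = np.random.default_rng(0)
base = np.full((64, 64), 128.0)
noisy = np.clip(base + rng.normal(0, 8, base.shape), 0, 255)
img = Image.fromarray(noisy.astype(np.uint8))

# The theory: a slight Gaussian blur smooths the high-frequency
# perturbation while barely changing what a human sees.
blurred = img.filter(ImageFilter.GaussianBlur(radius=1))

# Measure how far each version is from the clean base image.
err_before = np.abs(np.asarray(img, dtype=float) - base).mean()
err_after = np.abs(np.asarray(blurred, dtype=float) - base).mean()
print(f"mean deviation before blur: {err_before:.2f}, after: {err_after:.2f}")
```

Note this only shows the perturbation energy drops after a blur; whether that actually neutralizes Glaze's effect on training is exactly the open question.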
 
Double post for a follow-up, because I'm retarded and realized an obvious test.
1743974721146.png

This is the glazed version. I uploaded this multiple times, thinking I had uploaded the glaze-removed version by mistake, and hit the upload limit (lol). But anyway, I'm not convinced Glaze does shit.

Edit: With both.

1744010785184.png
Glaze:
glaze.png
1744010811351.png

No glaze:
no glaze.png
1744010831569.png
I am not convinced Glaze does anything.
 
And artists wonder why they fall so badly for lies.
 
I am not convinced Glaze does anything.
To play devil's advocate, Glaze was designed to disrupt model training, which is quite different from CLIP interrogation or inference from a base image. Relevant bsky post (archive):

So they only even claim to do anything against the most high-effort and complicated form of style mimicry, training a LoRA or new base model. Amusing how they try to dunk on "AI bros" for not understanding these technicalities when they should be much more worried about their target audience having even less understanding.

If style mimicry works well enough with inference from a base image, people will stop training LoRAs, and then Glaze will have even less reason to exist.

style.png
 

This woman has made it her whole life on her channel to bash AI art every day in every video, and now, because Trump is in it, she hates it because it's fascist now. Piss off, nigga. You are a whore from Canada. Complain about your own government and stop worrying about America's shit. Bitch is very salty today.
> bitches about ai
>"shittyanimeart" twitter handle


These creatures need to burn.
 
So this is how it started:

1744058161872.png


You know you dun goofed when John fuckin' Carmack, legendary technobro and lead programmer of Quake II, ratios your retarded take.

How it's going:

1744058415147.png


Dude doubles down and gets cooked even HARDER.

But wait, it's still going, he triples down:

1744059006566.png


Even Dave Plummer, former Microsoft dev, weighs in:

1744058769140.png


How it ended:

1744058628177.png


Dude is getting roasted in the comments, bigly. :story:
 
The people actually shitting on dev work are the bosses and CEOs.
It takes a lot of knowledge to build such a skillset.
But just as devs were taught, so can AI be.

And the choice is even simpler today. Imagine you employ such a person today. An absolute menace to everyone.
 
So this is how it started:
Honestly, as cool as I think it is, it's a pretty fucking retarded approach to 'AI video games'. AI is already incredibly good at coding; I'd much rather see an AI-coded video game than a glorified AI video generator with a videogame prompt. It's a cool tech demo, but I just don't really see what the point is.
But wait, it's still going, he triples down:
This one's beautiful, though. 'AI is just copying already existing code': you do know how most coding is done, right? Most websites are built off someone else's work, and there are code and asset stores for UE5 and Unity, and I think Godot now. If you ask someone who codes how they solve a problem, 99% of them will tell you they go on SO to see if they can copy someone else's work before even thinking about trying to do it themselves.

I did see this, though. Idk, I just fundamentally think they're overanalysing it way too much. Pretty long, so spoilered images.
bfdefrtrfbeefrt - Copy.PNG
ebrfbefrbfrddefrt - Copy.PNG
bfedsredfsdefrs - Copy.PNG
bfesbfersd - Copy.PNG



I think it's kinda interesting to look at what someone else thinks when it isn't just autistic screeching. It's more interesting than the 500000th 'AI bad because it uses electricity' post. But to be honest, I think it's mostly bullshit. I know that personally I have done nothing in the past year to improve dramatically. Yeah, OK, I have learnt how to prompt better and learned the ideal weights and certain things that the AI loves or hates to do. But that's it. The real improvement has just come from the technology.

I won't say too much, but there are two niche characters that no one likes from a gacha game that is also pretty niche. I spent ages with PixAI trying to get those two characters in the same image together, and it would never work; even editing the images myself before denoising, it would always create a homogeneous character instead of two distinct ones. I swapped to the XL models, and genuinely, in one prompt it did exactly what I wanted it to do.

Admittedly, the XL models have made much cleaner art, which is much easier to take into Photoshop and clean up before denoising; before, I just couldn't be fucked to bother because it would take so much time, whereas now it's much easier. But I would say that 95% of the reason why what I make looks better now than a year ago is simply that the technology is better.

Even since the days of the shitty old models, the AI was always pumping shit out in a mix of styles; I have no idea what that first one is meant to mean. They were always different styles, obviously AI, yeah, but it was never the exact same style; even within models you could get a wide range.

It also just doesn't really combine other art styles and 'combine with prompts'; it's normally just a hyper-generic style. It's not combining different art styles, it's just using the most generic one. I have no idea what 'combining with prompts' is even meant to mean. But I also just don't agree anyway. I mean, I see a lot of AI stuff in my feed, and I can almost instantly clock what is and isn't AI with a pretty high level of accuracy. It could just be from my own familiarity with the software, but AI stuff stands out like a sore thumb, especially Niji stuff; not in a bad way, but it's almost always instantly obvious.

I do like how, in this subtly anti-AI post, the hand drawing looks kinda AI-generated, just the way the pen appears to almost be becoming a finger. Fr though, it's not getting harder and harder every day; maybe for some people who have psyopped themselves into feeling bad, but realistically nothing has changed for actual artists in the past year, at least other than algorithm changes and other shit completely detached from AI. It is true, though, that we grow more and more every day: I am actually a form of slime mold, and every 50 minutes I split off into another copy of myself.
The problem is him getting on the anti-AI train.
The first thing I saw was the Bluesky URL; my brain refused to read anything after that.
 
That won't do shit. The UK Government will allow AI to happen. Nice try.
Quake Dad needs to learn AI if he wants to stop acting stupid. Even bros from the '80s mention how coding is a nightmare, and it still is today. Quake Dad never worked in his life; he thinks making games is just modeling and animation. It's beyond that, sweetie. If you don't code your game right, you are cooked as a game dev. Quake Dad isn't even 50 yet, so he can't understand this mess he got himself into.
This is why some fans shouldn't speak for game developers like they are poor or hungry for work. Most of them made such successful games that they won't need to work again if this AI gaming stuff is successful and ethical. Of course, it's made from the company's own assets, unlike what the idiots think who believe the Doom AI was made from stolen fan games. This will work, and it could help even Nintendo make better games.


At the 15:41 mark they talk about the AI stuff near the end of the video. Doug gives his answer.
 
https://x.com/IsThisA3DModel/status/1908928514071441550

I remember seeing this exact exchange on 4chan about "soul". It culminated in someone posting a shitty Sonic the Hedgehog piece that looked like it was drawn by a child. After all the pretentious art faggots started slobbering over the pure innocence of the childlike soul, the original poster revealed that his shitty drawing was, in fact, AI-generated.

firefox_egudZW3Yp2.png

I have literally no way to know if this drawing was actually done by a little girl or a machine, without a timelapse of the girl actually drawing it. There is no difference now.
 
Glaze is not meant to stop you from image-to-image interpretation but to poison the training by forcing the model to learn the wrong things.

First, I don't work with generative models or LLMs. All my training is related to vision systems, but I know a lot, and I mean a lot, about data and how you can affect a system's training; it was part of my master's thesis, and so on. There are some finer points of variation in how the training is performed, but the overall concepts are the same with regard to good and bad training data.

Glaze provides the training system with a wrong bias that can make it unstable; a single image will not affect this, but if enough are out there, it will diminish these systems' ability to generate.

It is as if you had an AI that could classify the difference between cats and dogs, and at some point you added a ton of images of dogs, but they were all taken outside with a blue sky. The cat images are still diverse, so the thing the network learns is not what a dog looks like, but what a blue sky looks like. It is a bit like this, where the generative model will learn the wrong lesson; instead of the values related to the image, it will learn the values related to the Glaze poisoning.
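That cat/dog scenario can be sketched in a toy numpy example. All the numbers are hypothetical, and a nearest-centroid "classifier" stands in for a real network: because every dog photo shares the blue-sky feature while the shape cue is weak and noisy, the model ends up keying on the sky instead of the animal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy "images" as 3 features: (animal shape cue, texture cue, sky blueness).
# All numbers are hypothetical, chosen so the shape cue is weak and noisy.
dogs = np.column_stack([
    rng.normal(0.3, 1.0, n),   # weak, noisy dog-shape signal
    rng.normal(0.0, 1.0, n),   # uninformative texture
    np.full(n, 1.0),           # every dog photo taken under a blue sky
])
cats = np.column_stack([
    rng.normal(-0.3, 1.0, n),  # weak, noisy cat-shape signal
    rng.normal(0.0, 1.0, n),
    rng.normal(0.0, 0.3, n),   # cats photographed on diverse backgrounds
])

# Nearest-centroid "classifier": keys on whatever best separates the
# classes, which here is the sky feature, not the animal shape.
dog_c, cat_c = dogs.mean(axis=0), cats.mean(axis=0)

def classify(x):
    return "dog" if np.linalg.norm(x - dog_c) < np.linalg.norm(x - cat_c) else "cat"

# A cat that happens to be photographed under a blue sky gets called a dog.
cat_under_sky = np.array([-0.3, 0.0, 1.0])
print(classify(cat_under_sky))  # -> dog
```

The claim is that Glaze tries to plant the same kind of spurious feature in art, so a model trained on glazed images keys on the perturbation instead of the style.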

I hope things like Glaze work; I can't prove that they do. I hope so because it would be funny.
 
I have literally no way to know if this drawing was actually done by a little girl or a machine, without a timelapse of the girl actually drawing it. There is no difference now.
Kids' art is usually really sloppy, not within the lines and so on.
Then again, a random 10-year-old:
1744120880625.jpeg

A 30-year-old feminist:
1744121272773.png


Perhaps AI is just another attack on traditional values.
 
It's like the grift is in the name. Glaze, get your subscription, Glaze! Oh fuck, Glaze 2.0 has been proven to disrupt Stable Diffusion, OpenAI, etc. Trust me, bro. All of it can be yours for $2.00 a month, but when we eventually release Glaze 3.0, we'll restrict that behind a $20-a-month subscription. #speculativefiction
 
Honestly, as cool as I think it is, it's a pretty fucking retarded approach to 'AI video games'. AI is already incredibly good at coding; I'd much rather see an AI-coded video game than a glorified AI video generator with a videogame prompt. It's a cool tech demo, but I just don't really see what the point is.
The Quake demo is just one example of what can be done with the tech. Imagine if it was trained on the real world and you had perfectly realistic visuals without texture pop-in or LOD models. Imagine if it had an infinite level of detail or unlimited particles, like you can get right up to a plant and see every leaf, or every vein on every leaf, moving realistically and not static or looping. Imagine if there was an actual simulation running alongside it to help keep the generation on track. And that's all just thinking of it in a gaming context, and not the overall implications of "fully generated world/environmental model based on nothing but video training data." What if you could train it with more than video?
 