AI Art Seething General

Status
Not open for further replies.
How does AI art generation work anyway? Like, does "scraping" really copy or "steal" whole images from artists?
Training is basically a process of creating mathematical associations between images and captions. Modern AI works as well as it does because of the immense amounts of data used. Basically, the more captioned images your model is trained on, the better images it will produce. This is assuming that most of the images are high quality and have relevant and detailed captions.
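As a toy illustration of what "creating mathematical associations" means, here's a sketch that fits a linear map from fake caption vectors to fake image vectors by gradient descent. Everything here (the dimensions, the linear model, the data) is made up for illustration; real models are deep networks trained on billions of pairs, but the principle is the same:

```python
import numpy as np

# Toy sketch: learn the association between "captions" and "images"
# by nudging weights toward the data, step by step.
rng = np.random.default_rng(0)
true_map = rng.standard_normal((4, 4))    # the relationship to be learned
captions = rng.standard_normal((100, 4))  # 100 fake caption embeddings
images = captions @ true_map              # their paired "image" embeddings

weights = np.zeros((4, 4))                # the model starts knowing nothing
for _ in range(500):
    predictions = captions @ weights
    gradient = captions.T @ (predictions - images) / len(captions)
    weights -= 0.1 * gradient             # small step toward the data

# The learned weights capture the relationship between the two spaces,
# not stored copies of the individual training pairs.
print(float(np.abs(weights - true_map).max()))
```

The point of the sketch: what ends up in the weights is the mapping itself, which is why more (and better-captioned) data produces a better mapping rather than a bigger archive.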

The product of the training process is a model weights file. For Stable Diffusion 1.5, this was about 4GB. For Stable Diffusion XL and its variants it's about 7GB. For bigger models like Flux and Qwen Image it's around 20GB. SD1.5 was trained on roughly 600 million images, and the others on even more. Obviously these file sizes aren't big enough to contain copies of the training data. The files contain high-dimensional matrices which map relationships between text and image data, allowing generation. So when you enter a prompt, the model is basically doing some math on your words to come out with the resulting image.
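You can sanity-check the "not big enough to contain copies" claim with back-of-the-envelope arithmetic. Assuming 2-byte (fp16) parameters, which is a common checkpoint format:

```python
# How many parameters fit in a weights file of a given size,
# assuming 2 bytes (fp16) per parameter.
def params_from_filesize(gigabytes, bytes_per_param=2):
    return gigabytes * 1e9 / bytes_per_param

sd15_params = params_from_filesize(4)      # ~2 billion parameters max
# Spread 4 GB of weights across 600 million training images:
bytes_per_image = 4e9 / 600e6              # under 7 bytes per image
print(sd15_params, bytes_per_image)
```

Under 7 bytes of weights per training image, versus tens of kilobytes for even a small JPEG: there is simply no room for the model to be an archive of its training set.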

The key insight that's enabled the current AI boom is that when training runs exceed a certain size, surprising things happen. If you train a small model on a single picture of a bird and then prompt it for a bird, you will get more-or-less that same picture. Congratulations, you've invented the world's shittiest compression.

But if you train a model on millions of pictures of birds, all of different types, in different styles, different positions and so on (with detailed captions describing all that), then at some point it learns the essence of what a bird is, or at least how to draw one. A lot of antis will take issue with the word "learns" here because the AI is not human, it's not learning like humans do, but I don't think we really have a better name for it. Human artists learn by observation and some analogous process happens with image generation AIs.

The other part of the process is diffusion. The model takes two inputs: an initial canvas and a prompt. In the case of pure text to image, the canvas is a mass of randomly generated noise, something like this:
(attached: an image of random multicolored noise)
The model then iteratively denoises the image based on the prompt you gave it. You can think of this as kind of like trying to see pictures in the clouds. Over enough denoising steps, this random noise resolves into a picture containing the elements you prompted for, based on what the model has learned about the aspects of your prompt.

You can also start with your own image instead of using random noise: this is the basis of image-to-image. The model adds noise to your input and applies the same denoising process to it. More advanced interfaces let you control the amount of denoising to be done, and you can achieve various things with that, from smoothing out lines and blending disparate elements of an image to changing its art style. Here's a fun post (archive) about that from the early Stable Diffusion days.
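The image-to-image idea can be sketched the same way: instead of starting from pure noise, blend the input image with noise at a chosen strength. The blending below is a simplification of how real samplers schedule noise, but it shows why low strength only smooths details while high strength lets the model restyle freely:

```python
import numpy as np

rng = np.random.default_rng(1)
source = rng.random((8, 8))  # stands in for the user's input image

def add_noise(image, strength, rng):
    """Blend the image with random noise; strength=1.0 would be pure noise."""
    noise = rng.standard_normal(image.shape)
    return (1 - strength) * image + strength * noise

light = add_noise(source, 0.2, rng)  # mostly source: denoising mainly smooths
heavy = add_noise(source, 0.9, rng)  # mostly noise: model can change a lot

light_err = float(np.abs(light - source).mean())
heavy_err = float(np.abs(heavy - source).mean())
print(light_err < heavy_err)
```

The lightly-noised start stays close to the original, so the denoising loop has little left to reinvent; the heavily-noised start is nearly a fresh canvas.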
In any case, however AI art generation works, the end result is usually not substantially similar to any particular human artist's work -- much less a "pirated" copy of paid work. BTW, I'm not a lawyer, but I'm pretty sure copying a work to train an AI -- assuming that's what's done -- is not necessarily copyright infringement in and of itself, if the copy used for training isn't redistributed online.
I think this tweet from the AI Derangement Syndrome OP is really apropos:
(attached: screenshot of the tweet)

A key part of the NYT lawsuit against OpenAI seems to be that if you prompt ChatGPT with a large portion of an NYT article, it will produce the rest of the article and can therefore be used to bypass the paywall. But that's maybe the lamest thing you can do with AI and not really what it's for. You can commit copyright infringement with these tools, just as you can with Photoshop or Word, but to argue that any output is de facto infringement because it is based on training data seems to me to be an insane expansion of copyright as a concept. Like, some people are basically proposing that every time anyone generates an image, $0.0000001 should be paid in royalties to every fanartist on Twitter for their infinitesimal contribution to the weights. You can call this stuff "high-tech collage" or plagiarism or art theft, but you're really stretching the definitions of those terms.
 
something like this:
When I was using some AI art generator, I sometimes got pics that looked like that.

I think this tweet from the AI Derangement Syndrome OP is really apropos:
So it's like I saw it earlier then?

If a human artist were to use bits and pieces of other artists' work to make a collage, only the copyright fanatics -- like those on deviantART -- might claim it's "art theft". And if a human artist merely references a bunch of different works while making a drawing, only the most fanatical of those would claim that's "art theft". But if a generative AI does the same things*, suddenly a whole lot of Redditors and SJWs -- and of course copyright fanatics -- will REE that it's "art theft".
 
I'm essentially reposting from elsewhere, but it's definitely worth bringing up in this thread.

I was recently reminded of The Oatmeal (Matthew Inman), the old webcomic artist with occasional wordy and woke tendencies in similar fashion to XKCD. Also the creator of the casual card game Exploding Kittens that you might have seen alongside board games on retail store shelves. Upon this reminder, I checked to see if he's still around, and yes, not only is he still making comics on occasion, but he also premiered an Exploding Kittens cartoon show on Netflix last year.

Anyway, The Oatmeal's most recent comic from just a few days ago is coincidentally a rant about AI art:
(attached: screenshot of the comic)


I'm not going to copy the whole thing here because it's thirty separate images. Just go to the link (or archive) where it's easier to read. But tldr:
  • Are you using AI art to express yourself or an idea without the countless hours of drawing practice? FUCK YOU.
  • Are you an artist using AI art as a tool to fill in the ancillary parts of your work (e.g., backgrounds) while you focus on the main parts that are important to you? You're putting artists who specialize in those ancillary parts out of work!
  • It's okay for an artist to use AI art as an "administrative tool" like Adobe's Magic Wand Tool. But then again... suffering for your art adds a "layer of richness. And in that regard, AI art is bankrupt."
 
I was recently reminded of The Oatmeal (Matthew Inman), the old webcomic artist with occasional wordy and woke tendencies in similar fashion to XKCD. Also the creator of the casual card game Exploding Kittens that you might have seen alongside board games on retail store shelves. Upon this reminder, I checked to see if he's still around, and yes, not only is he still making comics on occasion, but he also premiered an Exploding Kittens cartoon show on Netflix last year.

Anyway, The Oatmeal's most recent comic from just a few days ago is coincidentally a rant about AI art
That's rich coming from Inman, whose professional background is in SEO (archive). The Oatmeal has always been a cynical trend-chasing operation, carefully SEO-optimized slop tuned to the tastes of the average redditor. I am certain that if Inman had started his comic today rather than in 2009, he would be leaning heavily on AI.
 
(Yes; he always talks that way in his videos.)

The background is that King Cradi hacks Pokémon GBA games (using binary tools, for those who are curious). He made the popular Emerald Mini, which is a smaller, faster-paced version of Pokémon Emerald. He was also working on a full-scale hack called 'Craditopia', but he seems to have shelved that project for now, since he is busy on a sequel to Pokémon Ruby.

In this video, he talks about how he made the title background.
  1. Take a scan of the Groudon card from EX Hidden Legends.
  2. Crop the art with Microsoft Paint.
  3. Put the art through the Upscayl AI upscaler.
  4. Use Photoshop Editor to separate Groudon's pattern.
  5. Use Photoshop Editor to make a black silhouette of Groudon.
  6. Colour the pattern blue.
  7. Use Photoshop Editor to shrink the art down to the GBA screen.
  8. Touch up the pixels with Hexmaniac Advance.
  9. Add a glowing eye effect with Hexmaniac Advance.
  10. Request pixel volcanoes from ChatGPT.
  11. Use Photoshop Editor to make black silhouettes of the volcanoes.
  12. Use Photoshop Editor to shrink the volcanoes down to the GBA screen.
From what you can see on the original page, a lot of commenters bemoaned the use of AI, a few even ditching the game. Others brought up that there are readily-available free sprite resources and people willing to contribute their own custom-made sprites; one commenter even outright pixelled a volcano before trying to join King Cradi's Discord server. Another brings up the irony of using 'environmentally-unfriendly' AI while making a hack based on the pro-environmentalist Pokémon Ruby. However, there were a few who defended him.
 
AI art is inherently less valuable, it is already hurting real artists and it's going to cause a shitton of insane social problems and exacerbate all of the current ones. I don't really see an upside other than the frivolous novelty of writing a thing you want to see and getting a picture that sometimes looks like it.
 
insane expansion of copyright
It is. If Jasper Johns and Warhol could ply their respective trades without too much legal trouble then what AI models do is so far beyond that in terms of 'transformative' that it doesn't hold much legal weight.

Logically, at least. What happens in court has only so much relation to that.
The Oatmeal has always been a cynical trend-chasing operation
Oh boy has it. Mediocre art combined with analogies whose main attraction is they say 'fuck' a lot. Has the guy been forced to make a retraction to his breathlessly ecstatic Model S review back in the day?
 
it's not learning like humans do
I don't agree. It is. If I want to know what a bird looks like then I go on google and look up the name of the bird. If I just look at one of them then I'll probably fuck up some details, if I look at loads then eventually I'll get all the details correct. If you don't know what something is then you look at images of it and then learn what it is. It's not learning in the same way in the sense that like it's silicon instead of neurons but it's still functionally pretty close. It's all just electrons anyway and electrons don't even fucking exist. Clouds of probability make up everything and here I am being called a gambling addict for living in a bookie smh eating food is more probability than the horses are and that doesn't even have any payout other than you're fat.
Isn't it strange how often they tell on themselves. I've seen many, many AI art accounts that get more likes than this post. And I've seen many artists enjoy using it. A better analogy is that it's like a fast food restaurant on the edge of the city that you always think is empty, but that's because you work a night shift. You're just admitting that you're insulated from the average person.
 
AI art is inherently less valuable

Am I really gonna pay some asshole burnout with a shitty attitude to do a basic portrait for an inflated price, hoping they don't run with the money, or, am I going to go the other two routes: type the words or learn to do it myself by hand?

it's going to cause a shitton of insane social problems

Oh, you mean like all of the BlueSky(Formerly Twitter) artists who constantly bitch about how they are fucked sideways by their bad financial decisions and time management, who form parasocial bonds with their pay pigs/personal army? The current paradigm of "art culture" cannot die soon enough.
 
You're putting artists who specialize in those ancillary parts out of work!
What is it with the seemingly sudden push to claim that human artists need to be paid all the time for anything they ever make anyway?

And what if I have some drawing skill and I make what I want myself, instead of paying out the ass for some jerky artist to do it? :smug:

The current paradigm of "art culture" cannot die soon enough.
I think that just before AI-generated art hit the scene, a number of artists were claiming it's "entitled" to think commission prices are too high. Even in the crap economy.
 
Is Kurzgesagt funded by the WEF, or was that just a meme?

An entirely AI-generated animated film titled "Mustafa Kemal" has hit cinemas in Turkey. It's about Mustafa Kemal Atatürk (the founder of Turkey, whose image gets milked for money nowadays).
(attached: the film's poster)
https://m.youtube.com/watch?v=wjJzO9VBvBg (can't embed the video on mobile; comments are closed)

eta: here's the film's Instagram page:

You can see some clips from the film and other AI-generated promotional content. I didn't see any comments pointing out that it's all AI, so I assume they curate the comments well.

The studio produced another animated film titled "Mustafa" a few years ago. Unlike the new one, it seems to be an actual animated film, not AI-generated. The same page features clips from the Mustafa film, so you can compare it to the new Mustafa Kemal film.
 
Oh, you mean like all of the BlueSky(Formerly Twitter) artists who constantly bitch about how they are fucked sideways by their bad financial decisions and time management, who form parasocial bonds with their pay pigs/personal army? The current paradigm of "art culture" cannot die soon enough.
No, I'm talking about the fact that boomers already can't tell if these pictures and videos are real or not. But sure let's sacrifice our ability to trust reality because bluesky people are faggots
 
No, I'm talking about the fact that boomers already can't tell if these pictures and videos are real or not. But sure let's sacrifice our ability to trust reality because bluesky people are faggots
If you think boomers falling for obviously fake images is anything verging on new or ai related you are on the same level of retarded as they are.
 
If you think boomers falling for obviously fake images is anything verging on new or ai related you are on the same level of retarded as they are.
How about when it's 2028 and someone generates a shaky phone video of trump in the oval office talking about a false flag suicide bombing attack on his voters or something? Do you think this will never be a problem?
 
No, I'm talking about the fact that boomers already can't tell if these pictures and videos are real or not. But sure let's sacrifice our ability to trust reality because bluesky people are faggots
Hand made art: Realism to the point of not being able to tell if it's a photo or not is celebrated as an amazing achievement.

Video games: graphics too real to tell from reality are considered an amazing achievement

AI videos and art: Too real, don't like.
 
How about when it's 2028 and someone generates a shaky phone video of trump in the oval office talking about a false flag suicide bombing attack on his voters or something? Do you think this will never be a problem?
Why would I? Political propaganda has always used fake and misleading footage. Why is this any different? Nothing about AI is even remotely new. It is the exact same thing that we have had for centuries just made with a different medium. Niggercattle will continue to be niggercattle like they always have been. It's going to be used for countless nefarious reasons regardless of what I do so I might as well have a good old wank in the meantime.
 
Why would I? Political propaganda has always used fake and misleading footage. Why is this any different?
"UHHH WHATS THE DIFFERENCE BETWEEN AN OUT OF CONTEXT QUOTE AND MAKING ANY FAKE VIDEO OF ABSOLUTELY ANYTHING WITH NO EFFORT???"

Nobody has ever been more retarded than ai people. Go play in traffic you stupid fucking faggot.
Hand made art: Realism to the point of not being able to tell if it's a photo or not is celebrated as an amazing achievement.
Yeah bro the day someone hand draws or spends 5 years in unreal engine creating photorealistic videos of politicians saying things they haven't said, I'll be against those too. The ease and the speed IS the problem for misinformation.
 
"UHHH WHATS THE DIFFERENCE BETWEEN AN OUT OF CONTEXT QUOTE AND MAKING ANY FAKE VIDEO OF ABSOLUTELY ANYTHING WITH NO EFFORT???"
People have ragged on the "Very Fine People" quote from Trump for years. It takes little to no effort to propagate a baseless claim.

Meanwhile there are thousands of people who are hyper-critical of any random footage just because it could be AI, even if it isn't.
 