AI Art Seething General

Someone ought to feed an AI a bunch of reddit and xitter to create an AI that takes the "jobs" of people being whiny bitches on social media. That would really piss them off.

Imagine an AI reddit supermod.
Isn't that what Reddit is already kind of doing? They're getting into the AI space, I think.

If they weren't useless idiots they wouldn't be getting replaced in the first place. Sucks to suck.
In the whole "what is art and what isn't?" debate, I've yet to make the leap of considering digital illustrations art. Art, to me, has to be something physical and tangible; an object, preferably with a limited number of copies. Digital illustrations that remain on computer screens are not art.


Those "anti-AI glazed" pictures give me headaches. Reminds me of the chromatic aberration-abusing tards on tumblr. They'll even admit it looks like shit because of it, but post it anyway.
View attachment 5865879
View attachment 5865880
Not saying this art necessarily sucks; it's just that the glaze keeps me from giving it more than an annoyed squint.
The thing about the glaze is it'll just get scraped and associated with the artist's work, both by humans and AI. Art forgery is still a problem today, and even artists of the past had their works forged. Not once did it occur to any of these artists, past or present, to diminish their work and effort over an imperceptibly tiny threat of theft or forgery. I'll take these faggots a little more seriously if we meet in person with their art proudly displayed to the general public; you know, where you get to receive brutally honest feedback that your art is mid.
 
If you can't create anything without the help from some strings of code that mash up stolen images together to dictate to you what to draw
1. Not how neural image gens work; see the explanation from a few posts ago.

2. Search sucks ass these days, and AI tools are now much easier to corral to *actually* get what you're looking for, even considering the frankly ridiculous amount of censoring and bias in them.

And hell no to using a site like Pinterest as a source. Also weird you'd assume using generated images as a base means that artist is just copying from them directly. How would that be any more likely than someone tracing over images they grabbed from Google? That shit happens and has happened plenty.
 
How would that be any more likely than someone tracing over images they grabbed from Google?
Even going by the link, by all understanding the AI is just referencing the data it learned from to create an original image.

And Twitter artists will kill themselves unprompted before admitting they do exactly the same. Having learnt what this and that looks like, having studied this and that style, having referenced this and that image.
These niggers’ egos are so inflated they genuinely believe they could sue if AI were to ‘’’’include’’’’ their visual excrement in the ‘’’’collages’’’’ of ’stolen’ work, among the millions it references.
 
But what if the digital drawing was lost with no archive of it though?
All things die. That's what makes art collections, coin collections, whatever collections special. Digital illustration work can be reproduced indefinitely and plastered on walls, cases, phones, whatever. Digital illustration used to be its own recognized skill, and it was an honor to be known as an illustrator. Most of all, it was lucrative for the artist. Digital illustration's reproducibility makes it an overlapping subset of business/commercial/advertising art. There's a big drive for artists to "make a living" off their work, which makes the illustration path the best (read: easiest) path to take. The problem with every "artist" taking that route is that you're always going to be working for someone else. Usually that's a company, where you learn the corporate standards that invariably seep into you. Inevitably, we are left with highly uncreative people who think corporate MS Paint clip art should be put next to Starry Night.

Also, walk around an art museum, even a MoMA: do you get the same sense of scale and weight looking at a painting twice your height as you do on your phone screen? How about a carefully carved statue that rises like a monolith? Do you get the same sense scrolling through 500px, Instagram, or wherever else "artists" post their work? Honestly, the only subset of artists I still get any joy from and respect are graffiti artists. They don't ask for anything, they don't impose on anyone; they are the rawest form of human expression, existing simply to exist for a brief moment in time.
 
I've seen a lot of people on DA bitching about AI art, and you know what's the ironic thing? Their galleries are full of those shitty fill-in memes, which barely take any effort either, or poorly drawn fetish art. At least AI art is more appealing than that garbage. Hell, AI art can even be helpful for art ideas; see people redesigning/redrawing AI Pokémon and other fan-made stuff.
 
I've seen a lot of people on DA bitching about AI art, and you know what's the ironic thing? Their galleries are full of those shitty fill-in memes, which barely take any effort either, or poorly drawn fetish art. At least AI art is more appealing than that garbage. Hell, AI art can even be helpful for art ideas; see people redesigning/redrawing AI Pokémon and other fan-made stuff.

/beg/ spergs are the loudest to complain because they've got nothing better to do than doompost, while pros quietly add AI to their workflow. As they should, lol. If it makes them feel better, AI could never replicate the sheer autism their art reeks of. There's probably a LoRA for inflated-knee kinks out in the wild.
 
I also seriously doubt it can't be bypassed with something like img2img or ControlNet anyway. They're just making their art shit for no benefit, because they're retards. That's why they're being replaced.
It'll absolutely do nothing against img2img because that's a highly controllable process with full user interaction that's fiddly and requires a lot of tweaking in the first place. I often add noise to source images on purpose when doing img2img because it can even help. Of course it's not even meant for that, but I've seen AI complainer retards test these things with img2img and declare victory because they don't know what they're doing.
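To illustrate the noise point: img2img re-noises the source latent anyway before denoising, so a little extra Gaussian noise in the input mostly just loosens up the result. A toy NumPy sketch of that preprocessing step (purely illustrative, not any particular UI's feature):

```python
import numpy as np

def add_gaussian_noise(image, sigma=10.0, seed=0):
    """Add Gaussian noise to a uint8 RGB image and clip back to [0, 255].

    Mild source noise is harmless (sometimes helpful) for img2img,
    since the sampler noises the image again before denoising it.
    """
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# toy 4x4 grey "source image"
img = np.full((4, 4, 3), 128, dtype=np.uint8)
noisy = add_gaussian_noise(img, sigma=10.0)
```

The `sigma` knob is the whole story: at glaze-like magnitudes the perturbation is just another bit of noise the denoiser happily eats.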

What it's supposed to do is poison them as training data (specifically for style transfer, which is a fairly rare usage nobody really gives a shit about anyway, though the paper suspiciously avoids mentioning this specificity in the abstract), as though the work of any loser artist dumb enough to fuck up their art like this wouldn't be culled during the quality-filtering stage of data preprocessing anyway.
In that sense I'm sure the adversarial noise does what it's supposed to for the models it was designed for, but all that paper or ones like it do is highlight a technical vulnerability that'll be corrected... except after skimming the paper I'm not sure it wasn't a scam in the first place, because except for very particular images, they'd probably have been transformed enough during normalisation for last-gen models that all the 'filter' would do is make them look shitty. They say it doesn't, but I don't believe them.
 
What it's supposed to do is poison them as training data (specifically for style transfer which is a fairly rare usage nobody really gives a shit about anyway?
I think realistically most usage of this will be to replicate the style of incredibly popular media, like what we've seen with those AI Pixar memes, and it won't be used for largely unknown concept artists.

I'm just saying, as far as styles go, the ones people want to copy are not exactly the "little guys." They will be people who are already titans or household names in the industry: Tim Burton, Jhonen Vasquez, Akira Toriyama, etc. They will be the biggest victims of style transfer.
 
I think realistically most usage of this will be to replicate the style of incredibly popular media, like what we've seen with those AI Pixar memes, and it won't be used for largely unknown concept artists.

I'm just saying, as far as styles go, the ones people want to copy are not exactly the "little guys." They will be people who are already titans or household names in the industry: Tim Burton, Jhonen Vasquez, Akira Toriyama, etc. They will be the biggest victims of style transfer.
Yeah, but you get that from regular model training. As in you bake a Pixar model/LoRA/whatever and just gen once with that, rather than generating images then going through a secondary style transfer process (which is all Glaze supposedly defeats).

An exception might be for realtime style transfer of rendered scenes, like for a videogame that looks like a Ghibli movie or whatever. But this'd never do anything about a bespoke system for a project like that, and yeah there's never going to be a market for making your shit look like deviantart crayon scrawlings on purpose.
 
An exception might be for realtime style transfer of rendered scenes, like for a videogame that looks like a Ghibli movie or whatever.
Or an actual non-AI example: Cuphead, which was a tribute to Fleischer Studios cartoons. It will be entirely legal to use Swing, You Sinners by the end of 2025 or so, when it enters the public domain, but you could already make AI art in a similar style without violating copyright (imo). Just not the actual characters (yet).
 
Here it is, boyos: the freighter of anti-AI cope has arrived from TikTok! Prepare to suffer!

 
I heard some people have developed models that only take a single-digit number of generation steps to create good results. They said it's good for animations, iirc.
For a realtime neural style transfer system that's what you need, but I don't think I've seen any super great examples yet. Plenty of Unreal Engine demos if you look.
For diffusion-type systems there's progress (Lightning SDXL is fun to play with, though the 1-step version was a work in progress last I checked), but they aren't quite realtime on consumer hardware. Fast, though: 1-second gens that look almost as good as 1-minute gens with other models, just a lot more limited in terms of subject/creativity due to aggressive model pruning. But that's pretty much what you want if the goal is outputting a single style, so it could be an interesting approach even though style transfer isn't really the focus (with even more pruning, and because img2img is faster than raw genning).
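On the speed point: diffusion sampling cost is roughly linear in the number of denoiser (U-Net) passes, which is why step-distilled models like Lightning get close to realtime. A toy stand-in that just counts passes (no real model, purely to show where the 50x comes from):

```python
from dataclasses import dataclass

@dataclass
class ToyDenoiser:
    """Stand-in for a diffusion model's denoiser; counts forward passes."""
    calls: int = 0

    def __call__(self, x, t):
        self.calls += 1   # each call stands in for one expensive U-Net pass
        return x * 0.9    # pretend to strip a bit of noise

def sample(denoiser, steps):
    x = 1.0               # stand-in for the initial noise latent
    for t in reversed(range(steps)):
        x = denoiser(x, t)
    return x

slow, fast = ToyDenoiser(), ToyDenoiser()
sample(slow, 50)          # classic multi-step sampling
sample(fast, 1)           # distilled "Lightning"-style single step
print(slow.calls, fast.calls)  # prints: 50 1
```

Same wall-clock scaling applies to img2img, which starts from a partially noised latent and so runs even fewer steps; that's the "img2img is faster than raw genning" bit.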

In both cases the problem is temporal stability, like you can get it looking like a Munch painting only for your big swirly features to appear and disappear in random places as you move the camera. So I'm curious what's meant to be good for animations.
 