AI Derangement Syndrome / Anti-AI artists / Pro-AI technocultists / AI "debate" communities - The Natural Retardation in the Artificial Intelligence communities

> Artists don't like the fact that they need to financially support a company to use an AI protection tool that is infamously costly in performance.
I find it funny that people still think they're at all safe from AI as long as some esoteric filter (which has been proven ineffective) is used. Are we really going to assume every AI bro is so braindead that they won't just screenshot the image they want to use? The exact same shit happened with the NFT bros, and artists fell into the same trap.
By the way, this Duchess Celestia chick hardly does anything other than bitch about AI and how "IT'S OVER"™ for artists. I used to watch her, but then realized almost all of her content is like this, so I unsubbed.
 
> By the way, this Duchess Celestia chick hardly does anything other than bitch about AI and how "IT'S OVER"™ for artists. I used to watch her, but then realized almost all of her content is like this, so I unsubbed.

That's just a lot of art-tubers in general. If they aren't doing oh-so-relatable quirky art challenges, art tutorial #84838343, or a "relatable art struggles" video, they're whining about something only the terminally online give a flying wrecking ship about, whether it's AI or the latest TikTok/Instagram drama that, again, only the terminally online care to spend hours of their day whining about.

I'm seeing a lot more doomer videos from artists, too. Nihilists should have to hand over their medal for how much "IT'S OVER"™-ism the art community has been putting out, for years now, probably since before AI went mainstream.
 
I've always used em-dashes, and even on ancient computers I went out of my way to make them available for print everywhere that was an option. I guess they trained all the LLMs on my shit. It makes perfect sense, as I am clearly very clever and ever so smart.

I do understand the talk about AI being a bubble, because it obviously is; this is not sustainable, and some companies/investors will end up broke or with very bloody noses. The same happened with computers and the internet. That didn't mean those technologies "went away", though, and I do not understand how people come to that conclusion. It really is like all the people in the '80s telling me with perfect confidence that home computers were a fad nobody would be talking about in five years. I understand that these are young people who weren't around back then, but I don't understand why they can't learn from recent history. The parallels are striking.
 
> I find it funny that people still think they're at all safe from AI as long as some esoteric filter (which has been proven ineffective) is used. Are we really going to assume every AI bro is so braindead that they won't just screenshot the image they want to use? The exact same shit happened with the NFT bros, and artists fell into the same trap.
How exactly do they bypass the filter? Through a screenshot, I mean.
> By the way, this Duchess Celestia chick hardly does anything other than bitch about AI and how "IT'S OVER"™ for artists. I used to watch her, but then realized almost all of her content is like this, so I unsubbed.
> That's just a lot of art-tubers in general. If they aren't doing oh-so-relatable quirky art challenges, art tutorial #84838343, or a "relatable art struggles" video, they're whining about something only the terminally online give a flying wrecking ship about, whether it's AI or the latest TikTok/Instagram drama that, again, only the terminally online care to spend hours of their day whining about.
>
> I'm seeing a lot more doomer videos from artists, too. Nihilists should have to hand over their medal for how much "IT'S OVER"™-ism the art community has been putting out, for years now, probably since before AI went mainstream.
I thought she was one of those art-tubers who ranted about AI content and gloated, saying "IT'S OVER NOW FOR AI BROS!!!!". These people need to STFU, if nothing else.
 
> I find it funny that people still think they're at all safe from AI as long as some esoteric filter (which has been proven ineffective) is used. Are we really going to assume every AI bro is so braindead that they won't just screenshot the image they want to use? The exact same shit happened with the NFT bros, and artists fell into the same trap.
> By the way, this Duchess Celestia chick hardly does anything other than bitch about AI and how "IT'S OVER"™ for artists. I used to watch her, but then realized almost all of her content is like this, so I unsubbed.
There are so many digital artists who slap annoying watermarks onto their works to try to prevent people from training on their output. Some are pretty innocuous, but at a certain point it starts to ruin the art piece, which is pretty ironic. Thankfully, the professionals don't care (since they have real jobs), but it's pretty funny seeing some mid-tier Twitter artist screech about it as if anyone wants to scrape their shitty Tumblr furry art.
 
> There are so many digital artists who slap annoying watermarks onto their works to try to prevent people from training on their output. Some are pretty innocuous, but at a certain point it starts to ruin the art piece, which is pretty ironic. Thankfully, the professionals don't care (since they have real jobs), but it's pretty funny seeing some mid-tier Twitter artist screech about it as if anyone wants to scrape their shitty Tumblr furry art.
At least the Glaze shit died off. The worst trend in art outside of the coomer shit. Yeah, just put every post you make through a kaleidoscope, and somehow that fixes everything, even though the top result for 'glaze ai' was a video of some tool removing it perfectly.
 
> How exactly do they bypass the filter? Through a screenshot, I mean.
I don't think it's the screenshot that bypasses it; people often mistake what Glaze and Nightshade are for. They're meant to confuse training by making the image be misidentified, so models can't accurately build latent representations from it.

The problem is that Glaze only works against whatever model it was tuned for, and training algorithms can differ wildly. From what I understand, Glaze only works on select popular models, not to mention that just as much financial backing can go into a counter-adversarial algorithm that overcomes it. I don't think it's far-fetched to make a model that can perceive these disturbances, which are still visible to the human eye, and just recreate the image without them.

Also, I'm really skeptical of Nightshade, because it sounds a little too much like snake oil.
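For anyone curious what "confusing training" means mechanically: tools in this family add a small, pixel-budgeted perturbation chosen to push the image's features toward a decoy style under some fixed feature extractor, which is also why they're model-specific. Here's a toy numpy sketch with a random linear map standing in for the real encoder; everything in it (the map, the decoy target, the single FGSM-style step) is illustrative, not Glaze's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen feature extractor: a random linear map
# from "pixels" to a feature space.
n_pixels, n_features = 64, 8
W = rng.normal(size=(n_features, n_pixels))

x = rng.uniform(0.0, 1.0, size=n_pixels)   # the "artwork"
target = rng.normal(size=n_features)       # features of a decoy style

def feat(v):
    return W @ v

# One FGSM-style step: nudge pixels so the extracted features move
# toward the decoy style, under a small per-pixel budget epsilon.
eps = 0.02
grad = 2.0 * W.T @ (feat(x) - target)      # d/dx of ||Wx - target||^2
x_cloaked = np.clip(x - eps * np.sign(grad), 0.0, 1.0)

# Pixels barely change, but the features have shifted toward the decoy.
print(np.abs(x_cloaked - x).max())
print(np.linalg.norm(feat(x) - target), np.linalg.norm(feat(x_cloaked) - target))
```

Swap in a different extractor W and the same perturbation does little, which matches the complaint above that the cloak only transfers to models similar to the one it was computed against.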
 
> I don't think it's the screenshot that bypasses it; people often mistake what Glaze and Nightshade are for. They're meant to confuse training by making the image be misidentified, so models can't accurately build latent representations from it.
>
> The problem is that Glaze only works against whatever model it was tuned for, and training algorithms can differ wildly. From what I understand, Glaze only works on select popular models, not to mention that just as much financial backing can go into a counter-adversarial algorithm that overcomes it. I don't think it's far-fetched to make a model that can perceive these disturbances, which are still visible to the human eye, and just recreate the image without them.
>
> Also, I'm really skeptical of Nightshade, because it sounds a little too much like snake oil.
I see, thank you. So some random person can make a new algorithm and fuck up the "solutions".
 
> There are so many digital artists who slap annoying watermarks onto their works to try to prevent people from training on their output. Some of them are pretty innocuous
Have any of them considered subtly inserting slurs in different languages onto everything, written in varied styles and fonts? Most wouldn't want a "cunt" or "nigger" snuck into their art, but a "عبد" (Arabic for "slave")? Hey, maybe it's just a strange mark on the page! Who really knows? I imagine it'd complicate general AI scraping for a while, given just how many slurs and fonts there are in the world.
 
I said it before in another thread, but people really, really underestimate how much data validation and classification happens before anything goes into a training dataset (now with the help of AI itself, which is incredibly important). People think companies just cram every bit of fanfiction/Tumblr art into their models, but that's not really what happens. Curation is basically the secret sauce that makes a model good. Few companies talk about their methods in detail, but what little you can glimpse in technical reports shows the scale quite impressively.

Also, in my opinion, and even going by what Anthropic has put out on the topic, there's no reliable way to poison datasets. It's just the nature of the thing; we're technologically past the point where that would work.
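For a flavor of what that pre-training validation looks like: the technical reports that do discuss curation describe stacks of cheap heuristics that throw documents away before any model ever sees them. A minimal sketch in that spirit, with made-up thresholds (not any company's actual rules):

```python
# Hypothetical document-quality filter: drop text that is too short,
# mostly symbols, or heavily repetitive before it reaches a dataset.
def keep_document(text: str) -> bool:
    words = text.split()
    if len(words) < 5:                      # too short to be useful
        return False
    alpha = sum(ch.isalpha() for ch in text)
    if alpha / max(len(text), 1) < 0.6:     # mostly symbols/markup
        return False
    if len(set(words)) / len(words) < 0.3:  # heavily repeated tokens
        return False
    return True

docs = [
    "The committee reviewed the proposal and approved it after revisions.",
    "buy buy buy buy buy buy buy buy buy buy",
    "!!!! $$$$ #### @@@@ %%%%",
]
print([keep_document(d) for d in docs])
```

Real pipelines layer dozens of rules like these, plus learned quality classifiers and deduplication, which is exactly why "they just scraped my post" is usually several filters away from the truth.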
 
https://bsky.app/profile/phineas.bsky.social/post/3m4qd6xyz5s2l (archive)
https://www.inprnt.com/gallery/phineas/prompting-with-hitler/ (archive)
bafkreig7b3gg6i2aotwqo4o7c2w34xsduupwjohljimrajfm3ittzs4rdu.jpg

Not a parody. It's this artist's true and honest opinion:

https://bsky.app/profile/phineas.bsky.social (archive)
very-chill.webp
https://bsky.app/profile/phineas.bsky.social/post/3lstevcqmls23 (archive)
 
> there's no reliable way to poison datasets
Depends on what is considered poisoned data. I doubt the normal faggot tech nerds filtered out seven decades of neo-Marxist fan fiction that had the label "academic" attached to it. The fact of the matter is, all models are already inherently poisoned on an epistemological level.
 
Stability defeats Getty in the UK. Another win for the Toe, or something.

Main takeaways:

  • Everyone suing AI companies is a retard who didn't bother to learn the first thing about how the technology works. Basically every major lawsuit has involved the claimant paring back their initial sputtering about theft, and Getty was no exception; they quickly amended their complaint to a weaker one.
  • Quote from the judge: Stable Diffusion’s AI didn’t infringe copyright because it doesn’t “store or reproduce any Copyright Works (and has never done so).”
  • The judge says Stability does reproduce the Getty watermark and that's infringement, but there was no evidence of how widespread it was or whether it caused Getty any damage. (How the fuck is this Stability's problem? Photoshop can produce infringing images too, but it's the user's fault what they do with it afterward.)
Weird how judges all over the world seem to keep coming to the same conclusion, once they learn how the tech works.
 
> Depends on what is considered poisoned data. I doubt the normal faggot tech nerds filtered out seven decades of neo-Marxist fan fiction that had the label "academic" attached to it. The fact of the matter is, all models are already inherently poisoned on an epistemological level.
It was found that text LLMs get brainrot from incorporating too many social media posts into their datasets: https://llm-brain-rot.github.io/

So, much like real people, the more braindead low-quality slop a model ingests, the more it affects its outputs. It's concerning, since the big LLM companies no doubt pay for data from Twitter and Reddit. I assume it's much, much easier to filter poisoned image datasets, and tools to remove watermarks/distortions are plentiful, so people are fighting a losing battle.
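On the image side, a crude screening pass shows why perturbation-style poison is easy to flag: the cloak lives almost entirely in high-frequency residue that clean, smooth images mostly lack. A toy numpy sketch with made-up data and a made-up threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

def high_freq_energy(img):
    # Residual after averaging with the 4 neighbors approximates
    # the high-frequency content where cloaks live.
    blurred = (img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return np.abs(img - blurred).mean()

t = np.linspace(0, 1, 128)
clean = np.outer(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t))  # smooth toy image
poisoned = clean + 0.05 * rng.choice([-1.0, 1.0], size=clean.shape)

threshold = 0.01  # arbitrary for this toy data
flags = {name: high_freq_energy(img) > threshold
         for name, img in [("clean", clean), ("poisoned", poisoned)]}
print(flags)
```

A real curation pipeline would use learned detectors rather than one statistic, but the asymmetry is the same: the defender has to hide the signal, the scraper only has to notice it.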
 