AI Art Seething General

TLDR: presenting other people's derivative slop as his own, to scam low IQ kikes or facilitate money laundering is his medium, not the canvas or clay.
So under this logic AI art is real art at the point someone convinces a zeitgeist-"legitimate" (e.g. not-Elon-Musk) billionaire to buy it for an exorbitant fee. Intredasting, very intredasting, Guardian.
 
This article got a lot of reach.

OpenAI: 'Impossible to train today’s leading AI models without using copyrighted materials'​

As IEEE study shows super lab's neural nets can emit 'plagiaristic output'​

OpenAI has said it would be "impossible" to build top-tier neural networks that meet today's needs without using people's copyrighted work. The Microsoft-backed lab, which believes it is lawfully harvesting said content for training its models, said using out-of-copyright public domain material would result in sub-par AI software.

This assertion comes at a time when the machine-learning world is sprinting head first at the brick wall that is copyright law. Just this week an IEEE report concluded Midjourney and OpenAI's DALL-E 3, two of the major AI services to turn text prompts into images, can recreate copyrighted scenes from films and video games based on their training data.

The study, co-authored by Gary Marcus, an AI expert and critic, and Reid Southen, a digital illustrator, documents multiple instances of "plagiaristic outputs" in which Midjourney and DALL-E 3 render substantially similar versions of scenes from films, pictures of famous actors, and video game content.

Marcus and Southen say it's almost certain that Midjourney and OpenAI trained their respective AI image-generation models on copyrighted material.

Whether that's legal, and whether AI vendors or their customers risk being held liable, remain contentious questions. However, the report's findings may bolster those suing Midjourney and DALL-E maker OpenAI for copyright infringement.

"Both OpenAI and Midjourney are fully capable of producing materials that appear to infringe on copyright and trademarks," they wrote. "These systems do not inform users when they do so. They do not provide any information about the provenance of the images they produce. Users may not know, when they produce an image, whether they are infringing."

Neither biz has fully disclosed the training data used to make their AI models.

It's not just digital artists challenging AI companies. The New York Times recently sued OpenAI because its ChatGPT text model will spit out near-verbatim copies of the newspaper's paywalled articles. Book authors have filed similar claims, as have software developers.

Prior research has indicated that OpenAI's ChatGPT can be coaxed to reproduce training text. And those suing Microsoft and GitHub contend the Copilot coding assistant model will reproduce code more or less verbatim.

Southen observed that Midjourney is charging customers who are creating infringing content and profiting via subscription revenue. "MJ [Midjourney] users don't have to sell the images for copyright infringement to have potentially occurred, MJ already profits from its creation," he opined, echoing an argument made in the IEEE report.

OpenAI also charges a subscription fee and thus profits in the same way. Neither OpenAI nor Midjourney responded to requests for comment.

However, OpenAI on Monday published a blog post addressing the New York Times lawsuit, which the AI seller said lacked merit. Astonishingly, the lab said that if its neural networks generated infringing content, it was a "bug."

In sum, the upstart argued that it actively collaborates with news organizations; that training on copyrighted data qualifies for the fair use defense under copyright law; that "'regurgitation' is a rare bug that we are working to drive to zero"; and that the New York Times has cherry-picked examples of text reproduction that don't represent typical behavior.

The law will decide​

Tyler Ochoa, a professor in the law department at Santa Clara University in California, told The Register that while the IEEE report's findings are likely to help litigants with copyright claims, they shouldn't – because the authors of the article have, in his view, misrepresented what's happening.

"They write: 'Can image-generating models be induced to produce plagiaristic outputs based on copyright materials? ... [W]e found that the answer is clearly yes, even without directly soliciting plagiaristic outputs.'"

Ochoa questioned that conclusion, arguing the prompts the report's authors "entered demonstrate that they are, indeed, directly soliciting plagiaristic outputs. Every single prompt mentions the title of a specific movie, specifies the aspect ratio, and in all but one case, the words 'movie' and 'screenshot' or 'screencap.' (The one exception describes the image that they wanted to replicate.)"

The law prof said the issue for copyright law is determining who is responsible for these plagiaristic outputs: The creators of the AI model or the people who asked the AI model to reproduce a popular scene.

"The generative AI model is capable of producing original output, and it is also capable of reproducing scenes that resemble scenes from copyrighted inputs when prompted," explained Ochoa. "This should be analyzed as a case of contributory infringement: The person who prompted the model is the primary infringer, and the creators of the model are liable only if they were made aware of the primary infringement and they did not take reasonable steps to stop it."

Ochoa said generative AI models are more likely to reproduce specific images when there are multiple instances of those images in their training data set.

"In this case, it is highly unlikely that the training data included entire movies; it is far more likely that the training data included still images from the movies that were distributed as publicity stills for the movie," he said. "Those images were reproduced multiple times in the training data because media outlets were encouraged to distribute those images for publicity purposes and did so.

"It would be fundamentally unfair for a copyright owner to encourage wide dissemination of still images for publicity purposes, and then complain that those images are being imitated by an AI because the training data included multiple copies of those same images."

Ochoa said there are steps to limit such behavior from AI models. "The question is whether they should have to do so, when the person who entered the prompt clearly wanted to get the AI to reproduce a recognizable image, and the movie studios that produced the original still images clearly wanted those still images to be widely distributed," he said.

"A better question would be: How often does this happen when the prompt does not mention a specific movie or describe a specific character or scene? I think an unbiased researcher would likely find that the answer is rarely (perhaps almost never)."

Nonetheless, copyrighted content appears to be essential fuel for making these models function well.

OpenAI defends itself to Lords​

In response to an inquiry into the risks and opportunities of AI models by the UK's House of Lords Communications and Digital Committee, OpenAI presented a submission [PDF] warning that its models won't work without being trained on copyrighted content.

"Because copyright today covers virtually every sort of human expression – including blog posts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials," the super lab said.

"Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens."

The AI biz said it believes that it complies with copyright law and that training on copyrighted material is lawful, though it allows "that there is still work to be done to support and empower creators."

That sentiment, which sounds like a diplomatic recognition of ethical concerns about compensation for the arguable fair use of copyrighted work, should be considered in conjunction with the IEEE report's claim that, "we have discovered evidence that a senior software engineer at Midjourney took part in a conversation in February 2022 about how to evade copyright law by 'laundering' data 'through a fine tuned codex.'"

Marcus, co-author of the IEEE report, expressed skepticism of OpenAI's effort to obtain a regulatory green light in the UK for its current business practices.

"Rough Translation: We won’t get fabulously rich if you don’t let us steal, so please don’t make stealing a crime!" he wrote in a social media post. "Don’t make us pay licensing fees, either! Sure Netflix might pay billions a year in licensing fees, but we shouldn’t have to! More money for us, moar!"

OpenAI has offered to indemnify enterprise ChatGPT and API customers against copyright claims, though not if the customer or the customer's end users "knew or should have known the Output was infringing or likely to infringe" or if the customer bypassed safety features, among other limitations. Thus, asking DALL-E 3 to recreate a famous film scene – which users ought to know is probably covered by copyright – would not qualify for indemnification.

Midjourney has taken the opposite approach, promising to hunt down and sue customers involved in infringement to recover legal costs arising from related claims.

"If you knowingly infringe someone else’s intellectual property, and that costs us money, we’re going to come find you and collect that money from You," Midjourney's Terms of Service state. "We might also do other stuff, like try to get a court to make you pay our legal fees. Don’t do it." ®
 
There was a study that showed that AI training is infringement.

AI Training is Copyright Infringement

Press Release: A computer scientist and a legal scholar shed light on the black box of processing steps in AI training - for the first time on this scale.​

The presentation of the interdisciplinary study “Copyright & Training of Generative AI - Technological and Legal Foundations” took place today in the European Parliament.

In spring, the Copyright Initiative commissioned Prof. Dr. Tim W. Dornis (University of Hannover), in collaboration with Prof. Dr. Sebastian Stober (University of Magdeburg), to produce a tandem expert opinion on the technological and legal aspects of training generative AI models. Their interdisciplinary research provides urgently needed new insights into the technically necessary intermediate steps in the training of generative artificial intelligence. For the first time on this scale, a computer scientist and a legal scholar have jointly compiled evidence on the processing steps involved in AI training. During the event, many open questions about the use of protected materials were answered in a well-founded, reliable manner, in line with the current state of the art.

The work of Prof. Dornis and Prof. Stober focuses on the copyright assessment of the processing of protected material in AI training:
“As a closer look at the technology of generative AI models reveals, the training of such models is not a case of text and data mining. It is a case of copyright infringement – no exception applies under German and European copyright law,” says Prof. Dornis. Prof. Stober explains that “parts of the training data can be memorized in whole or in part by current generative models - LLMs and (latent) diffusion models - and can therefore be generated again with suitable prompts by end users and thus reproduced.”

Axel Voss, MEP and host of today's event in the European Parliament, expressly thanks the scientists Dornis and Stober and is pleased that
“the study not only proves that the training of Generative AI models is not covered by text and data mining, but that it also provides further important indications and suggestions for a better balance between the protection of human creativity and the promotion of AI innovation.”

“This study is explosive because it proves that we are dealing with large-scale theft of intellectual property. The ball is now in the politicians' court to draw the necessary conclusions and finally put an end to this theft at the expense of journalists and other authors,”
commented Hanna Möllers, legal advisor to the DJV and representative of the European Federation of Journalists (EFJ).

Katharina Uppenbrink, Managing Director of the Initiative Urheberrecht, emphasizes:
“It is a groundbreaking result if we now have proof that the reproduction of works by an AI model constitutes a copyright-relevant reproduction and, in addition, that making them available on the European Union market may infringe the right of making available to the public.”
The composer and spokesperson for the Copyright Initiative, Matthias Hornschuh, comments:
“A new, profitable licensing market would be on the horizon, but no remuneration is flowing, while generative AI is preparing to replace the very people whose content it lives off in their own market. This jeopardizes professional knowledge work and cannot be in the interests of society, culture or the economy. All the better that the authors of our tandem study provide the technological and copyright basis for finally setting the legal treatment of generative artificial intelligence back on its feet.”
 

There was a study that showed that AI training is infringement.
It is a case of copyright infringement – no exception applies under German and European copyright law
oh, phew, for a second i thought they might be referencing laws or countries that were actually important

keep your anti-business and anti-free speech fingers in your own rotten pies, europeans
 
If AI art were to ever get as good as NI* art, and put professional artists out of work, at least a silver lining to that would be that there could still be people who make art for fun... and aren't so fanatical about copyright (unlike many artists seem to be).

* natural intelligence
 
Imagine having the word 'prof' in your introduction yet being this retarded
Prof. Dornis. Prof. Stober explains that “parts of the training data can be memorized in whole or in part by current generative models - LLMs and (latent) diffusion models - and can therefore be generated again with suitable prompts by end users and thus reproduced.”
It's just obvious bullshit if you know anything about AI. Please show me an actual example of an AI actually reproducing any of the training material. That just doesn't fucking happen. Even in the loras I've personally trained, no result looks like any of the sample images, despite loras specifically being designed to make things that closely resemble the training images. People keep spouting this retarded idea but none of them actually have any proof. Most of the time they tell the AI to make something that looks like the bloodborne poster and then act surprised that it can make something that roughly looks like the bloodborne poster. Then you look closer and it looks nothing alike. Most people point to this study, which anyone with eyes can tell is obvious shit. The study claims to show how AI can 'reproduce' training images, yet even in the study itself they couldn't get an actual replica. The 'copies' are very clearly not the same, they look similar, but you can't be surprised that when you ask for an image of a shoe it looks like a shoe.
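For what it's worth, if you wanted to put an actual number on "looks similar but isn't the same," a crude perceptual-hash comparison is one way to do it. The C below is only a minimal sketch under assumptions: it assumes both images have already been decoded and downscaled to 8x8 grayscale with whatever image library you prefer (that step isn't shown). A tiny Hamming distance between the hashes would suggest a near copy; a large one just means the two images share a rough composition.

```c
/* Hypothetical sketch: compare a generated image against a suspected
 * "source" image with a 64-bit average hash. Assumes both inputs are
 * already decoded to 8x8 grayscale thumbnails (preprocessing not shown). */
#include <stdint.h>
#include <stdio.h>

/* Build a 64-bit average hash from an 8x8 grayscale thumbnail. */
static uint64_t average_hash(const uint8_t px[64])
{
    unsigned sum = 0;
    for (int i = 0; i < 64; i++)
        sum += px[i];
    uint8_t mean = (uint8_t)(sum / 64);

    uint64_t hash = 0;
    for (int i = 0; i < 64; i++)
        if (px[i] > mean)
            hash |= (uint64_t)1 << i;   /* bit set = brighter than average */
    return hash;
}

/* Hamming distance between two hashes: 0 means near-identical thumbnails. */
static int hamming(uint64_t a, uint64_t b)
{
    uint64_t d = a ^ b;
    int bits = 0;
    while (d) {
        bits += (int)(d & 1);
        d >>= 1;
    }
    return bits;
}

int main(void)
{
    /* Placeholder thumbnails; in practice these would come from the
     * generated image and the suspected training image. */
    uint8_t generated[64] = {0}, candidate[64] = {0};

    int dist = hamming(average_hash(generated), average_hash(candidate));
    printf("Hamming distance: %d (roughly: 0-5 near copy, 20+ merely similar)\n", dist);
    return 0;
}
```

Actual memorization studies use stronger similarity measures than this, but the point stands: "how close is it" is something you can measure rather than eyeball.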

Also copyright is for faggots anyway. Idc if I'm infringing on the intellectual property rights of fucking disney, telling me that disney will be hurt by my AI use only makes me want to use it more. It's amazing to me that AI has made people completely 180 on copyright issues, normally people say that it's smothering creativity (which is definitely does) but now everyones rulefagging over not being mean to big mouse company. Especially when most anti ai people are leftists who are normally against disney and their shit.
 
Please show me an actual example of an AI actually reproducing any of the training material.
In case it's not clear what's happening here: @github's Copilot "autocompletes" the fast inverse square root implementation from Quake III — which is GPL2+ code. It then autocompletes a BSD2 license comment (with the wrong copyright holder). This is fine.
[Attached screenshots: Copilot completing the Quake III fast inverse square root, then adding a BSD-2 license header naming the wrong copyright holder.]
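For context, since the screenshots aren't reproduced here: the routine the tweet refers to is the well-known fast inverse square root from the Quake III Arena source, which id Software released under the GPL-2. The widely circulated version looks roughly like this; it's included only as a reference for what Copilot was completing, not as the exact text shown in the screenshots.

```c
/* Widely circulated Quake III Arena routine (id Software, GPL-2),
 * reproduced roughly from the public q_math.c for reference.
 * Note: the original assumed a 32-bit `long`; on a modern 64-bit
 * platform you would use int32_t for the type pun. */
float Q_rsqrt(float number)
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = *(long *) &y;                      /* reinterpret the float's bits as an integer */
    i  = 0x5f3759df - (i >> 1);             /* magic-constant initial guess */
    y  = *(float *) &i;
    y  = y * (threehalfs - (x2 * y * y));   /* one Newton-Raphson refinement step */
    return y;
}
```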
 
Even those images are not the same though; it did not spit out a training image. You could reasonably recreate especially the Joker one with makeup and a bit of time. They look similar, but if you ask something to create an image of the Joker poster you can't be surprised when it makes something that resembles it. If you tell the AI to take a popular character and make her sit in a forest and play a guitar, then obviously it will make something similar. It didn't make something identical though; the backgrounds are completely different. It got close, yeah, but it didn't recreate the original training image. I'm not saying that AI can't create things that look like certain things if you try hard enough, but it will never spit out an exact copy of an image it was trained on. The AI isn't spitting out training images, it's creating an image that is similar to your prompt; you just used a prompt that described a poster or cutscene from a game.

It's kinda hard to read the article because of a retarded paywall. But from what I can see it looks like they're trying to say that generating something that resembles the original should be illegal? Idk why allowing people to generate shit like that should be copyright infringement. If the company was trying to sell those images then 100% I could see why you'd sue them, but saying that they should be sued because their program can make an image that resembles something else is just retarded. That's one step off of saying adobe should be sued because photoshop can make fake porn of celebrities. There are artists out there on twitter that take commissions and would draw you an image of joker or ellie; never heard of any of them being sued for directly profiting off of copyrighted characters in that case. Unless you look at Japan, which has even worse copyright laws that no one should want to recreate. Same thing for their idea that training something on copyrighted characters should be illegal: no one cared about the millions of people who have learned to draw by drawing copyrighted characters, it's just more hypocritical shit again. The only problem I could possibly see is that midjourney doesn't let you run it locally for free. If you were selling those images then yeah I'd agree with someone suing you, but saying that the company that created the generator should be sued is just retarded. The company isn't profiting off of joker or the last of us specifically, and the AI isn't designed to recreate copyrighted shit, it's just designed to take inspiration from a vast library of publicly accessible media.

Idk about the github thing because I've never used that. Then again 99% of coding is just copy pasting already anyway.
 
Has the seething accomplished anything? From what i remember a lot of companies are just using it openly and giving some faint excuse if that, places like anime dubbing people are just outright cheering for ai to replace the incestuous va communities, and i think even the places that totally dont allow it ... basically do just allow it. i ask because usually when people resist change there's some setbacks and victories for the luddites, but i havent really seen that here, and now theyre just openly talking about how theyre probably going to be able to replace game devs soon and iirc are already using ai heavily in games.

its almost like thinking you could alienate everyone in your little niche and tell the peasants to suck it up when your policies replaced them inhumanely was a bad idea that left you all alone out on your fucking asses huh?
Imagine how ridiculous and petty it sounds when a human artist starts complaining about someone else stealing their style.
you know that actually missed me but yeah people draw in each others styles all the time. so that part of it isnt even arguably theft. like you cant be sued for drawing in chibi or animating rubberhose style. holy fuck their complaints are so petty. anything to make it a moral issue instead of just showing some humility, admit "learn to code" was a shitty response to bill being fired, and admit they needed to show more basic fucking humanity to people
 
Has the seething accomplished anything?
Not really, because these people are histrionic personalities. There are quite a few good arguments they could make to turn this into more of a discourse about possible limitations on AI, but they absolutely refuse to know their enemy, keep making the same nonsensical arguments, and just can't stop shitting and pissing themselves while stomping their feet and demanding that all progress stop (lol, as if that's ever gonna happen), that everything be banned, and that all harddrives that ever contained an image model be burned OR ELSE. Who can take that seriously? The side of the artists in this particular thing is just completely impossible for the average person to sympathize with because of their behavior. That these people always acted and continue to act like assholes to everyone who's not them doesn't really help their case either, as people not involved are hard pressed to even care.

My guess is eventually the hysterics will be over and the smart ones will realize it's either sink or swim. Many will sink anyways. That's life. You didn't possibly believe drawing pictures on the internet could be a stable work choice, did you?
 
(warning potential sperging)
Understandable
OH MY FUCK!!! why are artists on the internet acting more pathetic each YEAR, it just goes to show many people here do not deserve to be in the industry, and they start making strawman videos like THIS OR SOMETHING ALONG THOSE LINES!! its EXHAUSTING!!!
It's actually something that's been happening for a while:
Artistry is becoming less and less special as more people have access and time to learn or fuck around.
A lot of artists buy into the "exclusivity" of art and the corresponding idea that they "know the true value," not the people who need to pay for it.
While there's overlap, they're operating like modern feminists.
And just like them, the consequences are setting in

This is from a lurker from art circles
 
There's another AI redraw trend going around again, as expected retards came back out again to seethe about it. Here's the original post that people are redrawing. And here's the post that seems to have started the trend? I think at least. The original has 215k likes and the redraw 140k. Honestly the AI one has a better style imo, the redraw (and all the other redraws) are just generic anime style; they look decent but not as stylised even though I thought the AI was meant to be the one copying other people's styles. I won't say that the redraw looks bad, it's still pretty good.

miku.PNG

mike 2 - Copy.PNG

However as always some speds think that redrawing AI is somehow as bad as using it in the first place (literally murdering artists to death with a gun).

retard 1 - Copy (2).PNG



I just want to ask who this was stolen from? If it's theft you can point to the person who was deprived of something they owned, right? I'll kinda agree with them though: redrawing it is part of the 'problem' because you're signal boosting the AI stuff and showing people that AI has come a long way since the dall.e shit. Also, as a thought experiment: if I were to leave a generator running that created a few million AI images of Miku and then posted them online, would people be unable to draw her ever again? I could just point at an AI image I made that way for any other Miku image and say that someone was 'referencing' my AI images. But if I'm against AI then why should I not steal their ideas? I thought that it was totally ethical to 'steal' AI art because the AI doesn't own it? I'm 'against' a fair few game companies, that's why I just pirate their shit.

And the last post is just retarded internet-grown autism again. "I'm going to shame my human artist friends for doing human art" is maybe not going to make them hate AI, it will make them hate you instead. Why would you shit talk your friends for drawing something? Is it bad for them to take inspiration from an AI post? Maybe they just liked the cool design and wanted to make their own version? Maybe they're just a normal person with agency over their life doing what they enjoy? Follow policing (or whatever this would be called) is possibly the most pervasive autistic shit online: let people do what they want, they do not owe you anything, you do not get to choose what other people do or don't do, only the person themself gets to decide that. It just screams the same thing as the last one of these I posted here a month or so ago, where the person just seems jealous.

Then there's some posts from someone I follow and enjoy their style, unfortunately they speak English so it's hard to ignore their bullshit unlike the Japanese people I follow. I wish there was a twitter extension that blocked all solely text posts if they were English.


retard 2 - Copy (2).PNG

And this just screams jealous even more than the last. Please tell me, how is the original 'sloppy'? It looks good, denying that doesn't make AI look bad, it makes you look jealous and delusional. It's also not being redrawn because it's Miku; remember a while ago when people were redrawing the cat and girl AI photo? People aren't redrawing it because it's Miku, they're redrawing it because it's a cool design and they want to draw it. Or they want to draw a pose they haven't before because they want practice, or they want to challenge themselves, or once again because they have agency over their life. Some are doing it just to show how much they can epically pwnage the AI chuds, and producing something that looks worse than the AI too. I also doubt that people are thinking they will actually hurt the original poster by redrawing the image, they probably just want attention, or are using it as practice. I also wouldn't really call it fanart, even the best redraw (above) is more or less a straight copy of the original, that's hardly fanart. Though there have been some people to draw something that would be fanart of the design instead of tracing the original. Also worth noting that the original person never asked for anyone to redraw what they made. They just wanted to share something cool they made. And pretending you don't want to guilt trip people who participate while doing exactly that is just retarded, please take an autism test your words are not matching your obvious feelings. And maybe the fact that this isn't the first nor the last time "this" trend goes around should be telling you something, maybe you should read the room and realise that most people simply don't give a shit about your soapboxing. Maybe people are just seeing a cool design and drawing it themselves; like twitter artists always have done and will keep doing regardless of your bitching.

"I have actively started to harm real artists by unfollowing and no longer liking their posts thus limiting their reach because I care about real artists" is one fucking hell of a cognitive dissonance. And most Miku artists I see are not young, not really important, but claiming that they are just retarded and that's the only reason they would partake in this trend is just more autism, shame my grandma died she would have loved to do a jigsaw from all these puzzle pieces. Not everyone shares the same views as you, acting as though anyone with different thoughts compared to you are 'young' and uninformed is only ever going to make you look closeminded and autistic. People redrawing the original are not actively harming anyone by doing so, realistically they are having 0 impact on AI usage too, yet don't you dare do something you enjoy because a random person on twitter says it's bad. Also another person trying to say a good looking image is slop yet again. Thought I guess coping and seething do traditionally go together pretty well.

uber retard - Copy.PNG

So not crediting people is now ok? The design of the original is new, the person using the AI created it, that's their design, even if they didn't draw it. If I create a design and pay someone to draw it for me in a quality better than I could do; does that artist now own the design I made? Does the Chinese child slave own the gucci designs because he's the one who prints the shirts? Is it now ok to not credit the creator of a design just because you don't like how they created it? Can I do the same to all digital artists because I prefer oil paintings to digital art? And are they not drawing their own Mikus anyway? They are drawing Miku, or should the 20-50 people I've seen redraw the AI image all create their own completely original design? And were you saying the same thing when the millionth person drew the base canon outfit too? Or do you only care about new designs now twitter has told you to?
rthhrrgvefgrghjmgf - Copy.PNG

More retarded policing. People are free to do what they want. But "commissions"? The guy was never going to commission someone to draw that for him. He didn't get "free" commissions because he was never going to pay for them anyway. Also you're on fucking twitter my guy, the amount of times people post random clothing (or even other people's art uncredited just like the recent trend) and ask people to draw characters in that outfit. Did the person stealing a random clothing png from google images and making a single tweet do more work than the person using an AI generator? From experience definitely not. Most of the redraws aren't even higher quality in the first place either.
brain damage - Copy.PNG

So I can copyright an AI design for myself then? But I thought AI steals? You're telling me that I can redraw something that was 'stolen' and now I own the copyright? Would the person who the AI 'stole' from originally not take issue with that?
i thought it was stolen though - Copy.PNG

Also once again, this is a fucking Miku trend. The people talking about copyright are just retarded. YOU do not own any copyright to any Miku design. Crypton owns the copyright to her as a character. I can't put Mickey mouse in a maid dress and start selling merch of that and pretend that I won't get any legal issues from it.

This is a reply to a (now deleted) comment that was seemingly praising the image.
japan - Copy.PNG

Have you ever used google translate before? Because it's fucking shit. "studying" might be the closest translation, but the Japanese word might be different in actual meaning. Also pretending that he isn't learning is retarded. He is very clearly learning how to best use the AI, so is anyone who uses AI. You learn what does and doesn't work, you learn how to best make a sparkling image. That's something you have to learn from experimentation and understanding the program. You might even say you have to study it. Also in another comment the guy said he edits his images by hand to remove mistakes, it could be entirely possible that's what he means when he (supposedly) says 'studying'. Getting pissed at a google translated message is just retarded. Or maybe it's not, I've been pissed off at google translate a decent amount, though I was trying to use it myself and getting a shitty output.

There was also this guy which I just find kinda funny. Considering most digital artists use at the bare minimum the erase tool, and most likely many digital only tools, it is kinda hypocritical. I cropped out the retarded rick and morty gif because that wasn't funny though.
based lol - Copy.PNG


There's also a large portion of people trying to 'expose' him for using AI to create the image. I don't know why they think it's some secret knowledge. Something tells me that "pon_pon_pon_AI" with the bio of "The illustrations were created with Nijijourney+Stable diffsion(AI)+Clip studio Edit" is going to be exposed by your comments. The guy makes it about as clear as possible that he's using AI.

I am aware that it would have been better and quicker to create a new alt account and just commented kys faggot under those posts but too late for that.

As always another kinda cool trend (if not kinda cringe from the people posting lower quality shit and pretending they made something better than the AI) that has caused totally well adjusted people to seethe again for a reason that I just don't really understand at all and go authoritarian and demand people stop doing the trend because uhh AI bad? Also I can't say for sure because the guy didn't say anything, but the original guy shared some of the redraws by human artists, he probably just thinks they're cool. The whole anti AI spat has made me think about the whole 'impersonation is the highest form of flattery' quote more in the past year than the rest of my life combined. Kinda funny that most of these trends happen to good images though. I don't think I've seen any of these trends where the image being redrawn looks like shit or has obvious problems with it. If you want to make AI look bad don't share the super polished Miku image, let me give you my images folder that's full of mistakes and schizophrenic hallucinations.

I just wanted to say compare the comments on the original post. Almost every Japanese comment is positive. Almost every English comment is negative. I've noticed this too. If I had to put my money on it it's just American autists again. This type of shit just spreads like a social contagion on twitter where people have been groomed into being autistic and so it spreads. But that spread gets blocked by the language barrier and so it's mostly contained within English speaking circles. Plus with most people, especially in Japan, looking down on Americans it's no surprise that they don't care about their moralfagging. I will say that the gyrupeon person is Polish or something though, just to be thorough.

Just let my yellow nigga enjoy the vegetable juice woman; he did nothing wrong and made something pretty.
 
more tumblrina ''artist'' whining about how ai art is EVUL
View attachment 6367387 View attachment 6367390
it's funny they use a/i like it's voldemort's name. Also nevermind, they aren't against ''borrowing without permission,'' they're just against ai... bunch of hypocrites
> Using AI for yourself with no one ever knowing about it still somehow encourages the use of it
These fuckers all believe in spooky action at a distance and I fucking hate it
 

While watching web cartoons at midnight, I gave this a watch and unfortunately it's perfect for this thread. The story is about a witch named Sam having to deal with an ambitious elf(?) named El who wants to learn magic. After Sam shows El that magic in this world is just chemistry/cooking (and after a very out-of-nowhere trauma dump from the two), she lets him help with her spell.

The spell she was commissioned to cast was a spell to grow pumpkins, and with El's help the pumpkins turn blue. It's revealed that the person who commissioned her for the spell had a robot recording her movements and copying what she put into the spell. Sam reacts by ripping off the ending of a Rick and Morty episode and giving the robot a bomb recipe. She returns home and, though she knows that the robot wasn't the end of it, decides to figure out how El made the pumpkins blue.

The big problem with the message is that the antagonist uses automation to copy her spell, not A.I. If this was about A.I, the antagonist should have made the cauldron the thing that steals her spell and had it work like actual A.I models and language learning: a secret distiller under it that extracts a concentrated reserve of the spell, which the antagonist then makes synthetic slop copies of, or breaks down and uses for other spells or some shit. (Actually, why couldn't the A.I broseph use the free public domain spell book PDF to train the robot? Why not use a copy/mimic spell instead of a robot? Why the fuck does he even show and tell Sam about the robot instead of just keeping it a secret?!)

It could have been a universal allegory for other disenfranchised skills and pillars of civilization: how the carpenter and blacksmith were replaced with government contractors, or the butcher with the USDA; how most cars now look the same and are full of plastic, computer wiring and bluetooth impaled in the engine; pure cane sugar replaced with high fructose corn syrup, flour and wheat with corn starch, beef tallow with trans fats, cheese with pasteurized processed sandwich slices, etc.

The creator of this cartoon could have made the end of magic the end of Agriculture, Culinary arts, Infrastructure, Forging and Welding, Auto-mechanics, the uppercase Internet, but no, all that matters to the creator is how someone can type "silly cartoon mascot for Heroin" and get a picture of a silly cartoon mascot for Heroin, and how that is the worst thing since SOPA that will kill the Internet and information and Culture.
 