Class Action Filed Against Stability AI, Midjourney and DeviantArt for DMCA Violations, Right of Publicity Violations, Unlawful Competition, TOS Breach

---------------------------------------------------------------------------
SAN FRANCISCO, Jan. 14, 2023 /PRNewswire/ -- Stability AI Ltd.; Stability AI, Inc.; Midjourney Inc.; and DeviantArt, Inc. have created products that infringe the rights of artists and other creative individuals under the guise of alleged "artificial intelligence." The Joseph Saveri Law Firm, LLP—a leading class action firm with offices in California and New York—along with Matthew Butterick, and Lockridge, Grindal, Nauen P.L.L.P. have filed a lawsuit in the United States District Court for the Northern District of California on behalf of a class of plaintiffs seeking compensation for damages caused by Stability AI, DeviantArt, and Midjourney, and an injunction to prevent future harms. The lawsuit alleges direct copyright infringement, vicarious copyright infringement related to forgeries, violations of the Digital Millennium Copyright Act (DMCA), violation of class members' rights of publicity, breach of contract related to the DeviantArt Terms of Service, and various violations of California's unfair competition laws.

As alleged in the Complaint, Stable Diffusion is an artificial intelligence product used by Stability AI, DeviantArt, and Midjourney in their AI image products. It was trained on billions of copyrighted images contained in the LAION-5B dataset, which were downloaded and used without compensation or consent from the artists. If Stable Diffusion and similar products are allowed to continue to operate as they do now, the foreseeable result is they will replace the very artists whose stolen works power these AI products with whom they are competing. AI image products are not just an infringement of artists' rights; whether they aim to or not, these products will eliminate "artist" as a viable career path. In addition to obtaining redress for the wrongful conduct, this lawsuit seeks to prevent that outcome and ensure these products follow the same rules as any other new technology that involves the use of massive amounts of intellectual property. If streaming music can be accomplished within the law, so can AI products.

"As burgeoning technology continues to change every aspect of the modern world, it's critical that we recognize and protect the rights of artists against unlawful theft and fraud," said Joseph Saveri, founder of the Joseph Saveri Law Firm, LLP. He continued, "This case represents a larger fight for preserving ownership rights for all artists and other creators."

"AI needs to be fair and ethical for everyone," said lawyer/programmer Matthew Butterick. "But Stability AI, Midjourney, and DeviantArt are appropriating the work of thousands of artists with no consent, no credit, and no compensation. As a lawyer who is also a longtime member of the visual-arts community, it's a pleasure to stand up on behalf of fellow artists and continue this essential conversation about how we the people want AI to coexist with human culture and creativity."

Since its founding in 2000, DeviantArt has grown to be a haven for artists of all stripes. A core aspect of participating in the DeviantArt community for artists is the practice of sharing digital images of their artwork. Today, DeviantArt bills itself as "the world's largest art community," hosting millions of images. At the same time, it offers DreamUp, a product that unlawfully infringes on the rights of its own art community. To add insult to injury, a large portion of the training data for Stable Diffusion—which powers DreamUp—was made up of images scraped from DeviantArt without permission from the artists who posted them.

For more information, please see our case page www.saverilawfirm.com/ai-art-generators-copyright-litigation and our case website stablediffusionlitigation.com.

ABOUT THE FIRMS

The Joseph Saveri Law Firm is one of the country's most acclaimed, successful boutique firms, specializing in antitrust, class actions, and complex litigation on behalf of national and international consumers, purchasers, and employees across diverse industries. For further information on our practice and accomplishments on behalf of our clients, please visit www.saverilawfirm.com or call us at (415) 423-1799.
Lockridge Grindal Nauen P.L.L.P. has served clients throughout the Midwest and in Washington, D.C. for more than 40 years. It has extensive experience in local, state, and federal government relations as well as antitrust, business, campaign finance, consumer, data breach, governmental, health care, employment, environmental, ERISA, intellectual property, real estate, securities, and tribal law litigation. For further information, please visit https://www.locklaw.com/.

ABOUT MATTHEW BUTTERICK

Matthew Butterick is a lawyer, programmer, designer, and writer. He has been professionally involved with open-source software since 1998. His books Typography for Lawyers (typographyforlawyers.com) and Practical Typography (practicaltypography.com) are relied on daily by lawyers and writers worldwide. For more information, please visit https://matthewbutterick.com.
---------------------------------------------------------------------------
 
You keep saying the word, but you keep denying my point.

State your point clearly then. It sounds to me like you are trying to claim that if you take some data provided to you under license and apply a transformation to that data, you are no longer bound by the license. This is generally false. I don't actually know under what conditions this might be true. Is this not your argument? Do you know of any established examples where transforming data absolves you of complying with the T&C?

The "fancy nonlinear function that's fitted to a bunch of data points" is an inferior mathematical representation of natural human creative processes-- that's my point.

It's not how your brain works at all. Not even a little bit. There's a reasonably thorough description of how things work and the case they're making at the lawsuit's website:

What the lawsuit boils down to is arguing that

  1. Stable Diffusion stores and distributes copyrighted works in a highly compressed format (the "latent images")
  2. It creates new images by blending the compressed format, then applying antidiffusion operators to the blend to reconstruct the new image
  3. The output is therefore in every sense a derivative work of the inputs and a violation of copyright.
There's also a claim that DeviantArt violated its privacy policy by providing images in bulk to Stable Diffusion.
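To make that framing concrete, here is a toy Python sketch of the three steps as the complaint describes them. To be clear, this is purely illustrative and not how Stable Diffusion is actually built (it trains a denoising network on noise-corrupted latents from a learned autoencoder and does not keep a stored latent per training image); every function, array, and name below is made up for the example.

[CODE=python]
# Toy sketch of the mechanism the complaint alleges:
# (1) reduce images to small "latent" codes, (2) blend the codes, (3) reconstruct.
import numpy as np

def encode(img: np.ndarray, size: int = 8) -> np.ndarray:
    """Lossy 'compression': average-pool a square grayscale image down to size x size."""
    h, w = img.shape
    return img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

def decode(latent: np.ndarray, out: int = 64) -> np.ndarray:
    """Crude reconstruction: nearest-neighbour upsample back to out x out."""
    scale = out // latent.shape[0]
    return np.kron(latent, np.ones((scale, scale)))

rng = np.random.default_rng(0)
training_images = [rng.random((64, 64)) for _ in range(3)]   # stand-ins for scraped artworks
latents = [encode(img) for img in training_images]           # step 1: the "latent images"

weights = np.array([0.5, 0.3, 0.2])                          # step 2: blend the latents
blended = sum(w * z for w, z in zip(weights, latents))

new_image = decode(blended)                                   # step 3: reconstruct an output
print(new_image.shape)                                        # (64, 64)
[/CODE]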
 
This is why you look at their portfolios and arrange a way to not pay in full before you see a sketch of what they're intending.
I'm not a rookie; I know how to commission artists. I did check his portfolio, but Stable Diffusion didn't exist in early 2021, and I paid him 33% after the rough sketch and 33% after the flat colors. The art was good. Not amazing, but good. Then he vanished, came back a year and a half later saying he had lost the picture, and after I pestered him for two months he said "I did this replacement." Said replacement was the AI-generated picture.
 
Yeah, these people should fuck off.

Rembrandt, 1655 / Soutine, 1924


Velasquez 1653 / Picasso 1957

Van Gogh 1890 / Dore 1872

All those tantrums are pure jealousy. They're mad that they aren't as unique and talented as they think they are. The big masters of art throughout history did things nobody else had done before, even if they copied others. Picasso and Van Gogh added their own style, which was new and original in its own right and became something new. That's why they are copied and still inspire others to this day.

These modern "artists" will never be that groundbreaking, and they resent it when someone with equal talent shows up. They are also not very original. Or their work is basically fanart, whose copyright status is already debatable in itself. There was a good thread about how good artists are still around painting original stuff.

These people aren't the ones complaining about AI. Gee, wonder why.
I'm going to have to agree with this as well, in my humble opinion.

I do not think that there will be much to this. The reason why is WHERE the filing took place.
They filed in CA to get the best chance at a favorable ruling, and if it's appealed it will go to the 9th Circuit, which is also liberal. Then it will be shot up to the Supreme Court, which will most likely slap it down.

Transformative use and derivative works have already been hashed out in court. Posting your artwork on the Internet is something every artist should think about. Once it is there, people pretty much have free rein to make copies of that content... And as long as it is not a directly lifted, complete copy of the original claimed as their own work, it should be fine. I just do not see that they have much to stand on.

There is so much here that, again, you need an IP attorney to talk to, and I do that as well.

In my case I do not fully post my IPs online. From a business standpoint I ALWAYS ask my content providers' permission to show their content to others. And I always tell my providers NOT to show the artwork or content created for me unless stated otherwise and/or several years have passed.

I understand both sides of the argument. But considering how BAD the mindset of artists in general is in the US, I currently do not bother with them because of their GOD DAMNED ENTITLEMENT.

I can not hire them and I have tried... so... MY attitude towards that woke community of REEEE is... GO... FUCK... YOURSELVES...

Maybe if you did not have such a fucking fat head and kept to a goddamned schedule, this would not be happening.

AND FUCKING PLEASE... YOU MAY BE GUD WITH YOUR FUCKING WACCOOM TABLET, BUT I'LL HIRE A PERSON WITH A PENCIL/INKS/PAINTS OVER YOU EVERY SINGLE TIME.

I know how good artists can be... Because I used to be one of them.
 
State your point clearly then.
By your own description, it's literally "transformative".

It sounds to me like you are trying to claim that if you take some data provided to you under license and apply a transformation to that data, you are no longer bound by the license.
Can you clearly describe which data was used in any generated image? Likely not. Similarly, an individual can't immediately and completely account for the weights of their various influences when creating art of their own. This is different from buying an image and changing it only to the point where it can still be recognized as having started from the image you bought.

What the lawsuit boils down to is arguing that

  1. Stable Diffusion stores and distributes copyrighted works in a highly compressed format (the "latent images")
  2. It creates new images by blending the compressed format, then applying antidiffusion operators to the blend to reconstruct the new image
  3. The output is therefore in every sense a derivative work of the inputs and a violation of copyright.
You have literally described transformation covered by fair use. It autonomously transforms arbitrary subsets of image data into various other images such that they're in a different "state" than what they were originally and individually.

Not unlike what humans do given that we produce some product of our experiences through ambiguous means.

Shoot, I know SD didn't take in any pictures of hands if it's making the abominations you showed me in that picture of Karl Marx you commissioned. Maybe it's fair use not just because it's fundamentally transformative, but because it's satirical parody meant to send the message that communists are barely not human.
 
As much as I want to say fuck off to SD and DA... (already giving in to censors and corporate prudish hysteria) this is pretty fucked up and dangerous. Worse yet, I fully see the MSM and big business going right along with it, since they have already shown signs of viewing it as a "dangerous" avenue for losing control over content.

As for the claims themselves here... utter nonsense! Using references to train AI is no different from artists using and recreating other people's art and styles as inspiration. Unless the models are actually producing 1:1 copies of other artists' art, I don't see any basis to this, unless we start "protecting" art styles now too.
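For what it's worth, the 1:1-copy question can at least be checked mechanically. Here's a rough Python sketch using a simple difference hash to flag near-pixel-level duplicates; the synthetic arrays and the threshold idea are made up for the example, and this is obviously not a legal test of infringement.

[CODE=python]
# Difference-hash sketch: near-identical images give near-identical fingerprints,
# while merely similar styles generally do not.
import numpy as np

def dhash(img: np.ndarray, size: int = 8) -> np.ndarray:
    """Downscale a grayscale array, then compare each cell to its right-hand neighbour."""
    h, w = img.shape
    cropped = img[: (h // size) * size, : (w // (size + 1)) * (size + 1)]
    small = cropped.reshape(size, h // size, size + 1, w // (size + 1)).mean(axis=(1, 3))
    return small[:, 1:] > small[:, :-1]          # size x size boolean fingerprint

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(3)
original = rng.random((64, 64))                               # stand-in for a known artwork
near_copy = np.clip(original + 0.01 * rng.random((64, 64)), 0, 1)  # slightly altered copy
unrelated = rng.random((64, 64))                              # a different image

print(hamming(dhash(original), dhash(near_copy)))   # small distance: flags a near-duplicate
print(hamming(dhash(original), dhash(unrelated)))   # large distance: not a copy
[/CODE]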

Also a reminder that the current CR system needs to be burned to the ground! CR was never meant as a long-term or even medium-term protection for even specific works (in-detail stories, etc.)! We have Disney and modern big media/business to thank for its current (very modern) perversion into a de facto permanent right to exclusivity. The ultimate irony being that Disney made its name (and money) off of other people's stories no longer protected by CR (protections those same stories would STILL be under with current CR laws).



Lol. We're at the point in clown world where math is going to become illegal, and along with it fanart and hentai, unless you're one of the 3 dudes in the world who actually draw OC.

I don't expect the case to be ruled in these faggots' favor, but the aforementioned outcome is now possible, and when it doesn't go their way I expect these fags to somehow end up with a large-ass war chest to launch endless legal harassment campaigns to fuck with these projects.

Yup..

Those are my fears as well. What a shitshow!
 
The funny thing I find about the whole hoopla is that image generation is just a toy these researchers released to help fund continued research. I doubt any of them really care about the ethics and morals these "artists" screech about. They have bigger goals in their creation of a human-like sentience in AI. I'm personally of the mind that digital is the next step in human evolution in some capacity, no matter how loud the yelling is today; probably not in any of the older farmers' lifetimes, but maybe in some of them.

Side note: a relevant furry image meme, from the community most impacted:

[Attached image: AI and Furry Artists Be Like.png]
 
An AI can produce copies of items in the training set, too, which is why GitHub is also being sued over Copilot.
For images, they generally don't reproduce any exact matching sets of pixels from the original. Code is one dimension below images and thus recognizable patterns crop up a lot more frequently. Though in that same way, two different programmers are much more likely to independently arrive at near-identical code than two artists are to independently produce identical images.
Basing models on other people's data for commercial purposes is a legal minefield even when your model is just boring bullshit like a linear regression fit. If I count the butterflies I see each day during the month of June, and I publish it, and I specify that it's for non-commercial use only, you'd better not use my butterfly numbers in literally anything you do that is commercial at all, from publishing a book of butterfly counts, to selling consulting services to help autistic retards predict butterfly numbers, to developing the Butterfly Predictor 9000 at your job. There is a wealth of information on the internet, but you have to be really careful about how you use it.
Data, facts, and figures are absolutely not copyrightable in the US. A particular representation of data sometimes is, but the data itself never is. (And even then, if it's just a table of numbers, it's likely to be ruled that the creative threshold hasn't been reached and that two different people probably would've come to basically the same representation.)

This is actually a big deal and big business for things like sports statistics. Algorithmic gambling needs high quality sources of data. So for example, if you wanted to create an algorithm to bet on baseball, the MLB's player statistics cannot be copyrighted. They're just numbers. So instead there's all kinds of little services that will serve up MLB stats. They cannot defend their numbers through copyright, so they have to resort to technical measures, licenses and sneaky shit like poisoning torrents.

The problem with licensing is that they can only go after the individual they licensed with. If someone breaks the license and tosses the data up on a torrent site, then the service provider can't do shit beyond make the licensee miserable.

With your butterfly example, that might be copyrightable if you yourself put the effort into counting the butterflies.

(Of course none of this has anything to do with AI art, I'm just blabbering.)
What the lawsuit boils down to is arguing that

  1. Stable Diffusion stores and distributes copyrighted works in a highly compressed format (the "latent images")
  2. It creates new images by blending the compressed format, then applying antidiffusion operators to the blend to reconstruct the new image
  3. The output is therefore in every sense a derivative work of the inputs and a violation of copyright.
Problem with the alleged compression is that you can't uncompress it. It's a one-way function.

You can get similar images out, but none are the same (nor even fragments). Depending on the model in question, of course.

In order to have a copyright infringement, they need to be able to say "person A's work B infringes person C's work D". With any big AI model, the connection between work D and work B is hopelessly obscured.

Perhaps that's a sneaky trick or is unfair, but under current copyright law, it is what it is.
 
For images, they generally don't reproduce any exact matching sets of pixels from the original. Code is one dimension below images and thus recognizable patterns crop up a lot more frequently. Though in that same way, two different programmers are much more likely to independently arrive at near-identical code than two artists are to independently produce identical images.

"They generally don't" isn't the same as "they can't." In the case of Github's Copilot, it was reproducing comments and variable names that made it easy to identify what was being copied, and prove that GitHub violated the T&Cs of the code it used to train its AI. I've seen an AI that just used a nearest-neighbor function on its training set. So yes, it produced straight-up copies of the data it was trained on. It is entirely possible for an AI to reproduce copies. This is because AIs do not actually learn like humans do, they can only apply operators to the training set. That training set can be large or small, and the operators can be sophisticated or simple, but it's why analogizing things to humans studying an artist's technique are completely invalid, and why this case is going to court.

Problem with the alleged compression is that you can't uncompress it. It's a one-way function.

The entire algorithm wouldn't work if the decompression step couldn't function.
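For reference, the decode step is a concrete, runnable piece of code: Stable Diffusion's latents go through a variational autoencoder in both directions. Assuming the Hugging Face diffusers API and the publicly released VAE weights (the checkpoint name below is my assumption), a round trip looks roughly like this:

[CODE=python]
import torch
from diffusers import AutoencoderKL

# Load a publicly released Stable Diffusion VAE (checkpoint name is an assumption).
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

image = torch.rand(1, 3, 512, 512) * 2 - 1              # dummy RGB image scaled to [-1, 1]

with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()    # "compression": 1 x 4 x 64 x 64
    decoded = vae.decode(latents).sample                # "decompression": 1 x 3 x 512 x 512

print(latents.shape, decoded.shape)
# For real photos or artwork the decoded tensor is a close but lossy reconstruction;
# the dummy noise input here just shows that the decode step exists and runs.
[/CODE]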

You can get similar images out, but none are the same (nor even fragments). Depending on the model in question, of course.

You're describing lossy compression, which is not a new thing. "I can't get the original image back, pixel per pixel, so the compressed format isn't covered by copyright," is not going to fly.
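To illustrate, here's a toy example using truncated SVD as a stand-in for any lossy codec: you can never recover the exact original pixels, but the reconstruction is plainly derived from the original. The "image" is synthetic and the code is just a sketch of the concept.

[CODE=python]
# Lossy compression in miniature: keep only the top-k singular values of an image.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 64)
original = np.outer(np.sin(4 * x), np.cos(3 * x)) + 0.05 * rng.random((64, 64))  # synthetic "image"

U, s, Vt = np.linalg.svd(original, full_matrices=False)
k = 8                                              # keep 8 of 64 singular values: much smaller, lossy
compressed = (U[:, :k], s[:k], Vt[:k, :])

reconstruction = compressed[0] @ np.diag(compressed[1]) @ compressed[2]

rel_err = np.linalg.norm(reconstruction - original) / np.linalg.norm(original)
print(np.allclose(reconstruction, original))       # False: the exact pixels are gone for good
print(round(float(rel_err), 3))                    # ...but the reconstruction is only a few percent off
[/CODE]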



In order to have a copyright infringement, they need to be able to say "person A's work B infringes person C's work D". With any big AI model, the connection between work D and work B is hopelessly obscured.

That's what's being tested in court right now. I am not a lawyer, and you are not a judge. Stability AI doesn't have to prove to you or me that redistributing latent images isn't a copyright violation because they are somehow so conceptually different from lossy compression that copyright doesn't apply. Same with proving to you or me that if you blend two images it's a copyright violation, but if you blend 2,000 it's fair game. They have to argue that in court. If the case goes to trial, that alone shows that none of your arguments that this claim is meritless are in themselves convincing.
 
"They generally don't" isn't the same as "they can't."
Based on what I've seen, this particular model can't.
In the case of Github's Copilot, it was reproducing comments and variable names that made it easy to identify what was being copied.
Humans often produce very similar variable names and comments, with similar motivations.
Yep. If I release my data to you under license, you can't turn around and break the T&Cs.
I can't, but if I pass the data on further to someone else, whoever I'm dealing with would not be encumbered by the T&C. Data without a creative element cannot be copyrighted.
You're describing lossy compression, which is not a new thing. "I can't get the original image back, pixel per pixel, so the compressed format isn't covered by copyright," is not going to fly.
They're different images, is what I'm arguing. Legally unrelated images.
That's what's being tested in court right now. I am not a lawyer, and you are not a judge. Stability AI doesn't have to prove to you or me that redistributing latent images isn't a copyright violation. Same with proving to you or me that if you blend two images, that's a copyright violation, but if you blend 2000, it's fair game. They have to argue that in court. If the case goes to trial, that alone shows that none of your arguments that this claim is meritless are very convincing.
If I were to testify as an expert, I would testify that current copyright law doesn't apply to AI art models that don't reproduce the same sequences of pixels.
 
In order to have a copyright infringement, they need to be able to say "person A's work B infringes person C's work D". With any big AI model, the connection between work D and work B is hopelessly obscured.
I would like to see a 15,000-artist DA/Tumblr battle royale to finally determine who owns the rights to utterly generic anime style #7 and the "cool landing after jumping from a building" pose.
 