Culture | This artist is dominating AI-generated art. And he’s not happy about it. - Greg Rutkowski is a more popular prompt than Picasso.

MIT Technology Review / Archive

This artist is dominating AI-generated art. And he’s not happy about it.

Greg Rutkowski is a more popular prompt than Picasso.

Those cool AI-generated images you’ve seen across the internet? There’s a good chance they are based on the works of Greg Rutkowski.

Rutkowski is a Polish digital artist who uses classical painting styles to create dreamy fantasy landscapes. He has made illustrations for games such as Sony’s Horizon Forbidden West, Ubisoft’s Anno, Dungeons & Dragons, and Magic: The Gathering. And he’s become a sudden hit in the new world of text-to-image AI generation.

His distinctive style is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which was launched late last month. The tool, along with other popular image-generation AI models, allows anyone to create impressive images based on text prompts.

For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and the system will produce something that looks not a million miles away from works in Rutkowski’s style.
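
For anyone curious what that involves in practice, here is a minimal sketch (not part of the article) of running such a prompt through the open-source release; the Hugging Face diffusers library, the CompVis/stable-diffusion-v1-4 checkpoint, and the sampler settings are all assumptions on my part:

```python
# Minimal sketch, not from the article: running the example prompt through the
# open-source Stable Diffusion weights with Hugging Face's diffusers library.
# The library, checkpoint name, and settings here are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",      # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("Wizard with sword and a glowing orb of magic fire "
          "fights a fierce dragon, Greg Rutkowski")
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("wizard_vs_dragon.png")
```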

But these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. As a result, they are raising tricky questions about ethics and copyright. And artists like Rutkowski have had enough.

According to the website Lexica, which tracks over 10 million images and prompts generated by Stable Diffusion, Rutkowski’s name has been used as a prompt around 93,000 times. Some of the world’s most famous artists, such as Michelangelo, Pablo Picasso, and Leonardo da Vinci, turned up in around 2,000 prompts each or fewer. Rutkowski’s name also features as a prompt thousands of times in the Discord of another text-to-image generator, Midjourney.
Rutkowski was initially surprised but thought it might be a good way to reach new audiences. Then he tried searching for his name to see if a piece he had worked on had been published. The online search brought back work that had his name attached to it but wasn’t his.

“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”

Stability.AI, the company that built Stable Diffusion, trained the model on the LAION-5B data set, which was compiled by the German nonprofit LAION. LAION put the data set together and narrowed it down by filtering out watermarked images and those that were not aesthetic, such as images of logos, says Andy Baio, a technologist and writer who downloaded and analyzed some of Stable Diffusion’s data. Baio analyzed 12 million of the 600 million images used to train the model and found that a large chunk of them come from third-party websites such as Pinterest and art shopping sites such as Fine Art America.
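
As a rough illustration of the kind of filtering Baio describes (this is my own sketch, not LAION's actual pipeline; the file name and column names are assumptions), the metadata can be thinned down before any images are ever downloaded:

```python
# Illustrative sketch only -- not LAION's real code. It shows filtering a
# LAION-style metadata shard by predicted watermark probability and an
# aesthetic score; column names (pwatermark, aesthetic, URL, TEXT) are assumed.
import pandas as pd

meta = pd.read_parquet("laion_subset.parquet")   # hypothetical metadata shard

filtered = meta[
    (meta["pwatermark"] < 0.5)       # drop images likely to carry a watermark
    & (meta["aesthetic"] >= 5.0)     # keep images scored as sufficiently "aesthetic"
][["URL", "TEXT"]]                   # image URL + alt-text caption pairs

filtered.to_parquet("laion_filtered.parquet")
```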

Many of Rutkowski’s artworks have been scraped from ArtStation, a website where lots of artists upload their online portfolios. His popularity as an AI prompt stems from a number of factors.

First, his fantastical and ethereal style looks very cool. He is also prolific, and many of his illustrations are available online in high enough quality that there are plenty of examples to choose from. An early text-to-image generator called Disco Diffusion offered Rutkowski as an example prompt.
Rutkowski has also added alt text in English when uploading his work online. These descriptions of the images are useful for people with visual impairments who use screen reader software, and they help search engines rank the images as well. This also makes them easy to scrape, and it gives the AI model captions that tie his images to the relevant text prompts.
Stability.AI released the model into the wild for free and allows anyone to use it for commercial or noncommercial purposes, although Tom Mason, the chief technology officer of Stability.AI, says Stable Diffusion’s license agreement explicitly bans people from using the model or its derivatives in a way that breaks any laws or regulations. This places the onus on the users.

Some artists may have been harmed in the process

Other artists besides Rutkowski have been surprised by the apparent popularity of their work in text-to-image generators—and some are now fighting back. Karla Ortiz, an illustrator based in San Francisco who found her work in Stable Diffusion’s data set, has been raising awareness about the issues around AI art and copyright.

Artists say they risk losing income as people start using AI-generated images based on copyrighted material for commercial purposes. But it’s also a lot more personal, Ortiz says, arguing that because art is so closely linked to a person, it could raise data protection and privacy problems.

“There is a coalition growing within artist industries to figure out how to tackle or mitigate this,” says Ortiz. The group is in its early days of mobilization, which could involve pushing for new policies or regulation.

One suggestion is that AI models could be trained on images in the public domain, and AI companies could forge partnerships with museums and artists, Ortiz says.

“It’s not just artists … It’s photographers, models, actors and actresses, directors, cinematographers,” she says. “Any sort of visual professional is having to deal with this particular question right now.”

Currently, artists aren’t given the choice to opt in to the database or to have their work removed. Carolyn Henderson, the manager for her artist husband, Steve Henderson, whose work was also in the database, said she had emailed Stability.AI to ask for her husband’s work to be removed, but the request was “neither acknowledged nor answered.”




“Open-source AI is a tremendous innovation, and we appreciate that there are open questions and differing legal opinions. We expect them to be resolved over time, as AI becomes more ubiquitous and different groups come to a consensus as to how to balance individual rights and essential AI/ML research,” says Stability.AI’s Mason. “We strive to find the balance between innovating and helping the community.”

Mason encourages any artists who don’t want their works in the data set to contact LAION, which is an independent entity from the startup. LAION did not immediately respond to a request for comment.

Berlin-based artists Holly Herndon and Mat Dryhurst are working on tools to help artists opt out of being in training data sets. They launched a site called Have I Been Trained, which lets artists search to see whether their work is among the 5.8 billion images in the data set that was used to train Stable Diffusion and Midjourney. Some online art communities, such as Newgrounds, are already taking a stand and have explicitly banned AI-generated images.

An industry initiative called the Content Authenticity Initiative, which includes the likes of Adobe, Nikon, and the New York Times, is developing an open standard that would create a sort of watermark on digital content to prove its authenticity. It could help fight disinformation as well as ensure that digital creators get proper attribution.

“It could also be a way in which creators or IP holders can assert ownership over media that belongs to them or synthesized media that's been created with something that belongs to them,” says Nina Schick, an expert on deepfakes and synthetic media.


Pay-per-play

AI-generated art poses tricky legal questions. In the UK, where Stability.AI is based, scraping images from the internet without the artist’s consent to train an AI tool could be a copyright infringement, says Gill Dennis, a lawyer at the firm Pinsent Masons. Copyrighted works can be used to train an AI under “fair use,” but only for noncommercial purposes. While Stable Diffusion is free to use, Stability.AI also sells premium access to the model through a platform called DreamStudio.

The UK, which hopes to boost domestic AI development, wants to change laws to give AI developers greater access to copyrighted data. Under these changes, developers would be able to scrape works protected by copyright to train their AI systems for both commercial and noncommercial purposes.

While artists and other rights holders would not be able to opt out of this regime, they would be able to choose where they make their works available. The art community could end up moving into a pay-per-play or subscription model like the one used in the film and music industries.

“The risk, of course, is that rights holders simply refuse to make their works available, which would undermine the very reason for extending fair use in the AI development space in the first place,” says Dennis.

In the US, LinkedIn lost a case in an appeals court, which ruled last spring that scraping publicly available data from sources on the internet is not a violation of the Computer Fraud and Abuse Act. Google also won a case against authors who objected to the company’s scraping their copyrighted works for Google Books.

Rutkowski says he doesn’t blame people who use his name as a prompt. For them, “it’s a cool experiment,” he says. “But for me and many other artists, it’s starting to look like a threat to our careers.”

by Melissa Heikkilä


Attached: rut01.png, rut02.png, rut03.png, rut04.png, rut05.png
 
An industry initiative called the Content Authenticity Initiative, which includes the likes of Adobe, Nikon, and the New York Times, is developing an open standard that would create a sort of watermark on digital content to prove its authenticity. It could help fight disinformation as well as ensure that digital creators get proper attribution.
Fuck all that.

The only way to fight this long-term is to not exist. Some will have success in the courts against big companies that are big targets, but amateurs will run with it, with nothing to take.

If you share your art online, the art and the style will be copied. If you do acting or voice acting, your appearance and voice can be copied to make a digital twin. The Day the Clown Cried will be completed with computer assistance. Your likeness will be inserted into pornography. Full-length novels in the style of Stephen King at different points in his career will be generated and thrown away in milliseconds.

Your creative job won't necessarily be made obsolete, but when people want Burger King quality entertainment, a computer will give it to them and self-insert them as a Mary Sue if they want.
 
Fuck all that.

The only way to fight this long-term is to not exist. Some will have success in the courts against big companies that are big targets, but amateurs will run with it, with nothing to take.

If you share your art online, the art and the style will be copied. If you do acting or voice acting, your appearance and voice can be copied to make a digital twin. The Day the Clown Cried will be completed with computer assistance. Your likeness will be inserted into pornography. Full-length novels in the style of Stephen King at different points in his career will be generated and thrown away in milliseconds.

Your creative job won't necessarily be made obsolete, but when people want Burger King quality entertainment, a computer will give it to them and self-insert them as a Mary Sue if they want.
I tried explaining this to people a decade ago. I told them that by 2040, it would be possible for someone to ask Siri to make them a video game from scratch, or tell them a story, or disgorge an animated movie for them, and it would be done within moments. "Information goods" like entertainment media would be made virtually free by their sheer abundance, completely eliminating the jobs of thousands of writers, artists, programmers, filmmakers, and so on. I argued that the destruction of people's incomes by automation would lead to a UBI and the servitization of things that used to be people's private property.


They went straight to denial, arguing that AI would never take creative jobs because they're too difficult. Everyone had this fantasy in their head that menial work would be automated away to the point where most people would while away the hours in art studios and makerspaces under Fully Automated Luxury Communism, like Star Trek.

In reality, the exact opposite is the case. It's more difficult to build a robot that has the dexterity and agility to work a skilled trade than it is to build a data center full of GPGPU blade racks running machine learning algorithms that reverse-engineer text and images to the point where they can generate them from scratch.

 
An industry initiative called the Content Authenticity Initiative, which includes the likes of Adobe, Nikon, and the New York Times, is developing an open standard that would create a sort of watermark on digital content to prove its authenticity. It could help fight disinformation as well as ensure that digital creators get proper attribution.
Gotta make sure whoever typed those 5 words into the prompt is duly rewarded.

Artists will soon be paid what they are worth. Zero. Do something productive you lazy bums.
 
Gotta make sure whoever typed those 5 words into the prompt is duly rewarded.

Artists will soon be paid what they are worth. Zero. Do something productive you lazy bums.
People don't get it. They think that the rise of automation and the "abolition of work" will allow them to lounge around doing nothing, absorbing the benefits of the system. It's far more likely that 90% of them will be exterminated under such an arrangement to allow for a higher living standard for the other 10%.

 
But these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. As a result, they are raising tricky questions about ethics and copyright. And artists like Rutkowski have had enough.

you cannot fight it. you will try, but you will lose. the machine will overtake you. the weakness of your flesh betrays you
 
An industry initiative called the Content Authenticity Initiative, which includes the likes of Adobe, Nikon, and the New York Times, is developing an open standard that would create a sort of watermark on digital content to prove its authenticity. It could help fight disinformation as well as ensure that digital creators get proper attribution.
The best way to fight disinformation is to just not be a total retard, but then people would have to not be niggercattle anymore and we can't have that either.

I honestly don't see what the appeal is for these people and their sudden obsession with "the truth", unless the motivation is to set up a framework where the "approved parties" can lie more easily and more often through "official channels", where they can claim they're the only source of truth and anyone arguing with them is "disinformation". I'd be more worried if they weren't so fucking incompetent, but I'm sure there's some guys in New York who still think Hunter Biden's laptop and the pictures of him smoking crack while sexually abusing his niece are some sort of figment of the right wing imagination.

Anyone who has shown a sudden interest in "combatting misinformation" should be viewed with the greatest suspicion--when I was growing up, it was all "think for yourself" and "if Dan jumped off a bridge, would you?". Now they're basically one step away from sending the White House Press Secretary to assure the public of the safety and social responsibility of yeeting yourself off a bridge to save Democracy.

Funny to think this all started with some Dems accusing Trump of peddling "fake news" before he immediately turned it around on them and said "no, YOU'RE the fake news!" and forced them to go on a multi-year campaign to assure the public the news was telling them the truth lmao.

People have been so good at Photoshop for years now; AI image composition doesn't really pose any greater threat than that did.
 
Giant panda making a cheese sandwich:
cheesepanda.png

Where's my gallery exhibit? :mad:

Anyway, there's no stopping this now. You can't put the genie back in the bottle. It's a cool thing to mess around with. I think we're a long ways off from anything that would put artists out of work. Plus the program cannot create or imagine like a human mind can. It needs prompts and images to work with. There's no ability at spontaneous creation.

People don't get it. They think that the rise of automation and the "abolition of work" will allow them to lounge around doing nothing, absorbing the benefits of the system. It's far more likely that 90% of them will be exterminated under such an arrangement to allow for a higher living standard for the other 10%.


That's pretty much the plot of Phantasy Star 2. And if you know the plot twist you shouldn't even be surprised.
 
But these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists
Without proper attribution? In the same article that claims Greg Rutkowski's name has been used in 93,000 prompts they claim that these images are used without proper attribution? You need to know the artist's name for it to work.

How many people knew the name Greg Rutkowski before this? Does Greg get googled 93,000 times by name on an average month? No. People probably google "dnd fantasy art" or something equally vague and his art shows up, probably stolen on some other site without his name, and people see it and go "Yes that was some good Fantasy Art. I have no idea who made it nor do I care."

This is free publicity. At least he's still alive and can reap the benefits of this. It's not like Wizards of the Coast is going to replace him with the AI now. Like they see the AI art and tell Greg "You know you do great work, but this AI does it for free. We'll take the weird wizards with wonky fingers. No one will notice."
The people using stable diffusion sure as hell weren't going to pay thousands of dollars for one of his pieces.

This is all just a bunch of whining.
 
Without proper attribution? In the same article that claims Greg Rutkowski's name has been used in 93,000 prompts they claim that these images are used without proper attribution? You need to know the artist's name for it to work.

How many people knew the name Greg Rutkowski before this? Does Greg get googled 93,000 times by name on an average month? No. People probably google "dnd fantasy art" or something equally vague and his art shows up, probably stolen on some other site without his name, and people see it and go "Yes that was some good Fantasy Art. I have no idea who made it nor do I care."

This is free publicity. At least he's still alive and can reap the benefits of this. It's not like Wizards of the Coast is going to replace him with the AI now. Like they see the AI art and tell Greg "You know you do great work, but this AI does it for free. We'll take the weird wizards with wonky fingers. No one will notice."
The people using stable diffusion sure as hell weren't going to pay thousands of dollars for one of his pieces.

This is all just a bunch of whining.
What is the difference between an AI imitating a human, and a human imitating another human?

It is entirely possible for a human being to spend all day tracing Greg Rutkowski illustrations and studying his style until they can copy every aspect of it freehand, with no references or other aids.

Deep Learning AIs like GPT-3 and Stable Diffusion are doing exactly the same thing. When you train one of these AIs, nothing of the original data set remains. It is gone. Poof. Broken up and atomized into countless nodes with their own weights, like rudimentary neurons.


If people are going to complain about AIs learning from humans, then they may as well try and forbid people from learning things from other people. It's a futile, meaningless exercise.
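
To make that concrete, here is a toy sketch (my own illustration, PyTorch assumed) of what actually ends up on disk after training -- a bag of weight tensors, with no copy of the training images inside:

```python
# Toy sketch (my own illustration, PyTorch assumed): after training, what gets
# saved is a dictionary of weight tensors -- the training images themselves are
# not stored anywhere in the model file.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# ... a training loop over an image dataset would run here ...

torch.save(model.state_dict(), "model.pt")   # only parameter tensors are written

state = torch.load("model.pt")
print({name: tuple(t.shape) for name, t in state.items()})
# {'0.weight': (256, 784), '0.bias': (256,), '2.weight': (10, 256), '2.bias': (10,)}
```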
 
Wonder if this is how portrait painters felt about photographs in the 19th century.
Well, people still get portraits, so *shrugs*

AI art to me is a joke, because it basically violates fundamental copyright. You're not actually teaching an AI how to make something from scratch. You're teaching it to take millions of images and put them together like a Frankenstein jigsaw puzzle. You can't make an AI do something from scratch and sorry, I don't buy the idea that 'JUST RAY TRACING'. You're taking copyrighted images, breaking them down, and using automated photoshop to push them into something coherent.

Sorry if I don't find that amazing or revolutionary. Wake me up when an AI can create something from scratch by just the fundamentals of art, not some bullshit that just amounts to image scraping and photoshop based on other people's work.
What is the difference between an AI imitating a human, and a human imitating another human?

It is entirely possible for a human being to spend all day tracing Greg Rutkowski illustrations and studying his style until they can copy every aspect of it freehand, with no references or other aids.

Deep Learning AIs like GPT-3 and Stable Diffusion are doing exactly the same thing. When you train one of these AIs, nothing of the original data set remains. It is gone. Poof. Broken up and atomized into countless nodes with their own weights, like rudimentary neurons.


If people are going to complain about AIs learning from humans, then they may as well try and forbid people from learning things from other people. It's a futile, meaningless exercise.
Yeah, and it's called 'forgery'. The only benefit this serves is to corporations who don't want to pay artists anyway, and it will be another method of controlling the content produced. This will not allow normal people to make anything; the best programs and networks will be paywalled behind tens of thousands of dollars with proprietary libraries.

AI will allow even worse entertainment to be made, specifically by committee and people who have no creative literacy at all. This is not a benefit to humanity in the least.

I think you're a retard if you love AI art and you think it's the future. You're just surrendering more of humanity towards soulless corporate products. And this shit is completely soulless. Typing in a prompt doesn't make art. It's another step into making everyone drones for the wealthy.

Take away our ability to dream, to create, to do anything but consume their products and produce until you drop dead and another nameless, bodiless worker replaces you. The only dreams left will be shitty AI creativity stolen from a past when you actually had to make something from nothing.

Techbros are the biggest faggots on Earth. However, it is useful for some factors, like if you want to make a game but can't draw, for RPG shit, or to get ideas for your own art. Besides that, call me cynical with this AI garbage. All this shit has ever been smoke and mirrors, looking for reasons to replace ANY human input for the benefit of profit.
 
Well, people still get portraits, so *shrugs*

AI art to me is a joke, because it basically violates fundamental copyright. You're not actually teaching an AI how to make something from scratch. You're teaching it to take millions of images and put them together like a Frankenstein jigsaw puzzle. You can't make an AI do something from scratch and sorry, I don't buy the idea that 'JUST RAY TRACING'. You're taking copyrighted images, breaking them down, and using automated photoshop to push them into something coherent.

Sorry if I don't find that amazing or revolutionary. Wake me up when an AI can create something from scratch by just the fundamentals of art, not some bullshit that just amounts to image scraping and photoshop based on other people's work.
On topic of fundamental copyright, considering how these "AIs"(APIs and algorithms that form something together to make a composite image) work couldn't a class action lawsuit just destroy them? I mean sure it wouldn't annihilate them entirely if it succeeded but it would still greatly limit their prominence since they couldn't access a wider pool of work anymore, lest they want to run the gauntlet of getting sued.
 
On topic of fundamental copyright, considering how these "AIs"(APIs and algorithms that form something together to make a composite image) work couldn't a class action lawsuit just destroy them? I mean sure it wouldn't annihilate them entirely if it succeeded but it would still greatly limit their prominence since they couldn't access a wider pool of work anymore, lest they want to run the gauntlet of getting sued.
No, genie's out of the bottle. It's available as open source and people can create their own models by feeding them whatever pictures they want to train the program with, and there are dozens of groups doing exactly that. Attempting to squash it will probably slow development, but it can't cut it off completely since it's loose in the hands of the public now.
 
Then he tried searching for his name to see if a piece he had worked on had been published. The online search brought back work that had his name attached to it but wasn’t his.

“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”
This must be true because whenever I search for van Gogh's Starry Night, I can only ever find the thousands of contemporary imitations, edits, and Etsy products. I believe the original, along with his other works, has been lost forever on the sea of the worldwide web.
For that matter, I also have no idea how SEGA has survived with sonichu being so prolific.
they help search engines rank the images as well. This also makes [Rutkowski's work] easy to scrape
Well it is certainly doing wonders for boosting his portfolio search engine results for ArtStation, Twitter, and Instagram.


You're not actually teaching an AI how to make something from scratch. ... You can't make an AI do something from scratch
I'm sorry but I can't help but laugh at this. The comment directly above yours even posted an ELI5 tier video breaking down exactly how the AI quite literally learns from scratch. It may not be same way human beings learn art, but then plenty of artists never learn the fundamentals anyway.
sorry, I don't buy the idea that 'JUST RAY TRACING'.
Thank you, I'm going to steal this and use it whenever I want to sound like a boomer. Too funny, well done. :drink:
the best programs and networks will be paywalled
Call me hopeful, but wasn't this already attempted with the Novel AI model? The demand seems sufficient enough for some hackers to disagree anyway.
And this shit is completely soulless. Typing in a prompt doesn't make art.
I love the "What is Art?" conversation. I agree though, a lot of it is soulless, but so is much of what has been considered Art prior to AI. If curation can be considered Art, then I believe this has a shot. If anything I think the Art world could do with a little disruption.
Take away our ability to dream, to create, to do anything but consume their products and produce until you drop dead
I would lean more towards your way of thinking if the means of creation hadn't just been ripped from limited privileged and corporate hands and disseminated to the filthy masses. At least now I will have a choice of more locally-sourced free range slop to consume.
 
If you don't want something stolen, don't put it on a medium that can be replicated perfectly. You can't control what others put out there, but you can control what you throw out there and try to protect yourself. For example, any code I write outside work is licensed under the AGPL, which means in theory people have to abide by the four freedoms of software, but I don't have any pretense that it couldn't be used in violation of them.

Alternatively, Bukowski could enact the change he wants to see and learn to code. If the maths required to do AI in current year were difficult, there wouldn't be a literal continent of pajeets scamming their way into "ML engineering" positions.
 