It's a fun toy to play with, but it's very limited in the sense that it only gives you something you've seen before, or the absolute most average expression of an idea.
We get that a lot from manmade works. Everything is borrowing from something.
Suno isn't creating songs that could fool you into thinking they are long lost potential hits, but given years of improvement, who knows?
So, in general? It's stupid and gay, man. I've yet to see a real conversation between both sides without their white knights yes-manning every opinion they spit out without a second thought.
I will support the side that doesn't care about copyright and muh ethics.
So, I'm curious. What do you think of AI art, writing, music, etc. and do you feel it'll make things better or worse? Your stances on it?
It will amplify the amount of work that can be done by one person, especially for complicated projects like video games, films, and animations. Unskilled people will push a button, accept the first AI slop that comes out, and spam it on social media for likes. Skilled people will be able to create great things with it, but will still spend hours or hundreds of hours tweaking everything. It could lead to an atrophy of skills for many artists, photographers, cinematographers, etc., but fewer of them will be in demand anyway.
If it ends up allowing "the little guy" to create entertainment that is competitive with corporate entertainment, that's a good thing. It would also be nice to hasten the death of the Hollywood celebrity. Not only could any living celeb's likeness and voice be stolen (violating their "personality rights" and potentially limiting distribution), but you could cast dead celebs or create a perfect mixture that can't be legally targeted by an estate.
If judges rule that AI companies don't have to respect IP laws, we'll see a sea change in how the internet is used, since anything you put online instantly becomes the property of AI companies to launder through their algorithms to generate content without paying you.
Everything is already being scraped up and exploited before any such ruling. If the laws/courts become less amenable to that in the West, particularly the US and EU, then it will happen overseas, or companies will try harder to hide what goes into their models: make sure your LLM is a black box that isn't reproducing full-length NYT articles, and don't tell anybody about the training data, or just lie about it.
That's as ridiculous as saying you're not allowed to look at a bunch of things someone else did and imitate their style. Style isn't copyrightable. If they actually stole the IP in question, their ability to make fair use of it would be narrower than if they had obtained it entirely legally. Essentially, you'd outlaw Google Image Search too, and I don't think Google is going to put up with that. It's more or less an index of every single image within reach of a robot, and it makes a recognizable copy (a thumbnail) of every such image.
That's the argument I would use, but if SCOTUS is convinced otherwise, they could throw a wrench into the works.
Companies like Adobe are already starting to build models on data they have licensed, and others are training only on public domain works to avoid any copyright issues. In Adobe's case, users may have agreed to license their images without realizing they would be used for AI training. Oops, you got owned.
Adobe Stock creators aren’t happy with Firefly, the company’s ‘commercially safe’ gen AI tool
How Adobe’s bet on non-exploitative AI is paying off
They may win against a squadron of humans, but AI vs. AI would likely become a never-ending stalemate. Like Josh said in a recent stream, traditional warfare will probably come back in that event.
I wonder if Null cribbed his idea about nuclear war becoming obsolete and bringing back ground wars from the Enderverse or some other works of fiction. Can't have exciting wars between major powers in your fictional story with M.A.D., although Ukraine shows proxy wars are still allowed.
One nuclear-armed Poseidon torpedo could decimate a coastal city. Russia wants 30 of them.
Early reports of 100+ megatons were probably wrong, but good luck stopping a nuclear tsunami. There are a lot of creative ways to deliver nuclear devastation, and adversarial AI means the detection AI could be pitted against camouflage AI across thousands of miles of borders and coastlines. The detection AI has to succeed every time to keep everyone safe, but the opposing AI only has to succeed once to deliver a bomb that could kill millions. So I'm not convinced that AI will lead to the end of nuclear war. AI might be a totally irrelevant technology to discuss if technology from crashed UFOs is being used to make nuclear ICBMs that can strike anywhere on the planet in under two minutes. But don't worry, it's just a psyop... right?
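To put numbers on that "succeed every time vs. succeed once" asymmetry, here's a toy Python sketch; the detection rates and attempt count are made-up illustrations, not estimates of any real system:

def breach_probability(p_detect: float, attempts: int) -> float:
    # If each attempt is caught independently with probability p_detect,
    # the chance at least one attempt slips through is 1 - p_detect**attempts.
    return 1 - p_detect ** attempts

for p in (0.90, 0.99, 0.999):
    print(f"p_detect={p}: 100 attempts -> {breach_probability(p, 100):.1%}")
# p_detect=0.9: 100 attempts -> 100.0%
# p_detect=0.99: 100 attempts -> 63.4%
# p_detect=0.999: 100 attempts -> 9.5%

Even a 99% per-attempt detection rate leaves roughly a two-in-three chance of a breach over 100 attempts, which is why the defender's job compounds so badly.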