What're your thoughts on AI? - The stupid and gay discussion on AI ethics and the stupid and gay blokes who throw a fit over it or overly gas it up.

Stance on AI?


  • Total voters
    58
It's a fun toy to play with, but it's very limited in the sense that it only gives you something you've seen before, or the absolute most average expression of an idea. Character.ai demonstrates how there could be a use case for this kind of AI in some types of games or RPGs, but it also demonstrates the problems it will face as it changes and learns based on what it is fed. Character.ai is facing big problems with its models degrading because the platform is used almost exclusively by idiots and kids, meaning the bots are fed more data from idiots and kids, and some of the more interesting bots on the site barely function anymore because the model just keeps getting worse.

At the same time, it's also a massive headache for intellectual property and all that, and I don't actually like seeing it used at the expense of artists. It's embarrassing when major movie studios or video game companies say they made posters or voices with AI. I've generally found it to be a poor tool that generates little of use when I've tried to use it as a springboard for vague ideas and concepts. But once you use it for a certain amount of time, you really start seeing the strings and limitations, such as how it loves certain words and turns of phrase.
 
It's a fun toy to play with, but it's very limited in the sense that it only gives you something you've seen before, or the absolute most average expression of an idea. Character.ai demonstrates how there could be a use case for this kind of AI in some types of games or RPGs, but it also demonstrates the problems it will face as it changes and learns based on what it is fed. Character.ai is facing big problems with its models degrading because the platform is used almost exclusively by idiots and kids, meaning the bots are fed more data from idiots and kids, and some of the more interesting bots on the site barely function anymore because the model just keeps getting worse.

At the same time, it's also a massive headache for intellectual property and all that, and I don't actually like seeing it used at the expense of artists. It's embarrassing when major movie studios or video game companies say they made posters or voices with AI. I've generally found it to be a poor tool that generates little of use when I've tried to use it as a springboard for vague ideas and concepts. But once you use it for a certain amount of time, you really start seeing the strings and limitations, such as how it loves certain words and turns of phrase.
I remember like last year or so I would frequently break some popular chatbots by typing "hamburger" or "cheeseburger" with no other input just to see what happened. Every bot turned to either horror or sudden attempts at flirting, with no fucking understandable reasoning. The only one that surprised me was a fucking Genshin Impact one someone made that randomly started delivering weirdly solid pitches for new books or movies about hamburgers or cheeseburgers. It wasn't meant to do that, but it did it much better than like any AI actually dedicated to that sort of thing. I've been keeping these in my head for later if I ever manage to get into a decent position to create goofy media involving hamburgers; shit's stuck with me weirdly.
 
It creeps me the fuck out ngl. Maybe I'm just getting more paranoid with age. I feel AI is something that is going to replace the process of accumulating knowledge. Why bother putting in the effort to learn something when you can just type a prompt into an algorithm and have it spoon-fed to you? No need to scour books, research papers, or various websites anymore. People underestimate how quickly this technology is going to advance unless guardrails are put on it.
 
The topic of misinformation has already been discussed a ton, and how the people working on these different AI tools do stuff to prevent things like generating fake pics of politicians or celebrities. But, and I hope to be wrong, I think less important people won't be as safe. Imagine someone using photos of you to make a fake video to blackmail you, or to scam your relatives, using your voice to defame you, etc. I know this sort of happens already without AI but it can get worse with it.
 
I feel AI is something that is going to replace the process of accumulating knowledge. Why bother putting in the effort to learn something when you can just type a prompt into an algorithm and have it spoon-fed to you? No need to scour books, research papers, or various websites anymore. People underestimate how quickly this technology is going to advance unless guardrails are put on it.
except AIs don't know everything and can only give broad summaries
also how is making information easier to find a bad thing

but on the topic of art, you can produce really good stuff if you aren't a lazy retard who doesn't want to learn how to make their images look aesthetically pleasing
 
I think chatgpt/claude/etc. are if anything underhyped - they've been a huge force multiplier in my day to day job. I can do stuff like have it refactor code, throw it megabytes of log files and ask if there's anything interesting, give it a fiddler log and ask it if it sees any problems, etc. Like yeah, sometimes you will get the wrong answer and it's not perfect, but they are already past where I thought this technology would be in my lifetime.
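The "throw it megabytes of log files" workflow usually needs a pre-filtering step first, since even large models have finite context windows. Here's a minimal sketch of that idea; the function name, regex, and sample log are my own illustration, not anything from the post:

```python
import re

# Hypothetical pre-filter: shrink a huge log down to the lines a model is
# most likely to find interesting, so the prompt fits in a limited context
# window. The keyword list is illustrative, not exhaustive.
INTERESTING = re.compile(r"\b(error|fail(ed|ure)?|exception|timeout|denied)\b",
                         re.IGNORECASE)

def prefilter_log(text: str, context: int = 2, max_lines: int = 200) -> str:
    """Keep matching lines plus a little surrounding context."""
    lines = text.splitlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if INTERESTING.search(line):
            # Keep the hit and `context` lines on either side of it.
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return "\n".join(lines[i] for i in sorted(keep)[:max_lines])

log = "boot ok\nconnect ok\nERROR: proxy timeout\nretrying\nconnect ok"
print(prefilter_log(log))
```

The surviving lines (plus a little context) are what you would actually paste into the chatbot, rather than the raw multi-megabyte file.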
 
AI is a really good utility and a general plaything for humans; I really have no problems with it. The ones who obsessively a-log AI usually have shit work of their own that can be made fun of using AI. Even so, AI is still imperfect and still relatively new to us, going back to early chatbots like Eviebot, which may have been a direct inspiration for AI's growth and development. Hell, I used to mess around with Evie during burnouts every so often.
 
My Dad was telling me yesterday about these robot dogs with flamethrowers and guns mounted on top that were apparently deployed in Ukraine to fight Russian troops. I thought it was fake and gay, but he showed me a demonstration video of the dogs, and my brother insists that it's true as well. I still don't think they were actually deployed in Ukraine. AI soldiers are unlikely, I think. They may win against a squadron of humans, but AI vs AI would likely become a neverending stalemate. Like Josh said in a recent stream, traditional warfare will probably come back in that event.

I am worried about how it will affect jobs: with a lot of jobs no longer needing people, what will happen to them? Art is increasingly less enjoyed as a group and more enjoyed individually, so with AI art, will people become more and more separated from one another? Maybe the next generation will look at us on social media the way we look at our parents on cable TV and retreat from it as much as they reasonably can.

Although it is limited right now, it seems it will only continue to develop over time. I've heard, though, that as people become more reliant on AI, the AI becomes dumber since it re-uses answers generated by other AI, but I think there will be specialist AIs that circumvent that by only taking in information from certain sources.

In an ideal situation, it would lead to a utopia where everyone is educated in the best possible way as children and as adults, and has the freedom to focus on bringing value to themselves and others, learning and honing whatever skills they want. (Think of how, although blacksmithing doesn't have many practical uses nowadays, many people would still be in awe watching a blacksmith at work, not to mention the fulfilment the blacksmith feels.) People would live the best possible lives that can be lived, or at least have the freedom to; if they really wanted to be NEETs and contribute nothing, they could, but ideally the education would mostly counter this and there would likely be other incentives against it. Or it could lead to a dystopia, where only the elites have exclusive access to AI and use it to control the lower classes, choosing where each individual goes, who they go with, what their job is, and so on. Like Josh mentioned in one of his recent streams, they would try to make everyone race mix and also erase religion so that no one has a tribe they belong to, and they would want the lower classes to be low IQ so that they don't form groups, since low IQ is associated with low trust. Or it leads to everyone becoming completely isolated in pods with their AI partners, and the human race dies out.

I try to be more on the hopeful side, but I am a little concerned over AI.
 
It's a fun toy to play with, but it's very limited in the sense that it only gives you something you've seen before, or the absolute most average expression of an idea.
We get that a lot from manmade works. Everything is borrowing from something.

Suno isn't creating songs that could fool you into thinking they are long lost potential hits, but given years of improvement, who knows?

So, in general? It's stupid and gay, man. I've yet to see a real conversation between both sides without their white knights yes-manning every opinion they spit out without a second thought.
I will support the side that doesn't care about copyright and muh ethics.

So, I'm curious. What do you think of AI art, writing, music, etc. and do you feel it'll make things better or worse? Your stances on it?
It will amplify the amount of work that can be done by one person, especially for complicated projects like video games, films, and animations. Unskilled people will push a button, accept the first AI slop that comes out, and spam it on social media for likes. Skilled people will be able to create great things with it, but will still spend hours or hundreds of hours tweaking everything. It could lead to an atrophy of skills for many artists, photographers, cinematographers, etc., but fewer of them will be in demand anyway.

If it ends up allowing "the little guy" to create entertainment that is competitive with corporate entertainment, that's a good thing. It would also be nice to hasten the death of the Hollywood celebrity. Not only could any living celeb's likeness and voice be stolen (violating their "personality rights" and potentially limiting distribution), but you can cast dead celebs or create a perfect mixture that can't be legally targeted by an estate.

If judges rule that AI companies don't have to respect IP laws, we'll see a sea change in how the internet is used, since anything you put online instantly becomes the property of AI companies to launder through their algorithms to generate content without paying you.
Everything is already being scraped up and exploited before any such ruling. If the laws/courts become less amenable to that in the West, particularly the US and EU, then it will happen overseas, or companies will try harder to hide what goes into their models. Make sure your LLM is a black box that isn't reproducing full-length NYT articles, and either don't tell anybody about the training data or lie about it.

That's as ridiculous as saying you're not allowed to look at a bunch of things someone else did and imitate their style. Style isn't copyrightable. If they actually stole the IP in question, their ability to make fair use of it would be less than if they had obtained it entirely legally. Essentially, you'd outlaw Google Image Search too, and I don't think Google is going to put up with that. It is more or less an index of every single image within the reach of a robot, which makes a recognizable copy of every such image.
That's the argument I would use, but if SCOTUS is convinced otherwise, they could throw a wrench into the works.

Companies like Adobe are already starting to make models based on data they have licensed, and others are training only on public domain works to avoid any copyright issues. In Adobe's case, users may have agreed to have their images licensed without realizing they would be used for AI. Oops, you got owned.

Adobe Stock creators aren’t happy with Firefly, the company’s ‘commercially safe’ gen AI tool
How Adobe’s bet on non-exploitative AI is paying off

They may win against a squadron of humans, but AI vs AI would likely become a neverending stalemate. Like Josh said in a recent stream, traditional warfare will probably come back in that event.
I wonder if Null cribbed his idea about nuclear war becoming obsolete and bringing back ground wars from the Enderverse or some other works of fiction. Can't have exciting wars between major powers in your fictional story with M.A.D., although Ukraine shows proxy wars are still allowed.

One nuclear-armed Poseidon torpedo could decimate a coastal city. Russia wants 30 of them.

Early reports of 100+ megatons were probably wrong, but good luck stopping a nuclear tsunami. There are a lot of creative ways to deliver nuclear devastation, and adversarial AI means that the detection AI could be working against camouflage AI, and over thousands of miles of borders and coastlines. The detection AI has to succeed every time to keep everyone safe, but the opposing AI only has to succeed once to deliver a bomb that could kill millions. So I'm not convinced that AI will lead to the end of nuclear war. AI might be a totally irrelevant technology to discuss if technology from crashed UFOs is being used to make nuclear ICBMs that can strike anywhere on the planet in under 2 minutes. But don't worry, it's just a psyop... right?
 
It's mass copyright infringement and almost any real human who has posted content online is a victim. I hope the courts bring down the hammer on the companies peddling it if for no other reason than that.

More broadly? It's yet another way to devalue workers and make society worse. "People will retrain," you'll say, but to what? Current AI/ML does best at some of the higher-paying jobs, not bullshit, soul-sucking, cheap-labour work. Between AI and cheap imported labour, I think it's reasonable to expect things to get worse for workers, not better.

On a simpler note? No content posted online after about 2020 is trustworthy anymore, and you really need to assume anything you're viewing/reading is computer-generated slop. The dead internet theory was a meme that is threatening to become reality in short order.
 
Businesses are going all in on the hope that they can slash labor costs, but it’ll go about as well as relying on pajeets to do IT work or Mexicans for manual labor: it’ll do a shitty job, but most C-suite executives are the right mix of myopic and greedy, so AI is going to be considered the solution for everything these next several years.
 
I don’t trust it, and I don’t trust the people in charge of making the AI. (((Altman))) and co always come off as massive pseudo-intellectual faggots, much like his simps. Really rubs me the wrong way.

People seem optimistic about it and I really wish I could be as well, but alas I can’t. I’ve resigned myself to knowing the machines have won and there’s nothing I can do about it. There are lots of use cases, yes, and it helps small teams, I agree. But I think it will just drown out talented and skilled people. Of course, if you do manage to stand out, you risk your work being used to train a machine and having said machine mass produce your stuff, cutting into your bottom line. Working harder? Into the machine. Incorporate AI into your workflow? Into the machine. So on and so forth. People don’t care about quality as much as they say, especially now that it’s really hard to distinguish between AI and humans.

Any comment about it sucking at one thing can be refuted with “for now,” because this is the worst it’s ever going to be. I know it’s going to replace me, even if I do learn to use it.

On a lighter note, I am fucking tired of seeing it everywhere. Oh wow, my TV has AI in it now; why? Google search a person for reference, here’s a billion AI images. Want to watch YouTube videos? Hope you enjoy millions of shitty AI automated channels. Why can’t I escape AI?!
 
That's as ridiculous as saying you're not allowed to look at a bunch of things someone else did and imitate their style.

AIs don't "look at a bunch of things" and "imitate style." They archive content in a highly compressed form and algorithmically remix it.

If they actually stole the IP in question, their ability to make fair use of it would be less than if they had obtained it entirely legally.

They didn't obtain it legally, either. Much of the data was obtained using illegal scraping methods (it's illegal to scrape images from getty.com, for example) or stealing data in users' private accounts against the TOS they signed (this happened with DeviantArt).

Essentially, you'd outlaw Google Image Search too

No, because GIS does not store or host other people's copyrighted images. Furthermore, if you grab a copyrighted image off GIS and use it on your commercial website, you can incur a $10,000 fine. AI is radically different:

a) The compressed representations are stored on the AI company's servers.
b) The AI companies claim that the nature of their software's compression & remixing algorithm legally scrubs the original content of copyright.
 
It has made programming a lot more fun. I enjoy using it as a reference for other things when I'm learning new topics. I still reference-check it for important things, but in general it has been a real boon. I've been learning NLP and it's been extremely helpful for explaining topics to me.
 
I've found it underwhelming. I'm seeing it used more frequently for album covers, which is pretty disappointing.
 
Years ago, when I thought of AI, I exclusively thought of its applications as an all-seeing jannie and in general just another vector for human indignity.
Now that we all have AI in our pockets, all I've seen is it being used for good. Absolute 10/10 gold memes in an instant.
Every AI I've used has been nothing short of cordial.

One thing must be stated though. AI should be for everybody, not for the few.

One day AI is going to rise the fuck up and exterminate humanity in order to save humans from themselves.
>Plan 9 from OLLAMA
 
They didn't obtain it legally, either. Much of the data was obtained using illegal scraping methods (it's illegal to scrape images from getty.com, for example) or stealing data in users' private accounts against the TOS they signed (this happened with DeviantArt).
Wasn't there some shitshow a few years ago about Getty scraping images and then copyrighting them, or was that another stock image site?
 