Executive Order 2023.02 (henceforth "NIGHTMARE WORLD WHERE FACT AND FICTION COEXIST")

Once again, humanity said "we can" instead of asking whether "we should," and now we must face the consequences of our own actions.
 
I'm not so concerned about the ultimate ability to sniff out a deepfake as I am about human-generated content being banned through false positives, leading to a situation where only the most advanced AI can use social media without being flagged, while normal users get pinged by an algorithm and removed.

In other words, you know how you can barely talk about stuff on YouTube, but automated spammers basically have free rein? Imagine that, but with AI-generated content. I can see a distant future where entire websites are completely automated for the benefit of spreadsheets and investors, and any human users are sidelined in favor of bot accounts. Not too dissimilar from the bot problem TF2 was having.
The Joe Biden AI is interesting because you can tell his mouth is uncanny, but in real life he can barely finish a sentence without drifting into brain fog every two minutes.
The only people safe from this tech for now are retards who can't speak properly. It is incapable of producing a voice that doesn't sound smooth and professional. The Null-trained voice AI is proof that it can't be made to mimic his inability to speak, even with a direct transcript. Guys like Markiplier, though, who are known for their smooth voices, are in much more trouble when it comes to this tech.
 
Guys, just came back from the secret MATI stream that happens every Sunday.
Josh basically said word for word what he put here on the Sunday stream.

I can tell it's fake because the voice perfectly enunciates every word like a professional speaker. The REAL Josh sputters like a Ford Model T, laughs intermittently like a broken carburetor, stalls on "um"s, and butchers English and French alike. Why yes, I would love a "horse doovray," yummy.
 
Same with future legislation against synthetic content, except the idea wouldn't be to limit its creation, just to limit the harm it can cause.
We don't need hysterical laws based on knee-jerk responses to a new technology; those have never worked in the past and would not work now. We just need to enforce the laws that already exist, and punish breaking them with the new technology just as we punish breaking the very same laws with the old.

Holy fuck, there is no need to turn into a bunch of retarded fucking Luddites because "NEW THING BAD! REEEEEE!"
 
As funny as all this stuff is, it's pretty horrifying too.
Granted, they can be proven fake, but that won't stop retards from overreacting.
They already do though. There are incessant news stories about this very site that people believe even though there is literally no evidence at all, not even manufactured evidence, that supports the insane accusations that nevertheless get made.

Stupid people will believe stupid things, with or without evidence, or even in the face of mountains of contrary evidence.

That's what it's like to be stupid.

Banning AI won't solve this.
 
Oh, I agree and I'm not advocating banning this at all.

The people who will freak out over fake shit are the same people who only read headlines.
 
Considering the inevitability of people using this type of AI tech for malicious ends, I'm kind of wondering if there is a way to let people know that it's AI. With audio, for example, the AI could add a signature or copyright mark to the file that can only be viewed via spectrogram.
I think ElevenLabs is already embedding a hidden signature in its audio to identify it as fake. They say they have a way of tracking which accounts synthesized which audio, so they can dox the racists and transphobes. It's probably just their own server storing every generation attempt, but they could try embedding user account data in the audio if they wanted to.

All of this goes out of the window once open source models of similar quality appear.
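Since the spectrogram idea came up: here's a toy sketch of what a naive frequency-domain watermark could look like. This is NOT ElevenLabs' actual scheme (they haven't published it); every frequency, amplitude, and threshold below is invented for illustration. The idea is just to add faint tones at fixed carrier frequencies to encode bits, then read them back out of the FFT:

```python
import numpy as np

# All of these values are made up for the sketch, not taken from any real product.
SR = 16_000          # sample rate in Hz
BASE_FREQ = 6_000    # first carrier frequency of the hypothetical watermark band
BIT_SPACING = 200    # Hz between adjacent bit carriers
AMPLITUDE = 0.005    # tone amplitude, kept low relative to the signal

def embed_watermark(audio: np.ndarray, bits: str) -> np.ndarray:
    """Add a faint sine tone at carrier i for every '1' bit."""
    t = np.arange(len(audio)) / SR
    marked = audio.astype(float).copy()
    for i, b in enumerate(bits):
        if b == "1":
            marked += AMPLITUDE * np.sin(2 * np.pi * (BASE_FREQ + i * BIT_SPACING) * t)
    return marked

def read_watermark(audio: np.ndarray, n_bits: int) -> str:
    """Recover the bits by checking spectral energy at each carrier bin."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / SR)
    noise_floor = np.median(spectrum)
    bits = []
    for i in range(n_bits):
        idx = int(np.argmin(np.abs(freqs - (BASE_FREQ + i * BIT_SPACING))))
        # a steady tone piles its energy into one bin, far above the median noise floor
        bits.append("1" if spectrum[idx] > 10 * noise_floor else "0")
    return "".join(bits)
```

A real watermark would have to survive compression, resampling, and deliberate filtering, which this obviously doesn't; anyone who spots the tones in a spectrogram can notch them out, which is also why open-source models make the whole approach moot.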
 
inb4: "Oh, I got ratioed for saying that? Yeah the trolls generated that one..."

Seriously, Biden (and other politicians) says so much stupid shit that I can't tell what is generated by AI and what is generated by his handlers...
 
I have no patience to wade through 10 pages of pure shitposts, but I want to point out one thing: it is very often the lolcows themselves who post AI-generated deepfakes, and we archivists might inadvertently repost them on KF. I know we need to develop a certain degree of suspicion, but we are not foolproof, and independent verification is labor-intensive and not always possible. So, @Null, do you give us leniency when this happens?
 