Executive Order 2023.02 (henceforth "NIGHTMARE WORLD WHERE FACT AND FICTION COEXIST")

like laws against murder, the idea would be to both mitigate it and create accountability for it. When Cain murdered Abel, the future was here and we couldn't put murder back in the bottle. Murder laws aren't there to eliminate all murder, just to limit it as much as possible. Same with future legislation against synthetic content, except the idea wouldn't be to limit its creation, just to limit the harm it can cause
im asking how the fuck you're planning to "regulate" an entire category of software
at most i could see governments criminalize the creation and distribution of deepfake audio/video material that mimics real people without their explicit consent, but good fucking luck trying to enforce something like that against anonymous online shitposters. i don't see anybody going to jail for posting the vid of joe biden saying YWNBAW.
 
How would you even legislate this?

It's only a matter of time before this can be done on a private system, and you won't be able to trace the origin of audio generated that way without essentially upending the internet as it currently exists.
Yeah, people are constantly talking about legislation, but I fail to see how this is ever going to be properly outlawed. You thought it was infeasible to outlaw material things like guns or drugs? Imagine if anybody could, with a couple button clicks and an online tutorial, replicate the machines used to manufacture those guns, then use standard household materials to generate more.

Look, I don’t like being a doomfag over new tech. The recent AI art seething made me hopeful and optimistic for the future of what AI can do. But this shit is genuinely terrifying. ElevenLabs can do whatever they want to try and limit creation of malicious sound clips. It might work for a time. But once an open-source version of the tools they use gets out, we are completely fucked. For comparison, I have a build of Stable Diffusion downloaded to my PC. It took around 15 minutes to set up after following one of many online tutorials. Now, even if Stable Diffusion is outlawed, I can use it as much as I want with no restrictions unless the feds themselves come to my house and personally take a bat to my computer. The same outcome is possible with any of these other AI tools; it’s just a matter of waiting for them to release.
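For a sense of scale, this is roughly all it takes to run it locally with the Hugging Face diffusers library (a rough sketch; the model ID and prompt are just illustrative):

# Minimal local Stable Diffusion run via diffusers; assumes
# `pip install torch diffusers transformers` and an NVIDIA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("out.png")  # generated entirely on local hardware

Once the weights are on your drive, no server, account, or terms of service stands between you and the output. That's the whole point.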

Also, when you have a clip of Tucker or Biden or some other notorious public figure ranting about trannies or niggers, of course that’s unlikely to cause real trouble. But what about a random nobody? What’s stopping a social media stalker from grabbing your voice off of Facebook, generating a clip where you talk about doing pedoshit or cheating on your wife or whatever, and sending that to all your family members? What’s your response when they come to confront you about it? “That’s not real.” It’s your voice! We know about AI-generated voice clips, but do your parents? Your grandparents? I don’t think we realize just how many people right now aren’t even aware that this technology exists. The best we can hope for is that it’s publicized enough to make people start actually being skeptical of voice recordings, and even then that relies on normies getting their heads out of their asses and questioning shit, which has historically proven to be completely futile.

I don’t know, guys, I’d say this doesn’t look good. There’s a very large shadow being cast over the truth of the future, and I think we’re just going to have to accept that.
 
So the company that makes this has realised that allowing people to create convincing fakes of almost anybody saying anything might just have fucked the world over. Their solution is a paywall and a verification tool, because that will solve everything.


Like, what the fuck did they expect???
You know, I saw an AI character creator that was released a while ago, and I thought its terms of service were unusually specific: they banned "colorful" language, encouraged people to be aware of "unconscious bias," and some other stuff. Until someone who isn't enslaved by the industry creates their own caricature editor, it looks like the only thing these AI products are capable of doing is acting like a politically correct chatbot.
 
Considering that folks are inevitably going to use these types of AI tech for malicious ends, I'm kind of wondering if there is a way to let people know that it's AI. With audio, for example, the AI could add a signature or copyright mark to the file that can only be seen on a spectrogram.
Yeah, I was brainstorming ways to make it identifiable, but it's just not easy. Some kind of code embedded in the file is a start, but with the Internet being as it is, someone will always find a way to tamper with things like that.
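A toy version of the spectrogram idea, assuming numpy/scipy and a 16-bit mono WAV (and, as said, trivially stripped by anyone who runs the file through a low-pass filter):

# Toy spectrogram watermark: mix in a faint 17 kHz tone, pulsed on and off,
# so a dashed stripe appears near the top of a spectrogram.
# Filename is a placeholder; assumes a 16-bit mono WAV at 44.1 kHz.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("clip.wav")
t = np.arange(len(audio)) / rate
tone = 0.002 * np.iinfo(audio.dtype).max * np.sin(2 * np.pi * 17000 * t)
tone *= (t % 1.0) < 0.5                      # pulse: 0.5 s on, 0.5 s off
wavfile.write("clip_marked.wav", rate, (audio + tone).astype(audio.dtype))

Anything along these lines only flags honest output, though; it can never prove a clean-looking file is real.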
 
People are so desperate for the sky to be falling :lit:
If you didn't know, this same retarded conversation occurred when the public gained the ability to make fake images with image editing software. The burden of proof shifted, but nothing else fundamentally changed.
If you're genuinely scared of this you are spiritually a woman
 
Maybe we will develop technology to tell if it is, but right now we don't really have a defense against it as far as I know. I won't pretend to know enough about AI to comment on whether or not it's possible to create an "AI checker" program, but I hope I'm wrong and we can have this great technology without the fear of it being used to falsely incriminate people.
I'm not deeply in tune with the latest neural network nonsense, but I'll try to explain as best I can. As of the last time I read about it, I believe what they do is build a generator, and since these things already require large amounts of labelled input, they can feed both the generator's training data and the generator's own output, which can be trivially labelled as fake, into an adversarial detector network to teach it to recognize what makes the fakes different. However, eventually the difference becomes noise, literally, and there will be no good way to tell them apart; the root cause is that neither the inputs nor the outputs are the real world. Seeing will be believing, and perhaps eventually not even that.
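If it helps, here's a toy PyTorch version of that loop; every shape and layer here is invented just to show the structure:

# Toy adversarial setup: a generator makes fakes from noise, a detector
# trains on trivially-labelled real-vs-generated pairs.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128))
det = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(det.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 128)        # stand-in for real labelled samples
    fake = gen(torch.randn(32, 16))    # generator output from random noise

    # Detector learns real -> 1, generated -> 0.
    d_loss = (bce(det(real), torch.ones(32, 1))
              + bce(det(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the detector call its output real.
    g_loss = bce(det(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

The endgame is exactly what I described: once the generator wins, the detector's remaining signal is literally noise.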
 
the only solution is to be paranoid and privacy obsessed. avoid social media in general, avoid cameras and microphones that are trying to record you, do not post image/video/audio material to the internet.
 
Shit will get really serious once AI is able to generate videos from scratch by typing a prompt.

That already exists - I forget the software, and it's incredibly rough at the moment - but I was able to type a text prompt of "monkey with a gun" and it generated an animated GIF of precisely that. Obviously it had some serious flaws around how the hands and the mechanical components of the gun looked, but it was still able to produce a rough yet recognizable approximation of the text prompt.
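No idea if it's the same tool, but one open pipeline of that sort can be driven in a few lines through diffusers; the model ID and options here are from memory, so treat the details as approximate:

# Sketch of prompting an early open text-to-video model. Approximate;
# this may not be the tool mentioned above.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
).to("cuda")

frames = pipe("monkey with a gun", num_inference_steps=25).frames
export_to_video(frames, "monkey.mp4")  # short and rough, but on-prompt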
 

Your example has my gears spinning with the potential problems this might cause in more intimate matters, like personal vendettas. I can see someone trying to fuck over someone they know without having any kind of motive beyond harming them.

I'm glad I don't have any audio of me talking out there, but realistically, if someone you know wanted to do this, the fake would only have to be as good as whatever shitty audio source it's being passed off as; a phone call recording would work in most cases.
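That last point is easy to demonstrate: telephone audio lives at roughly 8 kHz, so a one-line resample smears out most synthesis artifacts along with everything else (a scipy sketch; filenames are made up):

# Downsample a generated clip to telephone bandwidth so synthesis
# artifacts get masked along with everything above ~4 kHz.
# Assumes a 16-bit mono WAV; filenames are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

rate, audio = wavfile.read("fake_clip.wav")
phone = resample_poly(audio.astype(np.float64), 8000, rate)
phone = np.clip(phone, -32768, 32767)        # keep it in 16-bit range
wavfile.write("fake_call.wav", 8000, phone.astype(np.int16))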
 
The future is here and it's fucking terrifying. Reading the OP and then listening to the deepfake of Null reading the OP was fucking surreal.
 
It's not quite generating video from scratch by typing a prompt, but the Twitch show "Nothing, Forever" uses some premade renders of low-poly Seinfeld characters and locations and then, I believe, automatically takes all its dialogue and stage direction from GPT-3 to make a show that runs nonstop.
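The generation half of a setup like that is not much code either; here's a guess at its shape using the 2023-era OpenAI completion API (the renderer is a made-up stand-in, not the show's actual code):

# Sketch of a "Nothing, Forever"-style loop: an LLM writes each scene,
# a renderer plays it out with premade assets.
import openai  # pip install openai; uses the old Completion API

openai.api_key = "sk-..."  # placeholder

def render_scene(script: str) -> None:
    # Stand-in for the real renderer driving the low-poly characters.
    print(script)

context = ("Write the next short scene of a sitcom about nothing, "
           "as dialogue plus stage directions:\n")

while True:  # the show runs nonstop
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=context, max_tokens=400
    )
    scene = resp.choices[0].text
    render_scene(scene)
    context = context[-4000:] + scene  # rolling script window for continuity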
 
Gee, this changes a lot of context. I don't think I would have believed the Yaniv missing-clit audio in the age of common fakes. In the age of lies, you just have to stick to your instincts, I guess.
 
Nigger we ded.
#ML
 

They are going to push for irl internet IDs that will not protect you for real, right?
That's another way to do it. I hope that tensions with China will drive social forces to oppose surveillance, kind of like they did during the Cold War. The West positioned itself as the opposition to the Soviet bloc, where surveillance and censorship were rampant, and that ideological conflict put pressure on the U.S. and the rest of the West to uphold their ideals back then.
A real-name internet policy was tried in South Korea beginning in 2007. It was struck down in 2012 because of all the issues it caused without curbing trolling and bullying as intended. I hope the Korean example will serve as a deterrent too.
 