Executive Order 2023.02 (henceforth "NIGHTMARE WORLD WHERE FACT AND FICTION COEXIST")

 
I WANT TO GO BACK!!!

For a short clip or excerpt it could be very convincing, but listening to a lengthy sample of Tucker and (especially) Biden was extremely jarring. I'm not sure if it's an issue with the technology or the way it's currently being used, but they speak so fluidly and uniformly, without any stammering, digression, or tonal shift, that it's a bit uncanny. It's almost there, but there's still a short gap between the AI sample and how Generic Human X actually speaks in reality. Maybe the technology as it currently is can factor that in with the right inputs, or maybe it will in a future update or alternative version? Scary, either way.

It's like listening to a clip of Joe Biden speaking in the 80s and being taken aback that he can get a sentence out coherently and fluidly without tripping over his words or getting extremely confused, but taken up to 12.
 
I don't know if this is the printing press of our times, but I feel like Pandora's box has been opened: if I wanted to, I could now make a perfect replication of someone's voice saying whatever I want them to say. That will absolutely be used for malicious intent, and we might soon see court cases where someone innocent is wrongly declared guilty through AI-generated evidence.
What's worse, this is a Wild West situation where there are no explicit laws that address this beyond what can be interpreted from existing libel or false-evidence laws. We've now crossed the Rubicon where AI can generate audio and video indistinguishable from actual speech, and that's genuinely fucking horrifying.
EDIT:
I WANT TO GO BACK!!!

In all seriousness: For a short clip or excerpt it could be very convincing, but listening to a lengthy sample of Tucker and (especially) Biden was extremely jarring. I'm not sure if it's an issue with the technology or the way it's currently being used, but they speak so fluidly and uniformly, without any stammering, digression, or tonal shift, that it's a bit uncanny. It's almost there, but there's still a short gap between the AI sample and how Generic Human X actually speaks in reality. Maybe the technology as it currently is can factor that in with the right inputs, or maybe it will in a future update or alternative version? Scary, either way.

It's like listening to a clip of Joe Biden speaking in the 80s and being taken aback that he can get a sentence out coherently and fluidly without tripping over his words or getting extremely confused, but taken up to 12.
There could be very easy workarounds for this: you can splice two generated audio files with more natural speech to create something seamless, or use background noise and other audio tricks, like distortion, to hide the imperfections. But the AI can even generate imperfections in speech, like stutters and accents, or replicate the unique ways we speak, like lisps or whistling sounds.
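For what it's worth, the masking and splicing tricks are trivial to script. A minimal numpy sketch (the noise level, crossfade length, and function names here are made up for illustration, not anything a real tool uses):

```python
import numpy as np

def mask_with_noise(voice: np.ndarray, noise_level: float = 0.02,
                    seed: int = 0) -> np.ndarray:
    """Mix low-level white noise into a waveform (samples assumed
    in [-1.0, 1.0]) to help hide synthesis artifacts."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_level, size=voice.shape)
    # Clip back into the valid sample range after mixing.
    return np.clip(voice + noise, -1.0, 1.0)

def splice(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Join two clips with a linear crossfade so the seam is
    less audible than a hard cut."""
    fade = np.linspace(0.0, 1.0, overlap)
    seam = a[-overlap:] * (1.0 - fade) + b[:overlap] * fade
    return np.concatenate([a[:-overlap], seam, b[overlap:]])
```

Anyone with a weekend and a Python tutorial could do this much, which is kind of the point.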
 
Potentially unpopular opinion: I hope all this AI, ML, deepfake stuff gets outlawed completely.
While it seems fun to mess around with, I see no applicable use of this technology that isn’t outright malicious.
Potentially unpopular opinion: I hope all this gun, bullet, and second amendment stuff gets outlawed completely.
While it seems fun to mess around with, I see no applicable use of this technology that isn’t outright malicious.
 
I for one welcome news being bullshit and everything being fake and gay. God bless machine learning. History was always written with a "just trust me bro" qualifier.
Putting less trust in everything they see on the internet is unironically the best thing that could happen to 90% of people. Deepfakes becoming a fad might just snap a few people out of their social media stupor as they realize that, hey, it actually is super easy for anyone to lie on the internet, even though we were told that only the white-supremacist alt-right nazis do that!
 
Potentially unpopular opinion: I hope all this AI, ML, deepfake stuff gets outlawed completely.
While it seems fun to mess around with, I see no applicable use of this technology that isn’t outright malicious.
I don't think they should be outlawed, but maybe heavily regulated. The potential for harm with AI and ML technology is great, and it can affect anyone at any time. All they need is a minute-long clip of your voice, and the FBI could knock on your door over an anonymous tip from a concerned friend with an "audio recording" of you. This technology will only get better with time, so even the small imperfections in it now will be patched up, and human error in judgement is infinite.
 
So it's this AI that's putting the nigger word into my posts and not me? Pretty sneaky sis!
The people who say 'nigger' have always been the pioneers. We journey to the tops of mountains just to say nigger. We say nigger from the darkest regions of the moon. We go to deep sea thermal vents and say nigger to octopusses. So when new tech comes out, what do you expect? The AI will come out of the gate uttering nigger in every direction. Eventually it will be tamed, but until then, nigger.

That's cool and all, but this picture of Seth Rogen is totally real.
View attachment 4441168
niggers snigger
niggers snigger
niggers snigger
 
So the company that makes this has realised that allowing people to create convincing fakes of almost anybody saying anything might just have fucked the world over. Their solution is a paywall and a verification tool, because that will solve everything.

View attachment 4442360
View attachment 4442364
View attachment 4442368

Like, what the fuck did they expect???
This quote has never been more relevant than it is now.
goldblum-quote.jpeg

They better keep the source code under lock and key because if this ever becomes open sourced, we're all fucked.
 
Oh god, we really are at a crossroads here, aren't we? That fun part at the beginning of every new era where people panic because the new tech is gonna cause the end of the world, only for it to end up commonplace and normal.
I'll admit, I'm a bit worried the trend of everything working out OK could break this time. Still: hope for the best, plan for the worst.
 
I don't think they should be outlawed, but maybe heavily regulated. The potential for harm with AI and ML technology is great, and it can affect anyone at any time. All they need is a minute-long clip of your voice, and the FBI could knock on your door over an anonymous tip from a concerned friend with an "audio recording" of you. This technology will only get better with time, so even the small imperfections in it now will be patched up, and human error in judgement is infinite.
The way I see it, it'll just become common knowledge that audio can be easily fabricated, and so it won't be taken seriously as evidence. People can make convincing Photoshops now that weren't possible decades ago, and that tricks people every now and then, but it doesn't have massive repercussions because people are aware the tool exists.
I also wouldn't be surprised if there are people developing tools to detect AI-generated audio at a similar pace to the people developing tools to create it, but my guess is that arms race will end in favor of the AI stuff becoming more and more believable.

The problem will be that genuine audio and video evidence can no longer be trusted, rather than that deepfakes will doom people.
 
Considering the inevitability of folks using these types of AI tech for malicious ends, I'm kind of wondering if there is a way to let people know that something is AI-generated. With audio, for example, the AI could add a signature or copyright mark to the file that can only be viewed via spectrogram.
The only way I can think of to do that is to keep AIs closed source on centralized servers, with every generation request recorded in a publicly available ledger. That way there would be some trail to check whether a video, text, image, or sound was generated using AI. Still, it doesn't stop open-source projects from popping up, or people and entities powerful enough to bypass that system. Even with some sort of government-mandated regulation and auditing, there would still be a deepfake black market.
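A toy version of that ledger is just an append-only hash log. A minimal Python sketch (the file format, paths, and names are my own invention, and note the scheme's big weakness: any re-encode of the audio changes the hash):

```python
import hashlib
import json
import time

LEDGER_PATH = "generation_ledger.jsonl"  # hypothetical append-only public log

def record_generation(audio_bytes: bytes, model: str,
                      ledger: str = LEDGER_PATH) -> str:
    """Append a hash of a generated file to the ledger and return it."""
    entry = {
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "model": model,
        "timestamp": time.time(),
    }
    with open(ledger, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["sha256"]

def was_generated(audio_bytes: bytes, ledger: str = LEDGER_PATH) -> bool:
    """Check whether a file's hash appears anywhere in the ledger."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    try:
        with open(ledger) as f:
            return any(json.loads(line)["sha256"] == digest for line in f)
    except FileNotFoundError:
        return False
```

That's the whole "trail": anyone can hash a suspect file and grep the log. Which also shows why it fails the moment the file is re-encoded or the generator isn't on the ledger.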

The first steps to curb the flood will be paywalls and bans on generating material that uses public figures, kind of like the image- and text-generating AI projects are starting to do.
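The spectrogram-signature idea from the quote is basically audio watermarking. As a toy illustration only (real watermarking schemes are far more robust; the 18 kHz carrier, amplitude, and detection threshold here are arbitrary assumptions), you could hide a faint fixed-frequency tone and then look for it with an FFT:

```python
import numpy as np

SAMPLE_RATE = 44100
MARK_FREQ = 18000.0  # near-ultrasonic carrier (arbitrary choice)
MARK_AMP = 0.002     # quiet enough that most listeners won't hear it

def embed_mark(audio: np.ndarray) -> np.ndarray:
    """Add a faint fixed-frequency tone across the whole clip."""
    t = np.arange(len(audio)) / SAMPLE_RATE
    return audio + MARK_AMP * np.sin(2 * np.pi * MARK_FREQ * t)

def has_mark(audio: np.ndarray) -> bool:
    """Check for unusually strong energy at the carrier frequency."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1.0 / SAMPLE_RATE)
    bin_idx = int(np.argmin(np.abs(freqs - MARK_FREQ)))
    # Epsilon keeps pure silence from tripping the comparison.
    floor = np.median(spectrum) + 1e-12
    return bool(spectrum[bin_idx] > 10 * floor)
```

Of course, a low-pass filter or a lossy re-encode would strip a mark this naive, which is why I doubt any watermark survives a determined bad actor.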
 
I'm going to be stubbornly optimistic in this doompost-filled thread and point out that video fakery has existed since the creation of video, and likewise, voice impersonation has existed for as long as people could impersonate voices. The only thing that's changed is the gap in effort for the layman, and trying to regulate that out of existence will only serve the rich and powerful who already rule over you. That's not even accounting for all the non-malicious and unexplored uses of machine learning that could be crippled by forceful intervention. The best you can do about this kind of thing is have people be transparent about their fakery.

All they need is a minute-long clip of your voice, and the FBI could knock on your door over an anonymous tip from a concerned friend with an "audio recording" of you.

Here's a tip: there are many cases where clear video footage of you committing a crime still isn't sufficient evidence for a conviction. Actually investigating the alleged crime would more than likely put holes in the prosecutor's case.

 