Deepfakes - When you no longer know what you're looking at

Hellbound Hellhound

kiwifarms.net
Joined
Apr 2, 2018
As each year passes, we are moving closer and closer to a world where it will no longer be possible to distinguish a genuine video or image from a fake one. Advances in computer-generated imagery and machine learning capable of photorealistic human image synthesis make it increasingly possible to create convincing doctored footage of people (public figures and regular citizens alike) saying and doing things they never did. See this video for reference:

The implications of this technology are potentially enormous: unscrupulous figures could use it to spread dangerous misinformation and destroy reputations; despotic governments and corrupt police officials could use it to plant evidence against people they don't like; media networks and government agencies could use it to exert ever greater control over what we understand about the world around us; and even the mere knowledge that this technology exists could cultivate a level of public cynicism that would invariably lead to a far more unstable and fractured political landscape.

What I am really interested in is what other people have to say about this. Is this technology genuinely a threat to truth, transparency, and democracy, or is it merely going to be a tool used for memes and revenge porn?
 
Everything you are worried about is already happening through the MSM and social media. Deepfakes would likely reduce the effectiveness of this disinformation by being so blatant about it, imo. <INSERT CELEBRITY NAME HERE> said what?!? It would make people more distrustful. It's interesting that at a time when trust in the media is very low, people still readily believe it if it's what they want to hear. Deepfakes are meme technology, because nobody needs them to spread disinfo; you just need people to want to believe what you're saying.
 
Reminds me of this movie:
[movie poster attachment]
Also The Running Man (the short story, not the movie) and that shitty Batman movie where Jim Carrey played the Riddler.

It's a scary thing we're moving toward. I doubt decent fakes like the Obama one presented above (which you can really tell is fake if you watch the mouth) would fool anybody who isn't already fooled by satirical articles posted on Facebook or wherever else normie faggots go for news.
 
As deepfakes get more common, many models will be created to validate how "real" a video is. These models will be trained for this purpose alone, and they will be far better than the human eye at spotting the "remnants" a deepfake AI leaves behind.

With that said, I think the output of technology like this will be pretty much impossible to verify in a couple of years. And when that time comes, everything that can be faked will be considered fake until proven otherwise.
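At bottom, such a detector is just binary classification over artifact features. A toy sketch of the idea in pure Python — the single "artifact score" feature and its distributions are made up for illustration; real detectors learn features directly from video frames with deep networks:

```python
import math
import random

random.seed(0)

def make_samples(n, fake):
    # Hypothetical 1-D "artifact score" (e.g. blending-boundary energy);
    # in this toy setup, fakes tend to score higher than real footage.
    mu = 0.7 if fake else 0.3
    return [(min(max(random.gauss(mu, 0.1), 0.0), 1.0), 1 if fake else 0)
            for _ in range(n)]

data = make_samples(200, fake=False) + make_samples(200, fake=True)
random.shuffle(data)

# Train a logistic-regression detector with plain gradient descent.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted P(fake)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict_fake(x):
    """True if the detector flags this artifact score as a deepfake."""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

acc = sum(predict_fake(x) == bool(y) for x, y in data) / len(data)
```

With well-separated synthetic classes like these, the detector's training accuracy lands well above chance; the arms-race point stands, though, since the generator can in turn be trained against exactly this kind of detector.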
 
As deepfakes get more common, many models will be created to validate how "real" a video is. These models will be trained for this purpose alone, and they will be far better than the human eye at spotting the "remnants" a deepfake AI leaves behind.

With that said, I think the output of technology like this will be pretty much impossible to verify in a couple of years. And when that time comes, everything that can be faked will be considered fake until proven otherwise.

Oh please. Once people know that this type of video photoshop is out there, there will be people "spotting the pixels", so to speak. It just means that video isn't the high degree of evidence it used to be, and that your granny is going to be fooled by the next type of video rickroll.

If you look at that Obama video, it doesn't really sound like Obama and there's something off about his expressions as well.

When someone says something outlandish, they will claim they've never said it. People will look for artifacts in the video and might find some.
 
What I find scary is this being used as an argument to blindly follow 'trusted sources of information', as this video is trying to promote.

Fakes will get better, as will our (technological) ability to detect them. The real danger is the notion that 'true' information can only come from a select few sources. This is really what Buzzfeed is trying to promote with the video - the idea that you should only trust them, and others like them, to tell you the truth in a world of credible fakes.
 
Oh please. Once people know that this type of video photoshop is out there, there will be people "spotting the pixels", so to speak. It just means that video isn't the high degree of evidence it used to be, and that your granny is going to be fooled by the next type of video rickroll.

If you look at that Obama video, it doesn't really sound like Obama and there's something off about his expressions as well.

When someone says something outlandish, they will claim they've never said it. People will look for artifacts in the video and might find some.
You are 100% right, my friend.

But I think this might be a problem when you have AI that can create more than just facial features and janky speech. Think posture, hand movements, tics, etc. It's in the details. There won't be AI walking amongst us like something out of a Black Mirror episode, but AI that interacts with us through digital media. And I think that can be a problem in some situations.

But to train a deepfake model you need A LOT of data on that person, and I mean A LOT. So it won't be a problem for people who don't have thousands and thousands of videos and pictures of themselves online. So for now you can basically only copy famous people and Twitch thots.
 
Fakes will get better, as will our (technological) ability to detect them. The real danger is the notion that 'true' information can only come from a select few sources. This is really what Buzzfeed is trying to promote with the video - the idea that you should only trust them, and others like them, to tell you the truth in a world of credible fakes.

Much like the people who trust "PolitiFact" or "Snopes" for their fact-checking, and the tech giants like Google and Facebook that leverage those sites into serving up "factual information" and censoring their version of fake news if it disagrees, even though the fact-checkers' own articles sometimes contradict their conclusions.
 
Much like the people who trust "PolitiFact" or "Snopes" for their fact-checking, and the tech giants like Google and Facebook that leverage those sites into serving up "factual information" and censoring their version of fake news if it disagrees, even though the fact-checkers' own articles sometimes contradict their conclusions.

Pretty sure "factcheck" is entirely a marketing thing. "We aren't news, we're factcheckers".
 
I find the giveaway for deepfakes is that the fake face always seems to be anchored to the model's nose and eyes; it doesn't change perspective properly when the model turns their head, and it can look floaty. I see the same problem with Hollywood anti-aging CGI, but most people don't seem to notice.
 
In the mid to distant future this sort of thing will start wars in underdeveloped nations.

Developed nations will probably be able to put a lid on geopolitical fuckery of this fashion but they will still have to contend with a fringe that will not be dissuaded.
 
Just mentioning the possibility of something being fake, edited or otherwise, will bolster confidence in both accusations and defenses.
With the idea fresh in their minds, people will see their own brand of fake news everywhere.

Also, I remembered this bit from Starship Troopers 3 that's highly relevant.
 
As deepfakes get more common, many models will be created to validate how "real" a video is. These models will be trained for this purpose alone, and they will be far better than the human eye at spotting the "remnants" a deepfake AI leaves behind.

With that said, I think the output of technology like this will be pretty much impossible to verify in a couple of years. And when that time comes, everything that can be faked will be considered fake until proven otherwise.

Stuff like this is already out there.

 
Even without that current technology, journos have done a fantastic job of pulling the wool over the public's eyes for decades, I might even say. I still remember journos sperging about how the Syrian government used a gas attack on civilians, even resorting to manipulating and editing the footage, I think.
 