Deepfake bot on Telegram is violating women by forging nudes from regular pics - TLDR: t.me/DeepNudeChat_Bot

Seems everything AI does now is creepy or stupid. "Algorithm"-driven search results are infuriatingly unrelated. Articles written by AI seem to be just rambling drivel. Automated phone systems have creepy cheerful voices that keep saying "sorry, I didn't get that". Then there's the wonderful world of deepfakes.

Yet technophiles like Elon Musk and Zoltan Istvan think we're going to "merge with AI" or "upload our minds to the cloud"?

Not really a "now" thing; machines have always been shit at doing human things, which is why we have human jobs. It's just that all of a sudden, corporations think they're neat, so they're deploying them en masse to do human tasks, which they absolutely can't do, because the designers are morons.

AI is a novelty right now; it's kind of like a second pre-AI-winter era. Everyone wants it, primarily corporations. It won't go away, and it'll probably improve at an extremely slow and arduous pace. But right now it really is just a fad to shove it anywhere they can stick it, and that's why you're so exposed to its rancid behavior. It's "good enough" now.

It's kind of like a repeat of the robot fad from the 80s/90s. Everything had to be a robot, people were enamored with them and talked about them all the time, and there was an endless stream of the worst fucking pieces of shit you'd ever seen, like Nintendo's R.O.B. They were all impractical, hulking golems of metal and plastic that didn't actually do anything. Eventually robots actually became practical, like the Roomba, and now we don't really care about them anymore; they're just a part of life, because we're on the edge of perfecting them. The same will happen with AI: right now everyone is extremely obsessed with it because it's a fringe technology, but not actually useful for anything. It's promising, but not practical. Once it becomes useful, it'll become invisible, just like how the Roomba isn't a hulking bipedal robot, it's an invisible disk on the floor. The goal of all technology is to be invisible.
 
It seems like AI's hitting a point of diminishing returns, too. Siri launched over 9 years ago now, and it hasn't seemed to get any faster or smarter. And wasn't voice processing one of the first AI things to hit the consumer market? Not to mention, nobody likes YouTube recommendations, and everybody's had that situation where someone sends you one video of a topic you don't care about, and suddenly you're fighting off videos of that ilk for months.

It feels kind of like how, 20 years ago, there was so much clamoring about how great the graphics were on the Dreamcast and PlayStation 2, and how photorealistic graphics were just a few short years away. Cut to today, and every game character's clothes still look noticeably plastic, and their movement is still janky when it isn't 100% scripted. Silent Hill 2 is older than a small chunk of the userbase here, and it already had characters with clearly recognizable facial expressions in-engine that look better than The Last of Us Part II's cast. Resolution and post-processing effects have come a long way, but none of that is a substitute for good art direction.

And much like with video game characters, I'll be astounded if deepfakes can ever convincingly clone a human who isn't naturally stiff as a board. An expressive person who fidgets a lot will be hard to fake, as there's a lot of that CGI smoothness in all the movement that never looks natural. Even those deepfake voice generators don't sound natural for any sort of lengthy speech, as they need an enormous vocal library from a person and have to fudge anything they can't just pull from their database.

Maybe things will change when quantum computers become widespread, whenever (if ever) that happens.
 
Maybe things will change when quantum computers become widespread, whenever (if ever) that happens.

Quantum computing has nothing to do with AI. It's the model that matters. The dumbfucks making all the models are children straight out of college who think everything can be solved through convolutions. It's difficult to explain, but basically the field is incredibly stagnant; there's virtually zero innovation, just people trying to make a shitty old algorithm work by endlessly tweaking it. It's the same as any field in software: it's too difficult to find the rare geniuses who can actually design a new architecture. Easier to hire offshore contractors and college kids to rehash the same old tired convolutional algorithms, add a few extra functions in between them, increase the training set, and now it can spit out a React app. Which is not revolutionary, it's a gimmick. But people think it's impressive, so they can stay on the high of assumed innovation while they continue to do nothing but speculate "but WHAT IF we actually could make an AI that can actually do shit, and HOW?"

The only silver lining is that Bayesian models have been getting more popular recently. They're still nowhere near escaping the oppressive shadow of ANNs/CNNs, but they're far more promising for actual intelligent agents.
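For what it's worth, here's a minimal sketch (not from this thread, purely illustrative) of what a simple Bayesian model looks like in practice: a Gaussian Naive Bayes classifier via scikit-learn. All the feature names and data below are made up for the example.

```python
# Minimal sketch of a Bayesian model in practice: Gaussian Naive Bayes via
# scikit-learn. All data and feature names here are made up for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy features: [height_cm, weight_kg], with made-up binary labels.
X = np.array([[170, 65], [180, 80], [160, 55], [175, 75], [155, 50], [185, 90]])
y = np.array([0, 1, 0, 1, 0, 1])

model = GaussianNB()
model.fit(X, y)

# Unlike a typical CNN classifier, this is an explicit probabilistic model:
# predict_proba returns posterior class probabilities for a new sample.
sample = np.array([[172, 70]])
print(model.predict(sample))        # predicted class
print(model.predict_proba(sample))  # posterior probability per class
```

Whether that scales up to "actual intelligent agents" is exactly the open question, but at least the uncertainty is a first-class part of the model instead of an afterthought.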
 
I kind of miss the robot craze.

I think it was more fun than the AI craze of Current Year.

Robots were fun because it was all speculation on how they can serve us.
AI is the exact opposite: how can we serve them? (By "them" I don't mean AI, I mean the communist corporations that employ it.) When we talk about AI, it's in the tone we'd take when talking about an epidemic rather than a useful tool.
And at the end of the day, it's all hilarious panicking, because you've seen in this very thread the garbage output of these AIs and how much people overreact to it.
Don't worry, by the time AI becomes a genuine threat to us (and again, by "AI" I mean human-controlled corporations using AI, I definitely don't mean SHODAN), it will be invisible and unnoticeable. More like the Patriots' AI than GLaDOS.
 
re: AI stagnating. I do not claim to be an expert at all, but I can offer my opinion as someone who has worked on AI applications for consumer products.

AI is very... very difficult to apply to real-world problems and can often turn into a years-long vortex of trial and failure if you're not careful. We know this, so we timebox and check in on progress. On top of that, trying to make innovative AI, or AI that's overly clever for its intended purpose, tends to yield inferior results (I would point to Google's new search methods as an example of this).

Because of this, we often end up coming up with a MUCH simpler solution that gets the job done better than our "clever" AI can, and we just abandon whatever the hell we were doing before.
 
Robots were fun because it was all speculation on how they can serve us.
AI is the exact opposite: how can we serve them? (By "them" I don't mean AI, I mean the communist corporations that employ it.) When we talk about AI, it's in the tone we'd take when talking about an epidemic rather than a useful tool.
And at the end of the day, it's all hilarious panicking, because you've seen in this very thread the garbage output of these AIs and how much people overreact to it.
Don't worry, by the time AI becomes a genuine threat to us (and again, by "AI" I mean human-controlled corporations using AI, I definitely don't mean SHODAN), it will be invisible and unnoticeable. More like the Patriots' AI than GLaDOS.
Ding ding. Current AI isn't even about "how can we make NPCs behave like real other players" or "making robots personable"; it's all about data collection. This is why I think we can never have "true" robots, so you can't have an android wife that you can have great sex with and talk to about controversial topics without your name being added to a watchlist.
 
It's fun to see what kind of ghastly shit it spits out when you upload random images. I used these five:

View attachment 1689049 View attachment 1689048 View attachment 1689047 View attachment 1689046 View attachment 1689045

And here's what it came up with:

It worked the best on Hitler :heart-full:


My take: AI is very good right now at a very limited subset of tasks, like classifying whether an image is a bird or a car, quickly translating something so you can get the rough idea, or generating close-to-human-looking text. It is far from being general purpose and working out of the box, but we're slowly getting there. There's a lot of progress sitting in journals.
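As a concrete illustration of that narrow-but-solid case (again, just a sketch, not something from this thread): classifying a single image with an off-the-shelf pretrained model. This assumes torchvision 0.13 or newer, and "some_photo.jpg" is a hypothetical file name.

```python
# Minimal sketch: classify one image with a pretrained ResNet-18 from torchvision.
# Assumes torchvision >= 0.13; "some_photo.jpg" is a hypothetical file path.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()  # the resize/crop/normalize preset these weights expect

img = Image.open("some_photo.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs[0].max(dim=0)
print(weights.meta["categories"][int(top_idx)], float(top_prob))
```

The flip side is that this same code is useless the moment the task drifts outside the 1,000 ImageNet categories it was trained on, which is basically the "very limited subset of tasks" part above.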
 
Photoshop produces more realistic results and is used to rile people up more often (see: Trump's parents with the KKK), yet there are far fewer fearmongering articles about it than about AI. The only real difference is that using a bot requires no skill or effort.
 
It might seem out of left field, but I believe the main thrust of this story is an attack on Telegram rather than "AI bad". Yes, the "self-evident truths" of the boomer-truth regime will be reinforced in every propaganda piece - AI is bad, women are holy, consent is arbitrary - but all of this is a finger pointed at someone, and that someone is Telegram.
 
It's fun to see what kind of ghastly shit it spits out when you upload random images. I used these five:

View attachment 1689049 View attachment 1689048 View attachment 1689047 View attachment 1689046 View attachment 1689045

And here's what it came up with:

Ironically the Hitler one is probably the most convincing
 
Something I've been thinking about since I saw this last year: isn't this robot technically a good thing for the camwhores? Like, if your family members or your boss or whoever see that you have nudes out online somewhere, can't you just blame them all on the robot and sweep people's suspicions under the rug? Someone would need to take an extra step to verify whether the nudes are even real if this technology becomes more widespread.

I'm aware that at the moment the excuse of "oh no, the robot did it" is probably not going to fly in most professional environments, but all that technically means is that this tech just needs more exposure.
 
Something I've been thinking about since I saw this last year: isn't this robot technically a good thing for the camwhores? Like, if your family members or your boss or whoever see that you have nudes out online somewhere, can't you just blame them all on the robot and sweep people's suspicions under the rug? Someone would need to take an extra step to verify whether the nudes are even real if this technology becomes more widespread.

I'm aware that at the moment the excuse of "oh no, the robot did it" is probably not going to fly in most professional environments, but all that technically means is that this tech just needs more exposure.
I thought the same thing about celebrities and politicians, who can use it to cover their asses when something leaks. "Nah, that's not me and Epstein and Bill Cosby and Harvey Weinstein and Bill Clinton at Comet Ping Pong pissing on a Belarusian girl, it's a deepfake. Oh no, I'm experiencing invisible harassment and death threats on Twitter from the alt-right, so you know I'm telling the truth!"
 
Look at the stats on DeepNudeBot:

1623226106483.png

3,886,426,376 / 114,265 ≈ 34,012 women per man? Holy shit. Is there some coomer out there on a mission to deepnude and crank one out to every single woman on the entire planet?

Anyway, here's another five; like before, I used Deepnude.to. This set started out really well but kinda just fell apart at the end:

1 Cartman.png 2 Enjoyer.jpg 1623227645952.png 4 Mlady.jpg 5 Chris.jpg

6 Cartman.jpg

7 Enjoyer.jpg

8 Trump.jpg

9 Mlady.jpg

X Chris.jpg
 