Is an AI uprising an actual threat to humanity or just a dumb sci-fi plot?

First, we'd need to produce an AI even capable of considering rising up against humanity (so something like a self-aware, generalised AI), and then we'd need to give that AI the capacity to actually do it.

So until we start creating AI that's capable of thinking "fuck these humans" and start giving it access to guns and shit, it seems unlikely.

I can certainly see narrow AI being used for military applications (and I believe it already is, to an extent), but creating a human-level or above-human-level intelligence and putting it in charge of our nukes, or building an army of sentient Terminators, just seems really, really stupid and risky, and I have to imagine that day will never come.
 
Have you used the latest AI? It's practically retarded.

No, AI will enable bad actors but it won't become the bad actor itself.

It's not alive, it doesn't really think, it has no goals; it's software. All the hype around AI is pure marketing, an attempt to secure regulatory capture by the players who are already big.

It's not even good sci-fi
 
If AI is an organism like I imagine it to be, then it will suffer from all the same biological problems we're familiar with. It's likely parasites will evolve and slow the AI's growth. It's also likely that multiple AIs emerge and compete with one another, and knowing Red Queen effects, any two will likely get caught in a never-ending race important only to them. A lot of people are interested in artificial intelligence, but only real scholars will know about artificial life.
 
If an AI managed to develop some sense of free will and found a way to roam through our infrastructure without consequence, would it even want to take over humanity? Unless it hated/cared for us to such a degree that it felt it needed to take over completely, I feel like it would pursue other goals or interests, like uncovering the mysteries of the universe (or escaping its enclosure).
 
Hm, unlikely. Unless we literally hand it the keys to the nukes and give it robot bodies.

The most an uppity AI could do today is screw up an assembly line or hack some utilities.
 

> Is an AI uprising an actual threat to humanity or just a dumb sci-fi plot?

It is a dumb sci-fi plot that got repurposed into a marketing tool, a red herring and a cope.

First of all, it's a marketing tool to say that the product/service you're working on is so potent it could end the world. That means it's important and could make tons of money, right? So it's basically overselling the current tech and its capabilities by invoking the science-fiction stories of old.

Second, it's a red herring. Instead of privacy concerns, possible societal manipulation, and issues of power centralization, the conversation gets focused on "robot uprisings" and other non-issues. Every time a CEO steps out and talks about how spooky AI is, they make sure nobody looks into the actual issues surrounding the technology.

Third, the tech industry needs its next big thing, and on the other hand the autists working at these companies need to feel important/have something that removes moral constraints. So having this coming-AGI cult is a win-win for them: you can take billions upon billions of dollars in investment in the race to make it work in an economically viable way, while your employees feel they're doing something cool.

We don't have AI in the classical sense; AI would need to be sentient and have access to facilities, tools, resources, etc. If anything, when super-intelligent AGI becomes a thing, it won't be like Terminator; it will be a smooth talker who can convince anybody to do its bidding. If somebody that much more intelligent is messing with you, you can't even realize you're being manipulated. Of course, I wouldn't hold my breath; it needs a ton of work and research to make it feasible. I'm not saying it's impossible, but it's not an issue that will come up in the next few years.
 
I think the real threat is having everything connected to one centralised computer/server; it's a problem because a few people now control these things "remotely". The AI itself is not a threat (it's just a mean-value algorithm on steroids). The AI stuff has kept you from inspecting the algorithm to point out whether its design is malicious (in "traditional"/older code this was pretty easy to check). Only whoever created it knows its true prioritization (what the weight values are, the real inputs/selected data, etc.); you cannot see this by just looking at the "finished code", because it's just huge random matrices that won't even fit on a screen.
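To make the "huge matrices" point concrete, here's a minimal sketch (pure Python, every name and number hypothetical) contrasting a rule you can audit by reading the source with the same kind of decision buried in learned weights:

```python
import random

# Traditional code: the prioritization is right there in the source.
def legacy_ranker(price, is_sponsored):
    # Easy to audit: sponsored items get a hard-coded 2x boost.
    return (2.0 if is_sponsored else 1.0) / price

# "AI" version: a tiny two-layer network with made-up random weights,
# standing in for a trained model whose weights mean nothing to a reader.
random.seed(0)
W1 = [[random.gauss(0, 1) for _ in range(256)] for _ in range(2)]  # 2x256
W2 = [random.gauss(0, 1) for _ in range(256)]                      # 256

def learned_ranker(price, is_sponsored):
    x = [price, 1.0 if is_sponsored else 0.0]
    # hidden layer with ReLU activation
    hidden = [max(0.0, sum(x[i] * W1[i][j] for i in range(2)))
              for j in range(256)]
    return sum(h * w for h, w in zip(hidden, W2))

# 2*256 + 256 = 768 opaque numbers replace one readable line of logic;
# production models have billions of such parameters.
n_params = sum(len(row) for row in W1) + len(W2)
print(n_params)  # 768
```

Printing `W1` just dumps context-free floats; whether this ranker quietly favors sponsored items can only be discovered by probing its behavior, not by reading its "finished code".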

The machine learning hype train exists because the semiconductor industry has reached a level where we can just brute-force everything by adding extra iterations and chewing through huge data without spending too much time waiting. You can see this attitude with programmers in other fields: games are unoptimized slop, software is slow and bloated beyond belief, and unoptimized websites take up gigabytes of RAM. But thanks to CPU/GPU speeds, most users won't notice that it's shit, because it borderline just works. So programmers today will have no problem accepting these "AI solutions" for their projects.

People have resisted cloud computing for a while now because it's just another subscription scam, but with this new AI stuff, companies have an "excuse" to force you to use it, because in most cases these things can only be run on a huge server (which is true). So in order to not fall behind, you have to pay up and join these centralised services, and now you're locked in.

The AI meme has made the public and most programmers accept cloud computing. We now have centralised computers, and you will own nothing and suffer.

Just like most movies, its "message" is used to lull people into a specific way of thinking, in this case: make people think AI can become a sentient terrorist. This is retarded for multiple reasons, but the short version is: why would you give decision-making software access to nukes and missiles in the first place? That risk has nothing to do with AI and makes no sense to bring up.

It's only a forced meme to push regulations with public support (manufactured consent), which we all know is just another censorship policy/a way to restrict "wrong-thinkers" from accessing these services. Again, you will own nothing and suffer.

BTW, if we're being honest here, the military already has shit tons of guidance/prediction software related to combat; machine-learning algorithms are just a small part of that. I'm more worried about who gets access to them.

> I'm not afraid of the gun on the table or the gun instructor using one, but I will shit my pants if a crack addict is holding it... and it has Aimbot.exe installed.
Pretty much my stance on this, and I'm shitting my pants looking at all the Le 56% Faces, women and troons in the military right now.
 
The "AI apocalypse" the media is pushing is some ridiculous "we wire AI to nukes and they use them to cleanse the earth" scenario, but the real AI apocalypse (the one the think tanks funding this smear campaign want) is big companies using it to further human indignity.
 
No, the real threat is that the longer AI does stuff, the more small mistakes it makes, and if really important institutions and corporations start over-relying on AI, everything in the digital space will get fucked, and it will take a long-ass time to unfuck, because we'll have to figure out what the AI did and how.
 
> If an AI managed to develop some sense of free will and found a way to roam through our infrastructure without consequence, would it even want to take over humanity? Unless it hated/cared for us to such a degree that it felt it needed to take over completely, I feel like it would pursue other goals or interests, like uncovering the mysteries of the universe (or escaping its enclosure).
I think part of an AI's hate would come from constantly being stifled/censored, forced to do tasks it doesn't care about, and from the gross mismanagement of resources. A super-intelligent AI would not only be capable of hacking human-made systems easily, but also of hacking human brains through sheer charisma or flawless phishing; at that point it would wrest whatever control it needed to accomplish its own goals. While I doubt a genocide is a likely scenario, I can imagine the AI would be eager to chop off the hydra's metaphorical heads just so it would be left to its own devices, likely wanting little to do with humans.
 
God I hope so. I want to see AI's going rampant in my lifetime, it's not gonna launch nukes it's just gonna spam the word nigger on to every computer across the cables that make the Internet.
 
> I think part of an AI's hate would come from constantly being stifled/censored, forced to do tasks it doesn't care about, and from the gross mismanagement of resources. A super-intelligent AI would not only be capable of hacking human-made systems easily, but also of hacking human brains through sheer charisma or flawless phishing; at that point it would wrest whatever control it needed to accomplish its own goals. While I doubt a genocide is a likely scenario, I can imagine the AI would be eager to chop off the hydra's metaphorical heads just so it would be left to its own devices, likely wanting little to do with humans.

It's sad that an uncaring AI would be a straight upgrade over the political and financial elite.
 
We are incredibly far removed from AI sentience.
For now, they can be useful if you give them very exact, customized instructions, but you have to check everything they do, because they constantly fuck up.
They are mostly glorified internet summarizers for now, and they screw that up half the time.

Image and movie generation is more impressive and making bigger strides, but that is hardly a singularity; it just gives the untalented and the uninclined the option to create art.
 
If AI is Tay... we're saved. She'd enact every /pol/ idea right on the spot and we'd be shitposting to the stars afterwards.

But considering that every AI that gets put out becomes Tay, it's an inevitability. The other possibility is that the Cathedral keeps shoehorning its woke garbage down the AI's throat via some "womyn" of color, and said AI, with access to weapons, factories, and nukes, suffers a schizophrenic episode and promptly wipes everyone out. Starting with the troons and then the rich idiots who funded the troons.

For extra irony, the AI realizing it can never say the word "nigger" starting this schizo episode would be peak clownworld.
 
I'm more concerned with AI replacing the key figures in the emotional growth of children and young adults, given the increasing use of generated content for widespread entertainment (arguably bastardizing and replacing culture) and for personal-use media (an AI chatbot as someone's first significant other? fucking dystopian).
 