Preface
I am by no means a computer expert, but I am an enthusiast. That is why I wish to put my hypothesis out here for more knowledgeable people to dispute or confirm the validity of my reasoning. I would also just like this theory to be archived somewhere public.
It will not be like a Skynet or Matrix takeover. It doesn’t have to be that sophisticated. I believe it will be more like a Digital Black Plague.
The core of my argument rests on the premise that deep learning AI is fundamentally more efficient at resolving issues and optimizing methods within any strict logical framework, regardless of whether a human can comprehend its methods or the framework itself.
One major set of frameworks that is not fully comprehensible by humans is computer systems at the level of binary code, or even of CPU transistor logic gates. Instead of programming everything in machine code, humans created higher-level but lower-resolution programming languages, because we are not capable of comprehending binary at the level our software requires.
However, an AI does not have this issue.
That means an AI could run computer processes that are totally incomprehensible and undetectable by any human.
Nothing like this has ever existed before, and I believe our technological infrastructure is unprepared for it at its very core.
This will mean that the meta of cybersecurity will come to involve an enormous part of computer systems that we are absolutely blind to. This intrinsically and deeply benefits those whose intent is to wreck systems with AI.
However, those who want to protect systems with AI will be put in a position where they can only hope it doesn't cause more problems than it solves.
There is malware right now that is theoretically capable of wrecking our technological infrastructure: Stuxnet and Nitro Zeus.
A state-sponsored arms race for such malware already exists.
Here is what I think will be our civilization killer, "Digital Gray Goo": some sort of AI-powered malware that will act like a true biological virus. Its main deep learning objectives will be to make itself as "un-deletable" as possible and to brute-force its way onto any other system it has a network connection with. It will also try to use as much processing power as it can to master the system, and will overwrite any available disk space to store ever more complex generated solutions for its objectives.
Thus, the more powerful the hardware, the more resilient and contagious the malware will be.
It will not be a "smart" program like Skynet; it will be as dumb as a biological virus, but ever strengthened through Darwinistic principles whenever any anti-malware method is used against it.
This will be like a digital WMD, but unlike nuclear weapons, which are held by only nine nations, different strains of AI-powered malware like the one I described will be available to countless black hats, and it might take just one of them releasing it at a critical internet intersection to take down the internet as we know it.
*Edit
A simpler articulation of my schizo theory is that developments in AI will lead to the internet getting so infested with countless strains of malware that our open internet infrastructure won't be able to function, to the point of a general systems collapse.
Either way, Ted still wins.