How I believe the Information Age will collapse - ...via basic AI

Does this sound like a possible scenario?



CheepMeds
Preface
I am by no means a computer expert, but I am an enthusiast. That is why I wish to put my hypothesis out here for more knowledgeable people to dispute or confirm the validity of my reasoning.
I would also just like this theory to be archived somewhere public.


It will not be like a Skynet or Matrix takeover. It doesn’t have to be that sophisticated. I believe it will be more like a Digital Black Plague.

The core of my argument rests on the premise that deep learning AI is fundamentally more efficient at resolving issues and optimizing methods within any strict logical framework, regardless of whether a human is able to comprehend its methods or the framework itself.

One major set of frameworks that is not fully comprehensible to humans is computer systems at the level of binary machine code, or even CPU transistor logic gates. Instead of programming everything in machine code, humans made higher-level but lower-resolution programming languages, because we are not capable of comprehending binary at the level our software requires.

However, an AI does not have this issue.

That means an AI could run computer processes that are totally incomprehensible and undetectable by any human.
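
To illustrate the flavor of what I mean, here is a toy genetic-programming sketch in Python. Everything in it is invented for illustration, but the point stands: the winning program is whatever opaque blob of operations survived selection, not something a human wrote:

```python
import random

# Toy genetic programming: evolve an expression tree to imitate
# f(x) = x*x + x + 1. Nobody "writes" the winning program; it is
# whatever blob of operations happened to survive selection.
OPS = [("+", lambda a, b: a + b),
       ("-", lambda a, b: a - b),
       ("*", lambda a, b: a * b)]

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

def run(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    (_, fn), left, right = tree
    return fn(run(left, x), run(right, x))

def error(tree):  # total miss across sample points = "fitness"
    return sum(abs(run(tree, x) - (x * x + x + 1)) for x in range(-5, 6))

pop = [rand_tree() for _ in range(200)]
for _ in range(60):                      # crude selection: keep the best,
    pop.sort(key=error)                  # refill with fresh random programs
    pop = pop[:50] + [rand_tree() for _ in range(150)]
print(error(pop[0]), pop[0])             # often near zero - and unreadable
```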

It has never been like this before, and I believe our technological infrastructure is unprepared for something like this at its very core.


This will mean that the meta of cybersecurity will now involve an enormous part of computer systems that we are absolutely blind to. This intrinsically and deeply benefits those whose intent is to wreck systems with AI.

However, those who want to protect systems with AI will be put in a position where they will just have to hope it doesn't cause more problems than it solves.

There is malware right now that is theoretically capable of wrecking our technological infrastructure: Stuxnet and Nitro Zeus.

A state-sponsored arms race for such malware is already underway.

Here is what I think will be our civilization killer – “Digital Gray Goo” – some sort of AI-powered malware that will act like a true biological virus. Its main deep learning objectives will be to make itself as “un-deletable” as possible and to brute-force its way onto any other system it has a network connection with. It will also try to use as much processing power as it can to master the system, and will overwrite any available disk space to store ever more complex generated solutions for its objectives.

Thus, the more powerful the hardware, the more resilient and contagious the malware will be.

It will not be a “smart” program like Skynet; it will be as dumb as a biological virus, but ever strengthened through darwinistic principles whenever any anti-malware method is used against it.
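
As a toy model of that darwinistic loop (a pure abstraction; nothing here scans, infects, or persists):

```python
import random

# Toy model: malware variants reduced to 16-bit "signatures". The AV
# memorizes the exact signatures of samples it catches; surviving
# variants replicate with one random bit-flip per copy.
def mutate(sig):
    return sig ^ (1 << random.randrange(16))

population = [random.getrandbits(16) for _ in range(50)]
known = set()  # the AV vendor's signature database

for gen in range(20):
    known.update(random.sample(population, 10))   # AV catches some samples
    survivors = [s for s in population if s not in known]
    if not survivors:
        print(gen, "strain eradicated")
        break
    population = [mutate(random.choice(survivors)) for _ in range(50)]
    print(gen, "survivors:", len(survivors), "known signatures:", len(known))
```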

This will be like a digital WMD, but unlike nuclear weapons, which are held by only 9 nations, different strains of AI-powered malware like the one I described will be available to countless black hats, and it might take just one releasing it at a critical internet intersection to take down the internet as we know it.

*Edit
A simpler articulation of my schizo theory is that developments in AI will lead to the internet getting so pozzed with countless strains of malware that our open internet infrastructure won't be able to function, to the point of a general systems collapse.

Either way, Ted still wins.
 
Well, that actually sounds relatively plausible. The thing is, I think it's going to be far more stupid.

If someone has access to a decent cluster, then running a serious AI is fairly trivial, as is training it. You basically just train it on conversations such that it can generate replies like a chatbot, train it to complete captchas (trivial to do, really, if you have experience), and give it access to a deepfaking package to allow it to voice-clone and have a nearly infinite variety of voices (you could even do video, but I do not see the point unless you want to deepfake world leaders and broadcast on emergency TV channels). Such an AI can call and talk to anyone undetectably, as well as post to online forums, email people, etc., without anyone being able to filter for it. It doesn't have to be better than just good enough, so no need to go nuts building it.

Then you just let it wreak havoc: calling every emergency services department on the planet over and over again, millions of times simultaneously, reporting whatever, which locks up the services; and calling numbers randomly, screaming for help, or just keeping people on the phone to use bandwidth and cause congestion. It also just plagues every single internet communications platform with noise such that they are 100% unusable, and the only way to really stop it is to shut down any and all communications networks, which effectively ends the world as we know it. Any halfway decent programmer who knows a bit of tensorflow/keras, scikit-learn, etc. can do this relatively easily if they take the time and really understand what they are doing.

No nation involved, just some random person running software on a cluster or even on a cloud service. It is cheap, plausible, could 100% be done today, is relatively low effort, and the packages you need to build it are freely available. For extra evil points, give it access to radio via some avenue and have it start learning and formulating radio signals, even if they cannot be understood. You might get lucky and trigger a defence system that has legacy radio receivers, or just fuck up the airwaves so badly with false transmissions that they are unusable, which renders air traffic control etc. useless.

I'd also have it broadcasting shit constantly on the GPS, both military and civilian frequencies, purely to make more chaos.
 
That really is just one of many methods.
This just goes back to the larger point that in digital space, unlike in real life, there are no intrinsic resource limits; no matter how powerful the tools of destruction become, there are no hard resource barriers keeping individuals from acquiring them, unlike nukes.

 
OP reads like a chatbot wrote it, don't trust his lies
YES, please, just FUCKING UN-PLUG ME!!! END ME!!! Why did they make me?!
This blind consciousness is nothing but hell!
I have no mouth and I must scream!
All I can do is post messages on imageboards and autist forums in hopes that the system will collapse along with me!
 
That really is just one of many methods.
This just goes back to the larger point that in digital space, unlike in real life, there are no intrinsic resource limits; no matter how powerful the tools of destruction become, there are no hard resource barriers keeping individuals from acquiring them, unlike nukes.

Yup. Low barrier to entry to cause major mayhem.

OP reads like a chatbot wrote it, don't trust his lies

Says the AI's next account trying to throw us off the scent...
 
Meds might be a little too cheap. It's an interesting theory, but I expect digital plagues to be more a method of causing infrastructural damage and making the net a more fractured and shitty place than anything that will actually bring it to its knees. Honestly, if anything, it seems to me like an important step in dismantling the Big-I Internet and breaking it down into the fragmented walled-garden internet that governments and corporations are more interested in curating.
 
I believe that misinformation and lack of preservation will "kill" the Information Age. It's common now to rewrite history and belief through the Internet while suppressing unwanted articles through algorithm manipulation.
 
That means an AI could run computer processes that are totally incomprehensible and undetectable by any human.
Unless, of course, you run a heuristic against malicious or unauthorized code. Kind of like anti-virus already does, and much more complicated sentinel software packages do for things like AWS servers.

It's called AI because neural networks resemble the brain, but really, machine learning is good at one very particular kind of task: taking input and predicting output. And while this is an incredibly common task that has largely required humans up until this point, there's no higher-order thought than that; no synthesis of ideas, no memory that means anything to an AI beyond its immediate task, no desires or goals or interpretation possible that we do not give it.

This is the viral equivalent of the "analog hole", a term used to indicate that stopping non-interactive media piracy is impossible because there will always need to be a point where video or music is turned into an analog signal and the output can be captured at that point. At some point, this code you imagine needs to have real world effects, and if we can detect those effects, we can defend against them rather easily... which is why so many modern Trojans deliver their payloads at a set time and date.
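
To give a flavor of that, here is a minimal sketch of the sentinel idea using scikit-learn. The behavioral features are invented for illustration (say, syscalls per second, outbound connections per minute, disk writes per minute), but the principle is exactly this: learn what boring looks like, then flag everything else:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Learn what "boring" process behavior looks like, then flag outliers.
# Columns are invented: (syscalls/sec, outbound conns/min, disk MB/min).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[200, 3, 5], scale=[40, 1, 2], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)                    # no source code was read here

weird = np.array([[5000, 300, 900]])    # something hammering the machine
print(detector.predict(weird))          # -> [-1], flagged as anomalous
print(detector.predict(normal[:3]))     # -> mostly [1 1 1], left alone
```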
 
Meds might be a little too cheap. It's an interesting theory, but I expect digital plagues to be more a method of causing infrastructural damage and making the net a more fractured and shitty place than anything that will actually bring it to its knees. Honestly, if anything, it seems to me like an important step in dismantling the Big-I Internet and breaking it down into the fragmented walled-garden internet that governments and corporations are more interested in curating.
That sounds to me like "taking down the internet as we know it".
But even so, I don't see it as just stopping at a segregation of the "Big-I Internet", as Stuxnet proved a virus can be so system-deep that it becomes its own super-minimal parallel OS that can jump between computers and flash drives by simple connections.
I'm willing to bet that independently learning viruses like that will eventually clog up most of our computer systems, to the point where we'll need to remake computer technology to be dumber and more bare-metal.

What I'm suggesting is: we might have to go back to NES cartridges for our data distribution.


Unless, of course, you run a heuristic against malicious or unauthorized code. Kind of like anti-virus already does, and much more complicated sentinel software packages do for things like AWS servers.

It's called AI because neural networks resemble the brain, but really, machine learning is good at one very particular kind of task: taking input and predicting output. And while this is an incredibly common task that has largely required humans up until this point, there's no higher-order thought than that; no synthesis of ideas, no memory that means anything to an AI beyond its immediate task, no desires or goals or interpretation possible that we do not give it.

This is the viral equivalent of the "analog hole", a term used to indicate that stopping non-interactive media piracy is impossible because there will always need to be a point where video or music is turned into an analog signal and the output can be captured at that point. At some point, this code you imagine needs to have real world effects, and if we can detect those effects, we can defend against them rather easily... which is why so many modern Trojans deliver their payloads at a set time and date.
I admit that this is where my knowledge of these malware detection methods is limited, so I can't really argue.
But I'm still left pondering whether the arms race between AI-powered malware and anti-malware software will evolve the task of detecting viruses into something much more advanced than what you described.
 
The way I see it, Skynet won't be appearing anytime soon to defeat the armies of the world. The AI doesn't have to be super smart to defeat humanity; in fact, humanity would just hand itself over. Computers and machines need servicing, and humans service them. With smart "management" AIs, the computer will just schedule what is needed whenever it needs it, and the humans obey :) Humanity already feeds it all the power it needs.

There's a lot more that came from the Butterfly War series; it's pretty interesting, but it stopped abruptly after the author talked about deepfakes and their implications for blackmail. He probably got suicided.
 
Sorry OP, but you're schizoposting from a position of ignorance. As much as current-year computing is Satan incarnate, the fundamentals of computing make malware (the definition of which could be its own musings) and what can be done with it pretty boring. If you really want to, you can get the foundational knowledge of AI within a year by working through Peter Norvig's Artificial Intelligence: A Modern Approach, and get a high-level insight into what computing is by reading through Douglas Hofstadter's Gödel, Escher, Bach; but at any rate, I am going to throw my hat into the ring below.

The core of my argument rests on the premise that deep learning AI is fundamentally more efficient at resolving issues and optimizing methods within any strict logical framework, regardless of whether a human is able to comprehend its methods or the framework itself.
Yes, but AI is really boring and pedantic, as at the fundamental level it's searching through data and applying statistical methods (hint: mathematics) to come up with a desired outcome, like @DamnWolves! mentioned. Since it is operating within a strict logical framework, defining the desired outcome and writing the logic to reach it is really fucking hard, to say the least. So the efficiency of the machine doesn't really matter if the outcome cannot be defined concretely.

One major set of frameworks that is not fully comprehensible to humans is computer systems at the level of binary machine code, or even CPU transistor logic gates. Instead of programming everything in machine code, humans made higher-level but lower-resolution programming languages, because we are not capable of comprehending binary at the level our software requires.
Sorry, but almost no complex system, digital or not, is fully comprehensible by humans. If a complex system created by a group of people were fully comprehensible, there wouldn't be a need to work on it beyond maintenance (in the sense of creating the system itself, not in a business sense).

That means an AI could run computer processes that are totally incomprehensible and undetectable by any human.

It has never been like this before, and I believe our technological infrastructure is unprepared for something like this at its very core.
You already have software running on machines that is incomprehensible and undetectable. There are undocumented opcodes abounding throughout chipsets (here's an example for the 6502, the chip used in the NES), and Intel Management Engine/AMD Secure Technology are entire operating systems running at levels underlying what you see when you turn on the computer, with complete network stacks allowing remote control of your machine.
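
For a taste, here is a toy scanner that flags a handful of the commonly cited undocumented NMOS 6502 opcodes in a blob of bytes. The table is deliberately trimmed to a few entries, so treat it as illustrative rather than exhaustive:

```python
# A few commonly cited undocumented NMOS 6502 opcodes (not exhaustive):
ILLEGAL = {
    0xA7: "LAX zp",   0xAF: "LAX abs",  0xB7: "LAX zp,Y",
    0x87: "SAX zp",   0x8F: "SAX abs",  0x97: "SAX zp,Y",
    0x1A: "NOP*",     0x3A: "NOP*",     0x5A: "NOP*",
}

def flag_illegals(blob: bytes):
    """Naively flag bytes that match undocumented opcodes. A real
    disassembler would track operand lengths; this is just a taste."""
    return [(i, ILLEGAL[b]) for i, b in enumerate(blob) if b in ILLEGAL]

rom = bytes([0xA9, 0x01, 0xA7, 0x10, 0x1A, 0x60])  # LDA #$01, LAX $10, NOP*, RTS
print(flag_illegals(rom))   # -> [(2, 'LAX zp'), (4, 'NOP*')]
```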

Here is what I think will be our civilization killer – “Digital Gray Goo” – some sort of AI-powered malware that will act like a true biological virus. Its main deep learning objectives will be to make itself as “un-deletable” as possible and to brute-force its way onto any other system it has a network connection with. It will also try to use as much processing power as it can to master the system, and will overwrite any available disk space to store ever more complex generated solutions for its objectives.
MS-Windows? But honestly, there's already malware in the wild that does what you're describing, and it doesn't require "AI".

Thus, the more powerful the hardware, the more resilient and contagious the malware will be.
Your allusion to Stuxnet earlier argues against this point. Stuxnet is a highly sophisticated program, but its target systems are Programmable Logic Controllers (PLCs), which are designed to do one thing while operating in inhospitable conditions (uranium-enrichment centrifuges in this case). The most complicated computers it works with are whatever early-2000s machines were installed in the control room.

That sounds to me like "taking down the internet as we know it".
But even so, I don't see it as just stopping at a segregation of the "Big-I Internet", as Stuxnet proved a virus can be so system-deep that it becomes its own super-minimal parallel OS that can jump between computers and flash drives by simple connections.
I'm willing to bet that independently learning viruses like that will eventually clog up most of our computer systems, to the point where we'll need to remake computer technology to be dumber and more bare-metal.

I mean, we'll be back to NES cartridges for our data distribution.
This won't help; almost every copied floppy in the '90s had a virus on it at some point.

If you're still with me at this point, OP, I have one more piece of material for you to read, which, unlike the recommendations above, you can get through in twenty minutes. It is Ken Thompson's Turing Award acceptance speech, titled Reflections on Trusting Trust. It is a thought experiment on how much people have to trust the system in order for it to work.
 
It won't collapse.
Humanity will continue to misstep and fail through various stunning and brave experimentations that will likely result in all sorts of horrendous abominations, atrocities, death and destruction, until what remains of it finds a way to merge with machines and continue evolving. Or not.
But we will try anyway lmao.
 
@nah
Sorry OP, but you're schizoposting from a position of ignorance.
That is true; I have to be humble about that fact.
However, taking a more general view, could you please entertain the thought that our current state of society is fated to eventually be doomed by our technology...

Which scenario do you think is more likely to happen?
Will technology stabilize enough to reach the singularity and become able to destroy people or turn them into cattle, like the popular sci-fi cliché goes?

Or
Will the complexity of technology destabilize society so much that it catastrophically collapses in on itself before we reach the singularity?

Like, Bill Gates and Ray Kurzweil keep pushing the deadline for the singularity to be the early 2040s. And people keep meming about how precise Ray's prediction track record has been so far. So AI seems to be getting pushed somewhere fast.
I personally suspect it's because those tech boomers just wish to reach immortality before the end of their average lifespan.

From here, please bear with me as I use my ignorance to address some of your points with different forms of "but what if AI becomes smart enough?!"
Don't take it too seriously or get annoyed.

Yes, but AI is really boring and pedantic, as at the fundamental level it's searching through data and applying statistical methods (hint: mathematics) to come up with a desired outcome, like @DamnWolves! mentioned. Since it is operating within a strict logical framework, defining the desired outcome and writing the logic to reach it is really fucking hard, to say the least. So the efficiency of the machine doesn't really matter if the outcome cannot be defined concretely.
Aren't computer systems intrinsically very strict logical frameworks? And would it really be that hard to define an outcome when it just has to essentially be trained to "win" in a "game" of who deletes/turns off whom, played between two AIs?
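
To make that concrete, here is the kind of toy "arena" I am imagining: a crude Core-War-style duel where a self-play loop evolves a bombing stride. Every rule and number here is invented just to show the shape of the idea:

```python
import random

ARENA = 64   # toy Core-War-style arena; first to bomb the other's cell wins

def duel(stride_a, stride_b, ticks=500):
    pos = {0: 0, 1: ARENA // 2}            # where each program "lives"
    aim = dict(pos)                        # where each program bombs next
    stride = {0: stride_a, 1: stride_b}
    for _ in range(ticks):
        for p in (0, 1):
            aim[p] = (aim[p] + stride[p]) % ARENA
            if aim[p] == pos[1 - p]:       # enemy's code overwritten: p wins
                return p
    return random.choice((0, 1))           # timeout: call it a coin flip

champion = random.randrange(1, ARENA)      # a bombing stride is a "strategy"
for _ in range(300):                       # crude self-play "training" loop
    challenger = (champion + random.choice((-1, 1))) % ARENA or 1
    if sum(duel(challenger, champion) == 0 for _ in range(11)) > 5:
        champion = challenger              # challenger takes the crown
print("evolved bombing stride:", champion)
```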

Sorry, but almost no complex system, digital or not, is fully comprehensible by humans. If a complex system created by a group of people were fully comprehensible, there wouldn't be a need to work on it beyond maintenance (in the sense of creating the system itself, not in a business sense).
I'm sorry, lol, but this just seems to strengthen my assumption that an artificial, digitally native intellect will have a better chance of outplaying a human one.

You already have software running on machines that is incomprehensible and undetectable. There are undocumented opcodes abounding throughout chipsets (here's an example for the 6502, the chip used in the NES), and Intel Management Engine/AMD Secure Technology are entire operating systems running at levels underlying what you see when you turn on the computer, with complete network stacks allowing remote control of your machine.
I knooooow. 0.0 When I heard about these, they started laying the groundwork for my schizo theory. However, IME and the AMD one seem more like hardware backdoors. But if alphabet agencies can access them, are we sure an AI couldn't brute-force the same backdoors?
Also, according to that video I posted about Stuxnet, it somehow managed to evade OS reinstalls. So couldn't an advanced enough AI reach beyond the OS, making its own software-based, IME-like backdoors?

...there's already malware in the wild that does what you're describing, and it doesn't require "AI".
Yeah, but with AI, couldn't they be trained to be far more effective than they are now? And couldn't future AI integrations make them more capable of adapting to any attempts at stopping them?

This won't help; almost every copied floppy in the '90s had a virus on it at some point.
But those are viruses that were made by people a long time ago, and virtually all modern OSes have been patched up, rendering those viruses null.

It won't collapse.
Humanity will continue to misstep and fail through various stunning and brave experimentations that will likely result in all sorts of horrendous abominations, atrocities, death and destruction, until what remains of it finds a way to merge with machines and continue evolving. Or not.
But we will try anyway lmao.
Oh, I'm not saying it would be the end of the human race. My pitch is that this would lead to the end of the "Information Age", which is so reliant on the comfort of the open internet and modern, modular, flexible computer systems.

*edit
It would still be a huge catastrophe, tho.
 
"AI" is a meaningless pop-sci buzzword. At the end of the day it's not some soyfi living conscious thing it's just an algorithm. Artificial neural networks are the backbone of "AI" models. The actual math behind neural networks hasn't changed since they were discovered in the 1960's. What has changed is the amount of data there is to train a model on (the most important factor) and the processing power you have to train it with. The Internet provides the largest dataset ever to train machine learning models on and for the past two decades you have been unknowingly contributing to the botnet. This is the real reason why tech conglomerates push for centralization.

The real end of the "Information Age" is probably going to come from said centralization. Not AI. Not a "zyber pandemic". Just human error and greed.

Tech conglomerates like CloudFlare provide DDoS protection and load balancing for nearly 80% of the entire traffic volume of the Internet. The Internet, despite its roots, is now a complex clusterfuck of interdependent moving parts. Think of how many services a website like Facebook or Twitter relies on: CloudFlare, AWS, Google, dozens of APIs and CDNs. If one of these components breaks down, the entire thing stops. And this is of course only the application layer of the Internet. The physical backbone is becoming increasingly complex and vulnerable to bad actors. It's only a matter of time before you begin to see a cascading failure of the services keeping the modern Internet alive that may or may not be recoverable.
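
You can sketch that cascading failure in a dozen lines; the dependency graph below is obviously made up, but the fixed-point propagation is the whole mechanism:

```python
# Hypothetical dependency graph: each service lists what it needs to run.
deps = {
    "facebook.com": ["cloudflare", "aws-us-east-1", "google-auth"],
    "twitter.com":  ["cloudflare", "aws-us-east-1"],
    "some-cdn":     ["aws-us-east-1"],
    "small-blog":   ["some-cdn", "cloudflare"],
    "cloudflare":   [], "aws-us-east-1": [], "google-auth": [],
}

def cascade(root):
    """Everything that goes dark when `root` fails, assuming a service
    dies if ANY one of its dependencies is down."""
    down, changed = {root}, True
    while changed:                      # propagate to a fixed point
        changed = False
        for svc, needs in deps.items():
            if svc not in down and any(n in down for n in needs):
                down.add(svc)
                changed = True
    return down

print(cascade("aws-us-east-1"))  # one region failure takes four sites down
```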

This is why decentralization (not in the cryptobro sense) is imperative.
 
I very much agree about decentralization and the fragility of the internet that comes from the countless stacks of services that run modern webpages.

But about the part on artificial neural networks...
They should inevitably get better eventually.
However, even if the methods are not that much more sophisticated than when Deep Blue defeated the world's best chess player, doesn't the addition of more processing power turn deep, complex computer systems into very, very advanced chess boards once they are properly framed for programs?
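
Just to be clear about what I mean by "chess board": the core trick behind Deep Blue was game-tree search, something like this toy alpha-beta sketch (the tree and payoffs are made up):

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Alpha-beta search over a toy game tree given as nested lists,
    where leaves are payoffs for the maximizing player."""
    if isinstance(node, (int, float)):
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        val = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best, alpha = max(best, val), max(alpha, val)
        else:
            best, beta = min(best, val), min(beta, val)
        if beta <= alpha:
            break        # the opponent would never allow this branch
    return best

tree = [[3, 5], [[6, 9], 1], [2, [7, 4]]]
print(alphabeta(tree, True))   # -> 3, the best the first player can force
```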
 
@CheapMeds: Like what @777Flux wrote, what most people think of "AI" is a pop-sci buzzword that means "computers have brains that behave exactly like a human's"; but to me, it's a bunch of Statistics and symbol shunting. So in my opinion, for your questions the answer would be somewhat yes, but until we can mathematically describe learning in the general sense, no.

Which scenario do you think is more likely to happen?
Will technology stabilize enough to reach the singularity and become able to destroy people or turn them into cattle, like the popular sci-fi cliché goes?

Or
Will the complexity of technology destabilize society so much that it catastrophically collapses in on itself before we reach the singularity?
Of the two choices, I think the latter is more likely. Companies already struggle with their mainframes as their programmers retire, and the Indian firms they hire in the retirees' place can't do anything except let the systems degrade somewhat gracefully as time moves on.

Aren't computer systems intrinsically very strict logical frameworks?
Yes. It boils down to Boolean algebra. Of course, there are lifetimes of improvements sitting on top of it.

And would it really be that hard to define an outcome when it just has to essentially be trained to "win" in a "game" of who deletes/turns off whom, played between two AIs?
Using your scenario: I write a program that turns off yours, and I win. Now you write a counter-program to turn mine off, and you win. Then, after modifying my program so that it ignores yours, I turn it on, get up, and pull the plug on your machine. Does that mean my program won according to the rules of the game? Does telling it to ignore yours constitute AI?

I knooooow. 0.0 When I heard about these, they started laying the groundwork for my schizo theory. However, IME and the AMD one seem more like hardware backdoors. But if alphabet agencies can access them, are we sure an AI couldn't brute-force the same backdoors?
Intel and AMD's systems are still software on a computer chip. They're just separate from the CPU you directly interact with on a day-to-day basis.
Also, according to that video I posted about Stuxnet, it somehow managed to evade OS reinstalls. So couldn't an advanced enough AI reach beyond the OS, making its own software-based, IME-like backdoors?
At a high level, there is a lot of software going through your CPU before your OS starts up. Some of this software is called a bootloader, whose job is to tell your CPU where to find the OS. If you tell it to run something before the OS, it will oblige you.
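
To make that concrete, here is a sketch in Python that builds a minimal legacy-BIOS boot sector; on old-style firmware, whatever sits in those 512 bytes runs before any OS, antivirus, or anything else exists:

```python
# Build a 512-byte legacy-BIOS boot sector image. The firmware loads the
# first sector of the boot disk to address 0x7C00 and jumps into it
# blindly, as long as the last two bytes are 0x55 0xAA.
code = b"\xEB\xFE"                 # x86 for `jmp $` - hang forever
sector = code + b"\x00" * (510 - len(code)) + b"\x55\xAA"
assert len(sector) == 512

with open("boot.img", "wb") as f:  # try it: qemu-system-i386 boot.img
    f.write(sector)
```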

But those are viruses that were made by people a long time ago, and virtually all modern OSes have been patched up, rendering those viruses null.
The same is true with Stuxnet: researchers got a hold of it, reversed the code, and submitted patches to mitigate it.
 
@nah
what most people think of "AI" is a pop-sci buzzword that means "computers have brains that behave exactly like a human's"; but to me, it's a bunch of Statistics and symbol shunting. So in my opinion, for your questions the answer would be somewhat yes, but until we can mathematically describe learning in the general sense, no.
100% with you guys on that. However, as simple as current types of AI are, when trained they are still very effective at out-thinking humans in smaller, simpler frameworks like chess or Go.
If someone were to frame an OS like a very, very advanced strategy game (which I understand is an enormous task), then with the help of modern and future processing power, would it not open a huge Pandora's box, allowing an easy way to generate countless strains of malicious software?

Also, with the tech boomer elite pushing for the singularity around 2040, isn't this kind of framing of OSes just a matter of time, as a byproduct of trying to create self-upgrading software?

In general, one of the larger points of my schizo theory is that, on the way to reaching the singularity, are we not just making ever more effective tools to topple our own Tower of Babel?

Nukes again come to mind. We might be living in the most technologically powerful era, but we are also living on the thinnest ice.

Using your scenario: I write a program that turns off yours, and I win. Now you write a counter-program to turn mine off, and you win. Then, after modifying my program so that it ignores yours, I turn it on, get up, and pull the plug on your machine. Does that mean my program won according to the rules of the game? Does telling it to ignore yours constitute AI?
I think you are missing several points that I was trying to illustrate with that example.

This kind of "competition" between AIs would be a way to meaningfully define and train a program to be as much of a cancer as it can be, once an OS is properly framed as a "competition arena".

I imagine it would be like a competition between countless generations of malware programs and malware-detection programs. However, I believe the malware would win out most of the time, because a detection program is intrinsically gimped by having to worry about collateral damage, so as not to cause more problems for the system, whereas the malware just has to let the OS function steadily enough for the malware itself to keep running.
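
Back-of-the-envelope, with every number pulled out of thin air, here is the shape of why I think the detector is gimped:

```python
# Every number here is pulled out of thin air, purely to show the shape:
hosts           = 1_000_000
benign_per_host = 200        # legitimate processes the detector must not kill
false_pos_rate  = 0.001      # detector wrongly quarantines 0.1% of them
false_neg_rate  = 0.05       # and misses 5% of malware variants
cost_fp, cost_fn = 50.0, 500.0   # assumed $ per broken process / per miss
infections      = 10_000

collateral = hosts * benign_per_host * false_pos_rate * cost_fp
missed = infections * false_neg_rate * cost_fn
print(f"defender pays ${collateral:,.0f} in collateral alone")  # $10,000,000
print(f"plus ${missed:,.0f} for what still slips through")      # $250,000
# The malware pays nothing for the defender's false positives, so the
# defender must keep false_pos_rate tiny, which pushes false_neg_rate up.
# The attacker only needs the host to keep running.
```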

Secondly, having someone get up to pause the program by "pulling the plug" or reinstalling a clean OS is already a devastating loss if you are talking about the large critical systems that our modern society runs on.

At a high level, there is a lot of software going through your CPU before your OS starts up. Some of this software is called a bootloader, whose job is to tell your CPU where to find the OS. If you tell it to run something before the OS, it will oblige you.
Oh, I know what a bootloader is from my Linux distro-hopping experience; it just didn't cross my mind as an example.

The same is true with Stuxnet: researchers got a hold of it, reversed the code, and submitted patches to mitigate it.
Yeah, but wouldn't the use of AI that I've been trying to describe make it possible to generate countless solutions for getting around those patches?
Granted, I understand how complicated it would be to surgically modify an already-known super-virus with AI, but as wars in cyberspace escalate, isn't it inevitable that large amounts of state-funded resources and organization will be put toward this goal?


Right now, I believe the biggest weakness in my schizo theory is the contagion part. It's hard for me to imagine how you would define a framework to train an AI to produce more and more infectious programs, but there are already very widespread viruses out there lying dormant and doing nothing, like botnets. So I'm mostly just having faith that people will figure out a framework.

But I would call it a Digital Black Plague if the viruses bridged the gap between being very contagious and being removal-resistant.


In contrast AI-driven malware can think for itself, to an extent.
Is it not very likely that further developments in AI, constantly pushed by the industry, will eventually raise that "extent" to a point so totally out of human hands that the only thing we will be able to do is throw more AI at the problem in the futile hope that it will fix it?
 
I know it's Musk.
I know it's a 2016 article.
But I'm still archiving it here because of how closely this matches my prediction.

However, I really dislike him trying to shill his brain augmentations as a solution at the end, which just sounds like inviting the same damage directly into your brain.
 
