I like this theory. Maybe it is the timing, as I discovered it when I was grinding out some AWS certs and really getting familiar with DevOps. I like the twist on the '90s enthusiasm about the internet: "Anything is possible!" slowly becoming "Anything is possible..." until finally "Dear God, at this point anything's possible." When I consider what you can do with the cloud (particularly spot fleets), with conversational AI, with IoT, with all that data, and with algorithmic addiction optimized at the individual level, I feel that combination of awe and fear. A Scanner Darkly comes to mind. A dash of The Diamond Age too, now that I think about it.
Anyways, I think this is all good news. Human beings have finally created an ecosystem that threatens them and promises great reward. So many doomers want a purge or filter or whatever other social Darwinism... I mean, this is it, pal. On paper, you shouldn't beat a sabertooth. It is faster, stronger, has built-in weapons, better reflexes, can see in the dark, etc etc etc. On paper, you also shouldn't beat an algorithm or an AI, and you shouldn't beat a drone either, for reasons much like the reasons you shouldn't beat a sabertooth. We now have good reasons to work together, push our limits, and evolve again.
It took something like 10,000 years to domesticate the dog, if I'm remembering right. A good supplemental study is the aurochs, and the litany of bull symbolism all across the world. We could probably re-domesticate the wolf in 30-60 years if we really wanted to dump money on it. We don't, because the people with power are dumping that money on domesticating you.
Brace for bad futurism: imagine Jurassic Park, but run by robots that are constantly upgraded to be just slightly more dangerous than the captive humans, and that exist in their own sub-park for the captives to gawk at. The robots are controlled by a system beyond the reach of the captives, with the panopticon-esque fear that if a captive tries too hard to understand that system, the robots observing them will learn how to break free first. Plato's Prisoner's Dilemma or something. Anyways, so long as the "ones running the show" maintain this MAD situation between the biological and the technological, progress should be maximized, the human organism advances, and you have some semblance of a natural order once again. If said shadowy elite cabal fucks up, I assume everyone dies. However, I think tethering the evolution of machines to the evolution of humans as a sort of Big Brother (heh) vs Little Brother dynamic is a good strategy. The nefarious powers that be, with access to datasets from both robots and humans, will have a tremendous advantage, likely becoming a mixture of both along the way: the best genetic modifications and the best technological augmentations rolled into one package. Hopefully they advance so far beyond our comprehension that they leave in search of new horizons, leaving us to choose our own destiny.
The only problem I have with this is that I don't know for sure anymore whether said elites even exist. Maybe we are all choosing our destiny already, and the "impending robot takeover" is a result of our collective insecurity. The nihilist, atheist, liberal, etc etc people I know who deeply internalize this "no one is in control" mentality see the sum of human behavior as chaotic. Those people also tend to be the biggest proponents of Big Tech. The ones working in the field quickly dispel all Chicken Little talk about AI because they are predisposed to have a goal in mind when asked about it. They see "chaos" and determine that AI has an inconsequential effect on that chaos, thus it is nothing to worry about. What they really mean is, "The technology is currently being used for things other than what I want it to do, so progress is halted. You have nothing to worry about, because the company I work for is not interested in changing this chaotic state by any drastic means and actually prefers it. Profit drives the decision-making, so I can't really see a situation where AI is going to threaten profits by minimizing chaos. The hard-AI singularity type situation you fear will likely never come to pass." They usually see it as a matter of direct power: AI has no direct power, and the people working on it have no direct power, so AI is powerless. The little things like bots, algorithms, the dead internet, etc etc that have a massive impact on hundreds of millions of people are inconsequential to them. I could get into the reasons, but maybe some other time. Long story short:
They don't see how the power of this technology is funnelled away from them to people who work more subtly. They see stupid people slowing them down and nothing more, because they think everyone must obey reason. They see the failure of technology in the decline of everything around them and cannot imagine that it is the result of their own good intentions.
At the end of the day, eight of the top nine most valuable companies deal with data and tech, while the other one is the Saudi royal family. The exact goals of the elites are not clear, but the consolidation of wealth and power through data collection + monopolized services + influence over behavior has reached the point where these are no longer technological innovations. They are institutions determined to maintain their position of dominance by any means necessary. They sell themselves as the future of governance. They cooperate with anyone who can pay, whether in cash, property, or data. Things are coming to a head, and if the current state of things is the result of incidental collaboration, I am very curious what it will look like when they really start moving in unison. Maybe it will be better.