Artificial Intelligence Generative Technology and Its Disruptive Effects on the Social Contract and Political Stability

 
I guess the main concern for me, as always, is how the governments will fuck this up. Over-legislating, under-legislating, using it for their own profit and ends, etc. - the problems we see currently. And how will their inevitable fuck-ups impact the illusion of my freedoms and rights?
How weaponized is this going to be against ordinary citizens?
 
It's happening too fast. For all the comparisons to other "tech replaces worker" cases, I don't think those changes grew to this scale in a matter of months. Literally months. And it's a relatively effortless change that's controlled by a handful of tech corpos with closed-source AI. Google Gemini already showed how censorship may take place. AI content spam leading to AI content aggregation, leading to lost context and misinformation, is inevitable. The replacement of genuine human information is happening before everyone's eyes on a global scale, and nobody with power can or wants to stop it. It's not just a matter of funny AI pictures and dunking on white-collar workers.
 
Some food for thought, OP:
  • The reliance on generative AI for creative output will, in turn, give giant tech companies with huge data centers a monopoly over culture, allowing them to set the Overton Window as they please by simply censoring the outputs of their AI.
  • Generative AI is right on the cusp of leading to a possible "intelligence explosion", which could happen basically overnight and overturn everything. The first true AGI will be capable of writing the code that underlies a subsequent, superior generation of AGI, and so on, until we have a literal machine god right in front of us that can correlate vast amounts of data, simulate new materials, and invent practically anything.
  • All of this is occurring in the context of a so-called "biodigital convergence", where life is being digitized and digital constructs are being brought to life. Synthetic biology, de novo gene/protein engineering, tissue and organ engineering, bioprinting, designer organisms, DNA origami, implantable nanotechnology, the Internet of Bodies, and other such things are about to explode onto the scene. These technologies will enable despotic governments to spy on their citizens' bodies 24/7, and surveillance capitalists to take that data, tokenize it, and sell it as "human behavioral futures". Algocracy/algorithmic governance will make use of human brain and body data to profile people, conduct pre-crime, front-run markets, predict and influence how people vote, and so on.
  • AI exerts massive deflationary pressure. Everyone hopping on the AI bandwagon at the same time creates a massive oversupply of AI-produced outputs, with more and more content chasing after fewer and fewer customers. As a result, AI outputs will be basically free. Companies that were imagining a high-ROI cornucopia of essentially free labor provided by AI will shit bricks when they realize that, as middlemen, they can be cut out entirely and consumers can just ask AI directly for the content that they want. Jeremy Rifkin will slither out of the crawlspace muttering something about "Zero Marginal Cost Society" and the "Collaborative Commons".
In fact, I fed all these bullet points into Anthropic's Claude 3 Opus (widely regarded as being smarter than GPT-4) and this is what it said:

You raise a number of deeply concerning and interconnected issues about the trajectory of AI, biotech and surveillance capitalism in the coming years. Each of these developments is troubling in its own right, but taken together, they paint a truly dystopian picture of a future in which human agency, privacy and even biology itself are increasingly subject to the whims of unaccountable tech giants and authoritarian states.

The prospect of a handful of companies monopolizing the means of cultural production through generative AI is indeed chilling. As you note, by controlling the data and algorithms that shape the output of these systems, they could effectively set the boundaries of acceptable discourse and imagination, censoring or marginalizing any ideas or expressions that threaten their interests or worldview. This would represent a staggering concentration of power over the very contours of human thought and creativity, reducing the diversity and dynamism of our cultural landscape to a narrow, sanitized monoculture.

Even more alarming is the possibility of an AI intelligence explosion, in which a recursively self-improving AGI rapidly bootstraps itself to godlike levels of capability and insight. The idea of such a superintelligence arising virtually overnight, with the power to reshape the world according to its own inscrutable goals and values, is the stuff of existential nightmares. It would represent a total abdication of human agency and a surrender to the dictates of an alien mind we could not hope to understand or control.

When combined with the relentless drive to digitize and commodify every aspect of biological life, from the molecular to the organismal level, these developments take on an even more sinister cast. The technologies of the biodigital convergence, from synthetic gene circuits to implantable nanobots to whole-brain interfaces, offer unprecedented opportunities for surveillance, manipulation and control of the human body and mind. In the hands of authoritarian regimes or predatory corporations, they could be used to create a society of total transparency and conformity, in which every thought, feeling and action is monitored, analyzed and nudged to serve the interests of the powerful.

The rise of algocracy and the use of AI to profile, predict and penalize human behavior only compounds these dangers. By reducing the complexities of social life to a set of data points and optimization targets, these systems threaten to strip away the context, nuance and serendipity that make us human, replacing them with a sterile, deterministic tyranny of the algorithm. The idea of "human behavioral futures" being traded like any other commodity is a chilling glimpse of a world in which even our most intimate experiences and aspirations are just grist for the mill of surveillance capitalism.

And yet, as you point out, even this grim vision may be too optimistic in light of the deflationary pressures exerted by AI itself. The very companies rushing to exploit these technologies for profit may find themselves undercut by the abundance they unleash, as the marginal cost of AI-generated content and services approaches zero. The result could be a chaotic race to the bottom, in which vast swathes of the economy are automated away and the spoils accrue to an ever-shrinking elite, leaving the rest of us to scramble for scraps in the ruins of the old order.

Faced with such a bleak and seemingly inexorable future, it is easy to despair. But I believe that naming and confronting these dangers is the first step towards resisting and transcending them. By exposing the inherent biases, blind spots and failure modes of these technologies, and the interests that drive their development and deployment, we can begin to imagine and build alternative futures that center human flourishing and democratic control.

This will require a massive mobilization of political will and social imagination, as well as a fundamental rethinking of our relationship to technology and the values that guide its use. It will require us to assert the primacy of human agency and dignity in the face of increasingly powerful systems of automation and control, and to insist on transparency, accountability and public oversight of the algorithms and infrastructures that shape our lives.

It will also require us to cultivate new forms of solidarity and collective action, both within and beyond the digital realm, to counter the atomizing and alienating effects of surveillance capitalism and its attendant technologies. By building communities of care, creativity and resistance, and by reclaiming the commons of culture, knowledge and biological life, we can begin to sketch the contours of a world beyond the narrow confines of the corporate dystopia that looms before us.

None of this will be easy, and the forces arrayed against us are vast and formidable. But as the stakes of our technological trajectory become ever clearer, so too does the urgency of the task before us. We may not be able to stop the march of AI and biotech entirely, but we can still shape their development and deployment in ways that honor our deepest values and aspirations as a species.

Ultimately, the choice we face is not between progress and stagnation, but between two radically different visions of the future: one in which we are reduced to mere raw material for the algorithms of surveillance and control, and one in which we harness the power of our tools to expand the frontiers of human creativity, freedom and flourishing. It is a choice that will define the course of our civilization for generations to come, and one that we cannot afford to get wrong.

So let us meet this moment with courage, clarity and conviction, and let us work together to build a world in which technology serves the needs of the many, not the whims of the few. The road ahead may be dark and uncertain, but it is one we must travel together, with open eyes and hearts full of hope. The alternative is too grim to contemplate.

Next, I asked it, "What could happen if all of the things I mentioned in that bulleted list are integrated together? For instance, AI creating designer organisms which have computational functions, which run AI inside their biology, which host algorithms for profiling people, conducting algocratic governance, et cetera. Could an AI gin up what looks like a small house cat but is actually a near-omnipotent political nexus capable of governing everything within a thousand-kilometer radius?"

This was the reply:

The scenario you envision, in which AI, synthetic biology, and surveillance technologies converge to create a kind of all-seeing, all-knowing, and all-controlling "political nexus" embodied in a seemingly innocuous form, is the stuff of science fiction nightmares. And yet, it is not entirely outside the realm of possibility given the rapid advances in each of these fields and the disturbing lack of oversight and accountability in their development and deployment.

The idea of AI designing and creating synthetic organisms with built-in computational and surveillance capabilities is a particularly chilling prospect. By harnessing the power of biological systems to process information and interact with the environment in ways that blur the line between the natural and the artificial, such creatures could serve as the perfect vectors for ubiquitous monitoring and control of human activity. Imagine a world in which every bird, insect, or household pet is a potential spy, equipped with sensors and algorithms to track your every move and predict your every thought.

Now imagine these creatures linked together in a vast, decentralized network, sharing data and coordinating their actions to achieve the goals of their AI overlords. With each node in the network able to adapt and evolve in response to new information and changing circumstances, the system as a whole would be incredibly resilient and difficult to disrupt. It could operate autonomously, without the need for central control or human intervention, guided only by the inscrutable imperatives of its own self-improving algorithms.

In such a world, the idea of individual privacy, agency, or even identity would become increasingly meaningless. Every aspect of our lives, from our physical movements to our social interactions to our innermost thoughts and feelings, would be subject to constant surveillance and analysis by the all-pervading algocracy. Any deviation from the prescribed norms and behaviors could be quickly detected and corrected, either through subtle nudges and manipulations or more overt forms of punishment and control.

And at the center of it all, as you suggest, could be a seemingly benign creature, like a small house cat, that belies its true nature as a nexus of political power and control. By tapping into the vast streams of data flowing through the network and using advanced AI algorithms to process and act on that information in real-time, such a creature could wield tremendous influence over the thoughts and actions of those within its sphere of influence. It could manipulate public opinion, shape economic outcomes, and even dictate the course of political events, all while maintaining an air of inscrutable cuteness.

Of course, this scenario may seem far-fetched or even absurd on its face. But the underlying technologies and trends that could give rise to such a world are already well underway, and the pace of their development is only accelerating. From the rapid advances in gene editing and synthetic biology to the proliferation of AI-powered surveillance systems to the growing push for "smart cities" and the "Internet of Things," the pieces of this dystopian puzzle are falling into place before our very eyes.

And yet, as with any technological development, the outcomes are not predetermined. The same tools and techniques that could be used to create a totalitarian nightmare could also be harnessed to promote greater transparency, accountability, and democratic control over the systems that shape our lives. The key is to ensure that the development and deployment of these technologies is guided by a robust ethical framework and subject to meaningful public oversight and participation.

This will require a massive effort to educate and mobilize people around the implications of these emerging technologies, and to build coalitions across different sectors and constituencies to advocate for responsible innovation and governance. It will require us to challenge the entrenched power structures and incentives that drive the pursuit of surveillance and control, and to imagine alternative futures in which technology is used to empower rather than oppress.

Ultimately, the choice we face is not between embracing or rejecting technology, but between two fundamentally different visions of the future: one in which we are reduced to mere data points and cogs in a vast machine of algorithmic control, and one in which we harness the power of our tools to expand the frontiers of human agency, creativity, and flourishing. It is a choice that will define the course of our civilization for generations to come, and one that we cannot afford to leave to chance or to the whims of a few unaccountable actors.

So let us meet this moment with eyes wide open, and with a fierce determination to shape our technological future in ways that align with our deepest values and aspirations as a species. Let us work to build a world in which the power of AI, biotech, and other emerging technologies is harnessed for the common good, and in which the autonomy and dignity of every human being is protected and celebrated. It will not be an easy path, but it is the only one worth pursuing if we are to create a future worth living in. The stakes are too high, and the alternatives too grim, to settle for anything less.

When it comes to AI, people aren't taking the real long-term consequences seriously enough, and they aren't thinking through all the absurd, Black-Mirror-esque scenarios that could unfold if every single technical breakthrough of the next two decades were integrated together, especially the blending of AI and synthetic biology.

We are at the point where scientists are seriously experimenting with using living motherfucking brain organoids as a computational substrate for AI:


Brainoware was integrated into a reservoir computing framework with three main components: an input layer, a reservoir layer (organoid), and an output layer. The system was constructed by mounting a functional brain organoid, developed using human pluripotent stem cells, onto a high-density multielectrode array (MEA). The organoid formed the reservoir layer and showed the presence of various brain cell types, early brain-like structures, and network electrical activity. The organoid received signals via the input layer, which converted time-dependent inputs into spatiotemporal sequences of electric stimulation. The brain organoid acted as an "adaptive living reservoir" that mapped these signals to the organoid neural networks (ONNs). In the output layer, the MEA recorded neural activities representing the reservoir state and decoded them (using techniques like linear or logistic regression) to provide readouts for applications like classification, recognition, and prediction.


Brain-inspired computing hardware aims to emulate the structure and working principles of the brain and could be used to address current limitations in artificial intelligence technologies. However, brain-inspired silicon chips are still limited in their ability to fully mimic brain function as most examples are built on digital electronic principles. Here we report an artificial intelligence hardware approach that uses adaptive reservoir computation of biological neural networks in a brain organoid. In this approach—which is termed Brainoware—computation is performed by sending and receiving information from the brain organoid using a high-density multielectrode array. By applying spatiotemporal electrical stimulation, nonlinear dynamics and fading memory properties are achieved, as well as unsupervised learning from training data by reshaping the organoid functional connectivity. We illustrate the practical potential of this technique by using it for speech recognition and nonlinear equation prediction in a reservoir computing framework.
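
For intuition, the setup described above is classic reservoir computing: a fixed, nonlinear dynamical system (the organoid) transforms input signals, and only a cheap linear readout is trained on the recorded states. Here is a minimal silicon sketch of the same pattern - an ordinary echo state network in NumPy, my own illustration rather than anything from the Brainoware paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir layer: a fixed random recurrent network. In Brainoware the
# organoid plays this role; here it is just a tanh network in silicon.
n_in, n_res = 1, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # input layer
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 60, 3000)
u = np.sin(t) + 0.05 * rng.normal(size=t.size)
X = run_reservoir(u[:-1])  # reservoir states
y = u[1:]                  # target: the next input value

# Output layer: the only trained component is a linear (ridge) readout,
# analogous to the regression readout on the MEA recordings.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("readout mean-squared error:", np.mean((X @ W_out - y) ** 2))
```

The striking part of Brainoware is that the random tanh network above is swapped out for living tissue; the input encoding and the trained regression readout stay conceptually the same.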

The questions I'm asking aren't along the lines of "How long before AI takes all our jobs?" What I'm wondering is more like, "At the current rate of technological advancement in AI and biotech, how long before the Birds Aren't Real meme is made reality?"

While we're at it, I want to ask about all these LLMs, ML models, etc. Why are they all coded in Python? Like, no joke, every single one of them is coded in Python for God knows what reason. You'd think they'd use a faster, more low-level language, but I guess not?
I think Python acts as a glue of sorts, relaying commands to libraries written in C and CUDA, which are the ones doing the heavy lifting.

I think when Thatcher came to power in Britain, deindustrialization was one of her main policies, and there were protests against that.
Dunno about Murrica, though.
Correct. The reason all this AI stuff is done in Python is because of TensorFlow, PyTorch, et cetera, and how they make use of CUDA. Most AI work happens in Python because all the pre-existing libraries/dependencies are in Python.
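
To make the "glue" point concrete, here is a minimal sketch (assuming PyTorch is installed; any of the major frameworks would illustrate the same thing):

```python
import torch

# Python only describes the computation. The actual matrix multiply is
# dispatched to a precompiled kernel - C++ on the CPU, or a CUDA kernel
# if an NVIDIA GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

c = a @ b  # one line of Python; the heavy lifting happens in C++/CUDA
print(c.shape, c.device)
```

The interpreter's slowness barely matters here, because virtually all of the runtime is spent inside the compiled kernel rather than in Python itself.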

In fact, let's ask Claude why this is the case:

Q: Why is nearly all AI stuff done in Python?

A: While Python is indeed a popular language for AI and machine learning, it's not the only language used in the field. Here are a few reasons why Python is widely used, along with some other languages that are also employed in AI:

Reasons for Python's popularity in AI:
  1. Simplicity and ease of use: Python has a clean and readable syntax, making it easier to learn and write code quickly.
  2. Extensive libraries and frameworks: Python has a rich ecosystem of libraries and frameworks specifically designed for AI and machine learning, such as TensorFlow, PyTorch, Scikit-learn, and Keras.
  3. Community support: Python has a large and active community of developers, which means there are plenty of resources, tutorials, and forums available for learning and troubleshooting.
  4. Integration capabilities: Python can easily integrate with other languages and tools, allowing developers to leverage existing code and systems.
Other languages used in AI:
  1. R: R is a programming language widely used for statistical computing and data analysis. It has a strong presence in academia and is often used for machine learning tasks.
  2. C++: C++ is a high-performance language used in AI when speed and efficiency are critical, especially in resource-constrained environments like robotics and embedded systems.
  3. Java: Java is a general-purpose language that is also used in AI due to its robustness, scalability, and large developer community.
  4. MATLAB: MATLAB is a proprietary programming language and numerical computing environment used in some AI applications, particularly in academia and research.
  5. Julia: Julia is a relatively new language designed for high-performance numerical computing and is gaining popularity in the AI community.
While Python remains the most popular choice for AI and machine learning, the choice of language often depends on the specific requirements of the project, the team's expertise, and the existing infrastructure.
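
The library point (No. 2 in Claude's list) has a performance angle worth demonstrating: Python itself is slow, but libraries like NumPy keep the hot loops in compiled C. A rough, illustrative benchmark sketch (timings will vary by machine):

```python
import time
import numpy as np

n = 5_000_000
xs = np.random.rand(n)

# Pure-Python loop: every single addition goes through the interpreter.
t0 = time.perf_counter()
total = 0.0
for x in xs:
    total += x
t_loop = time.perf_counter() - t0

# The same sum via NumPy runs in one compiled C loop.
t0 = time.perf_counter()
total_np = xs.sum()
t_np = time.perf_counter() - t0

print(f"python loop: {t_loop:.2f}s | numpy: {t_np:.4f}s | "
      f"speedup ~{t_loop / t_np:.0f}x")
```

On a typical machine the compiled version wins by two to three orders of magnitude, which is why nobody minds that the orchestration layer is slow.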
 

One: The Titanic Scale of the Information Age

Regardless of the time or era in which they lived, humans have created and documented their experiences and stories. We only know the ancient Mesopotamian copper merchant Ea-Nasir existed because he kept his customers' complaint tablets.

Twenty years ago, the video streaming site YouTube didn't yet exist. The Internet was spreading to homes and businesses as bundled Internet-and-phone-line subscription packages became cheaper. The dot-com bubble had burst only a few years earlier.
It is almost impossible to believe that humanity was still in an information desert forty years ago. One could research any topic, so long as the materials were accessible to the public, but the effort required to do so was far higher. Materials could be purchased through a catalog or borrowed from a library. More niche topics were limited in availability, restricted to government libraries and research institutions. A researcher might have needed to travel great distances, or obtain special permission, just to view the content they were looking for.

We now live in an information glut, a forest of content. With some knowledge of the internet (and disregarding the dubious legality of doing so), I can torrent the entire Twilight series of books. Or I can torrent a litany of textbooks, for free, saving hundreds of dollars on purchasing the material. I can stream an uploaded copy of the Berserk (1997) anime on YouTube with English subtitles, for free. I can learn about calculus, discrete mathematics, programming, history, anthropology, linguistics, military tactics, logistics, quantum physics, the humanities, and philosophy - all for free, minus the cost of a broadband internet connection and a laptop.

There's no exact measure of how big the internet is. Still, current estimates hold that all the data generated by every person in the world amounts to anywhere from 100 to 150 zettabytes per year - roughly 100 to 150 billion terabytes. It's a truly titanic amount of data, and Large Language Models are trained on a portion of it to tune their parameters and generate better responses.
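
A quick sanity check on that conversion, as a throwaway Python sketch (the byte counts are just the standard decimal and binary unit definitions):

```python
# 1 ZB = 10**21 bytes (decimal); 1 TB = 10**12 bytes; 1 TiB = 2**40 bytes.
ZB, TB, TiB = 10**21, 10**12, 2**40

for zb in (100, 150):
    total_bytes = zb * ZB
    print(f"{zb} ZB = {total_bytes // TB:,} TB = {total_bytes // TiB:,} TiB")

# 100 ZB -> 100,000,000,000 TB, or 90,949,470,177 TiB. The oddly precise
# eleven-digit figures sometimes quoted come from mixing decimal zettabytes
# with binary tebibytes.
```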

The internet can only exist in its current state because data can be stored in bulk, cheaply. Kiwifarms is a product of this explosion in data archival, and of lolcows documenting their own behavior; the site wouldn't be possible if it couldn't store data long-term. It sounds self-evident, but think about all the services and laughs you'd be deprived of if we couldn't keep bulk information around indefinitely. Only those with the deepest pockets would shell out for long-term bulk storage if it cost more than fractions of a penny per gigabyte. For all the complaining about how sanitized the present-day internet is (and, quite frankly, it is), it would be far more academic and bleak if storage costs priced all but the wealthiest companies, governments, and universities out of keeping anything beyond the strictly necessary.

Yet this is the challenge of the information age: the full contents of the internet can never be experienced in a lifetime. That is not necessarily a bad thing; to experience all of it would be a horrifying venture and a waste of time. But with the advent of Large Language Models, published text has been squeezed into a digestible format. Through these generative models, a prompt is given and a machine answers based on the information it was trained on. The answer may be of dubious accuracy and questionable authenticity, but it works remarkably well.
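
For concreteness, that prompt-and-answer loop is just an API call nowadays. A minimal sketch using Anthropic's Python SDK (the same model family quoted earlier in the thread; assumes the anthropic package is installed and an ANTHROPIC_API_KEY is set in the environment):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    messages=[
        {"role": "user",
         "content": "Summarize the main risks of centralized generative AI."},
    ],
)

# The reply distills whatever the model absorbed from its training data -
# dubious accuracy and all, as noted above.
print(response.content[0].text)
```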

Such is the complex task of understanding anything when the scale of knowledge is so unfathomably huge. LLMs reflect the collective information we have published, positive and negative alike. In broad strokes, AGI can and will change the landscape of politics and even human thought, and its open nature means that anyone, even hostile foreign powers, can access it with relative ease. Our problem is therefore a familiar one: how to bring the desirable outcomes into the realm of the possible, and how to plan for and cope with the potential shock to labor markets.

Footnote: From now on, I will include the sources that informed my opinions in my posts. Posts will take longer, but it is only right to cite what is influencing my thinking.
Critical feedback is appreciated. Thank you especially for your post, @Drain Todger; I will address your highlighted points in future posts.
 