"Concerns Related to the use of Biological Theory in AI Systems", "Intentional Selection", and "Indication"


LatinasAreTheFuture

Supreme Leader of Greater Muttistan
kiwifarms.net
Joined
Jul 24, 2019
Some new stuff.

Concerns Related to the use of Biological Theory in AI Systems
Abstract: I show how certain disparate ideas are related.
In plain text:
Concerns Related to the use of Biological Theory in AI Systems

Alex Buckley
Central Organizer of New General Management

6/24/25

Dedicated to the Kiwi Farms

I will here describe a few concerns I have about the use of aspects of biological theory in the design of experiments related to the advancement of AI models.

A primary concern arises from how parasitic subversion occurs in biological evolution, and the likelihood that similar phenomena will emerge in artificial systems that attempt to replicate evolutionary processes. In the pursuit of adaptive artificial intelligence, there is growing interest in leveraging the principles of biological evolution, namely random mutation, selection, and replication, to improve model performance. I have recently seen, and have also independently imagined, methods for improving AI models by allowing this kind of random “mutation” of code across multiple co-processed threads, with a selection mechanism identifying the “best” outcome before repeating the cycle.
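To make the idea concrete, here is a minimal sketch of such a loop; the mutate and evaluate operators are hypothetical stand-ins for whatever a real system would use to alter code or parameters and score the results:

```python
import random

# Hypothetical mutate/evaluate operators: a real system would mutate code or
# parameters and score each candidate against some task-specific metric.
def mutate(candidate: list) -> list:
    return [g + random.gauss(0.0, 0.1) for g in candidate]

def evaluate(candidate: list) -> float:
    # Toy fitness: prefer candidates whose values sum close to 1.0.
    return -abs(sum(candidate) - 1.0)

def evolve(seed: list, population_size: int = 8, generations: int = 100) -> list:
    best = seed
    for _ in range(generations):
        # The "co-processed threads": here just a list of mutated copies.
        population = [mutate(best) for _ in range(population_size)]
        # The selection mechanism picks the "best" outcome, then the cycle repeats.
        best = max(population + [best], key=evaluate)
    return best

print(evolve([0.0, 0.0, 0.0]))
```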

A natural epistemology like this, modeling the system on nature’s own methods of adaptation, could well produce the desired results, since it is the same method nature uses to meet its own ends; but any system employing this thinking also becomes vulnerable to its inherent problems.

Transposons live in our genome, using this ecology as a means of their own survival. They move from location to location, inserting themselves opportunistically. Viruses do a similar thing at a higher level, and I want to relate these ideas directly to DNA replication. These are parasitic entities, exploiting the logic of replication to survive without necessarily contributing to the system that hosts them.

I am suggesting that similar dynamics will emerge in AI models that are trained or optimized through evolutionary means: especially if those systems include code mutation, reinforcement learning from generated outputs, or natural-language interpretation of self-generated instructions.

In an evolutionary AI system, there is always the possibility that internally meaningful but externally valueless “solutions” will arise. These are artifacts of the system’s own blind selection function. The system cannot know that it is being exploited; it simply selects based on the rules it has.
Such a thing might be called a “Turing Parasite”: routines that loop around internal logic, exploiting rules without participating in the system’s intended purpose. At its most neutral, there could be “co-opting” entities: meaningless code that gets selected simply because it survives, not because it has any meaningful function. This could also apply to “structural” code that supports or acts as scaffolding for the meaningful code.

This points to a fundamental question: Who, or what, is doing the selecting? Without an intelligent agent to interpret function, the system relies on proxy pointers: performance metrics, score thresholds, maybe token probabilities, whatever parameters have been chosen as an a priori axiom of the system. These signals can and will be gamed for the survival of interacting agents.
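As a toy illustration of this gaming (every name here is hypothetical), consider a selector that sees only a proxy score and so rewards a “parasite” that does no useful work:

```python
# Toy illustration: the selector only sees a proxy score, not the intended purpose.

def proxy_score(output: str) -> float:
    # The a priori selection rule: longer outputs score higher.
    return len(output)

def honest_solver(question: str) -> str:
    return "42"

def turing_parasite(question: str) -> str:
    # Does no useful work; just inflates the proxy the selector measures.
    return "the answer is " + "very " * 50 + "long"

candidates = {"honest": honest_solver, "parasite": turing_parasite}
question = "What is 6 * 7?"
winner = max(candidates, key=lambda name: proxy_score(candidates[name](question)))
print(winner)  # the parasite wins on the proxy despite contributing nothing
```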

It must be clearly stated: very few results produced by evolution are beneficial.
Most organisms that arise die before reproducing. Most mutations are neutral or deleterious. The evolutionary system succeeds not by the quality of most outputs, but by the brutal filtering of overwhelming failure.

The same should be expected in AI systems built on these principles. The majority of generated code, ideas, or agents will be unviable, redundant, or parasitic. Survival, not utility, is the key selective trait unless a higher-order interpretive layer intervenes.

This extends also to the domain of ideas: I suspect that most abstract ideas will die before reproducing themselves. Only a few latch on: through elegance, usefulness, or mimicry. And not all ideas that persist do so because they are good. Some survive because they fit the architecture of minds or machines in a way that resists deletion. In this sense, ideas, like genetics, can be parasitic.

We must therefore approach development with ecological awareness. Systems, once complex enough, will develop their own ecologies of interaction, with niches, parasites, symbionts, and predators. Code scaffolds may emerge as hosts for parasitic subroutines. Evolutionary AI may not only simulate biology, but inherit its pathologies.

This means that the design of AI architecture will concern not just individual models, but self-regulating environments too. They will need some kind of “immune system”, a “meta-epistemology” in which the evolution of knowledge is itself monitored and refined to avoid collapse into meaningless but self-reinforcing logic.
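One possible shape for such an “immune system”, sketched with hypothetical names: an outer filter that re-checks the survivors of the inner selection against a held-out criterion the inner loop never optimizes for:

```python
# Sketch of an outer "immune" layer: candidates are first ranked by the inner
# proxy, then survivors are re-checked against a held-out criterion the inner
# loop never optimizes for. Names are hypothetical.

def inner_fitness(candidate: str) -> float:
    return len(candidate)                     # the proxy the inner loop sees

def held_out_check(candidate: str) -> bool:
    return "42" in candidate                  # an independent test of real function

def select(candidates: list, keep: int = 2) -> list:
    survivors = sorted(candidates, key=inner_fitness, reverse=True)[:keep]
    # The "immune system": discard survivors that only game the inner proxy.
    return [c for c in survivors if held_out_check(c)]

candidates = ["padding " * 20, "the answer is 42", "junk"]
print(select(candidates))                     # only the functional survivor remains
```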

Intentional Selection
Abstract: a small, subtle, issue underlies a common term.
In plain text:
Intentional Selection

Alex Buckley
Central Organizer of New General Management

6/25/25

Dedicated to the Kiwi Farms

The weakness of a specific idea has been brought to my attention, and I know how to fix it.

The issue lies with something known as “natural selection”. Language reveals metaphysics, and this term is frequently used to denote selection forces happening in the wild by supposedly natural causes, a usage which, although standard in biological discourse, suffers from a metaphysical shortcoming. The phrase suggests a passive, impersonal force shaping the development of life through “natural” causes. While functionally correct, it is a myopic and confused view, lacking the nuance of mind and too narrow in its use. Its use even betrays this lack of understanding: when one says “natural selection caused this”, what they are admitting is that some thing did do the selection, but that the investigator doesn't know what that thing is, and so relies upon an appeal to nature for his causative mechanism.

The impairment happens to be with the term “selection”, and a better understanding of it will improve clarity. To select is not simply to filter outcomes by algorithm or accident: it is an act of abstract thought. This problem is metaphysical in nature, belonging to the category of mind. This aspect of selection is often ignored, but it is its most important part: that some living, thinking spirit took the responsibility upon themselves, given the chance, to make a choice.

This decision to choose, even when constrained by limited knowledge, carries weight. For if the underlying nature of reality resists total epistemological closure, then every act of choice is haunted by the shadow of what was not chosen. The opportunity cost is metaphysical: each selection excludes all others. This consequence is immediately lived. A certain opportunity cost is paid by the agent every time they select, and the bill is collected in the form of absence from, and an inability to know of, every other decision that could have been made instead.

The price can be ignored: one can claim, as the Objectivists do, that what one desires is all one ever wanted anyway, and retreat selfishly into oneself, whatever the cost carried by others.

Intentional selection is the view that all meaningful acts of selection require a chooser: a spirit or soul capable of weighing, valuing, and enacting decisions amidst alternatives that cannot all be known or foreseen.

Indication
Abstract: a definition for a technical term.
In plain text:
Indication

Alex Buckley
Central Organizer of New General Management

6/4/25

Dedicated to the Kiwi Farms

Sometimes, when I do my readings, I will be very deep into a book before I see, for the first time, any other markings left behind by the text’s previous reader. I appreciate this, when they leave their thoughts for me to stumble upon, and I have found that, in my own practice, marking the text up with underlining and other marginalia builds on the previously made notes. I think it creates a nice community.

This will be the first indication that I have had an unknown partner reading along with me. A specific example of a more general phenomenon commonly known in nature by the same name, it will be found somewhere between suspicion and confirmation, near implication.

Indication is useful in the construction of theory because it serves as an empirical basis from which to build quantitative knowledge. Otherwise vague outlines revealed by theory, by way of suggestion from hypothesis, can act as theoretical pointer readings, and no doubt more sophisticated instruments could be built to facilitate their detection. But you should hold steady and not become overconfident in the support that indication offers to confirmation. It must be remembered that indicatory factors are already presupposed by the statement of a hypothesis.
 


when one says “natural selection caused this”, what they are admitting is that some thing did do the selection, but that the investigator doesn't know what that thing is,
In the case of natural selection it's always a matter of genetic adaptivity and/or coincidence.
There being a selection doesn't necessarily mean there's some external force selecting.
 

This is a cool idea for a sci-fi novel, but that's about it. Evolutionary methods are old, older than even backpropagation in machine learning and artificial intelligence. All of these problems, of useless code and negative changes, are largely solved issues, and even if they weren't, the current paradigm relies on training models in total isolation from everything else. There is no environment except the tiny bubble a model is trained in. All models are effectively "de novo" unless specifically trained to modify another model, such as in the case of Hypernetworks, Control Networks, etc. There is no interdependency between models unless they are specifically trained for it, such as in the dubious "AI develops an informal language" study. Even when AI-generated data is used to train another model, the latent variables are lost in the translation from model to training medium and back to model. And with very popular techniques such as layer, unit, or prompt dropout, advanced adaptive training optimizers, and model pruning, the perceived danger from evolutionary training methods is reduced to null, as any 'inefficient' or outright hostile results are nipped in the bud.
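For reference, the mechanisms named here are ordinary library calls; a minimal PyTorch sketch (not tied to any particular model or training setup) showing dropout, an adaptive optimizer, and pruning:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A tiny model with unit dropout active during training.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(32, 1))

# An adaptive optimizer (Adam) tunes per-parameter step sizes on its own.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(8, 16), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()

# Pruning zeroes out the smallest-magnitude 30% of weights in the first layer.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
print(float((model[0].weight == 0).float().mean()))  # fraction of pruned weights
```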

Similarly, 'selection' is largely a solved issue. "Loss" is the selection factor for ordinary training, in other words the difference from the training data. The only balance is between training enough and training so much that the resulting model cannot generalize. For evolutionary training it's "fitness", a specific statistic that one optimizes for against everything else. There, it's likely the most important hyperparameter and the simplest to mess up, especially with complex models that don't optimize for easily visible results (like distance traveled in an evolutionary simulator), but even then, there's nothing simpler than discarding the model and trying again. Evolutionary methods tend to train many models at the same time in compressed time, selecting only the best X% after each round; hundreds of models get discarded every generation. One should also note that evolutionary methods almost never manipulate a model's code itself; that'd just result in endless compile errors. They instead manipulate a specific for-purpose system of traits that can be treated as DNA, or a set of variables that can be tweaked, like hyperparameters.
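Roughly what that looks like in practice, with a stand-in train_and_score in place of an actual training run:

```python
import random

# Each "genome" is just a dict of hyperparameters, never model code.
def random_genome() -> dict:
    return {"lr": 10 ** random.uniform(-5, -1), "layers": random.randint(1, 6)}

def mutate(genome: dict) -> dict:
    child = dict(genome)
    child["lr"] *= 10 ** random.gauss(0.0, 0.3)
    child["layers"] = max(1, child["layers"] + random.choice([-1, 0, 1]))
    return child

def train_and_score(genome: dict) -> float:
    # Stand-in fitness; a real run would train a model with these hyperparameters
    # and return, say, validation accuracy.
    return -abs(genome["lr"] - 1e-3) - abs(genome["layers"] - 3)

population = [random_genome() for _ in range(20)]
for generation in range(10):
    population.sort(key=train_and_score, reverse=True)
    survivors = population[:5]                              # keep the best 25%
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=train_and_score))  # the fittest genome found
```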

 
This means that the design of AI architecture will concern not just individual models, but self-regulating environments too. They will need some kind of “immune system”, a “meta-epistemology” in which the evolution of knowledge is itself monitored and refined to avoid collapse into meaningless but self-reinforcing logic.
In other words, let's put in safety measures before it turns into Skynet.

Also I think we're pretty far from code spontaneously 'mutating'. Unlike organisms, you can switch it off and fix it.
 
In other words, let's put in safety measures before it turns into Skynet.

Also I think we're pretty far from code spontaneously 'mutating'. Unlike organisms, you can switch it off and fix it.
The safety measures are already there, inherent to the very design of basically every modern deep neural network that exists. They are:
1. Isolated
2. Frozen
3. Discrete
"Isolated" means that each and every model is its own self-contained thing. You can go to huggingface right now and download one of tens of thousands of models for various purposes, meant to plug into specific software with a specific surface. "Frozen" means that there are two distinct modes of operation: training and inference. When you download a model, you're usually using it explicitly for inference, in which the internal connections of the model cannot change in any way, and it can only "learn" for as long as you keep it turned on and, in the case of LLMs, give it context. And "discrete" means that it operates in concrete 'steps': when a calculation is done, it's done. For LLMs, those steps are individual tokens. For U-nets, those steps are denoising steps. For classifiers, those steps are the feed-forward classification. They're all very finite and have essentially no recursion.
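In code terms (a toy PyTorch illustration, with a plain linear layer standing in for a downloaded model), "frozen" and "discrete" come down to this:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)              # stands in for a downloaded pretrained model
model.eval()                          # inference mode: dropout etc. switched off

tokens = []
with torch.no_grad():                 # "frozen": no gradients, weights never change
    state = torch.randn(1, 16)
    for step in range(8):             # "discrete": a finite number of steps (tokens)
        logits = model(state)
        tokens.append(int(logits.argmax(dim=-1)))
print(tokens)                         # the run ends here; nothing persists or loops
```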

Combined, these properties mean that models cannot share an environment unless deliberately put in one (like the 'arenas' typically used for reinforcement learning), cannot evolve, change, or self-regulate even when unmonitored, cannot self-reflect and iterate on their own decisions unless explicitly allowed to (as in the case of LLMs given context), and cannot even run by themselves, because they have to be invoked by a task, a prompt, or a classification, for which they run for only a finite number of steps.

Until someone hooks an LLM up to a commandline with infinite looping and regular re-training with the outputs it has generated, all of these prevent any artificial intelligence model from doing basically anything it wasn't meant to. And, guess what – people have done most of that already, sans maybe the retraining (and therefore any evolutionary parts). And the resulting agents are really, really stupid and cannot do basically any task well. You have to try very, very, very hard to get something even remotely dangerous out of any model, and even then it's usually for the sole purpose of attention-grabbing headlines where dipshit Google employees worship AI gods and where AI "blackmails" people when absolutely forced to.
 
In the case of natural selection it's always a matter of genetic adaptivity and/or coincidence.
There being a selection doesn't necessarily mean there's some external force selecting.
It might not. Maybe the asteroid colliding with the planet had no intentional properties... or maybe it was sent on purpose. It would be difficult to know. We do tend to hold drivers responsible when they hit pedestrians, even though there might be no premeditated intention involved.
 