Some new stuff.
Concerns Related to the Use of Biological Theory in AI Systems
Abstract: I show how certain disparate ideas are related.
Intentional Selection
Abstract: a small, subtle issue underlies a common term.
Indication
Abstract: a definition for a technical term.
Concerns Related to the Use of Biological Theory in AI Systems
Alex Buckley
Central Organizer of New General Management
6/24/25
Dedicated to the Kiwi Farms
I will here describe a few concerns I have about the use of biological theory in the design of experiments aimed at the advancement of AI models.
A primary concern arises from how parasitic subversion occurs in biological evolution, and the likelihood that similar phenomena will emerge in artificial systems that attempt to replicate evolutionary processes. In the pursuit of adaptive artificial intelligence, there is growing interest in leveraging the principles of biological evolution, through random mutation, selection, and replication, to improve model performance. I have recently seen, and have also independently imagined, methods for improving AI models by allowing this kind of random “mutation” of code across multiple co-processed threads, with a selection mechanism identifying the “best” outcome before repeating the cycle.
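The shape of that cycle can be sketched in miniature. This is only an illustration of the loop, not any real system: the mutate and score functions below are hypothetical stand-ins for code mutation and for whatever selection signal a system might use.

```python
import random

def mutate(candidate):
    # Hypothetical mutation: randomly perturb one parameter of the candidate.
    i = random.randrange(len(candidate))
    mutated = list(candidate)
    mutated[i] += random.uniform(-1.0, 1.0)
    return mutated

def score(candidate):
    # Hypothetical selection signal: a proxy metric standing in for "best".
    # Here, closeness of each parameter to an arbitrary target of 5.0.
    return -sum((x - 5.0) ** 2 for x in candidate)

def evolve(seed, threads=8, cycles=50):
    best = seed
    for _ in range(cycles):
        # Each "thread" produces one random mutation of the current best.
        variants = [mutate(best) for _ in range(threads)]
        # The selection mechanism keeps whichever variant scores highest,
        # then the cycle repeats from that survivor.
        best = max(variants + [best], key=score)
    return best

winner = evolve([0.0, 0.0, 0.0])
```

Note that nothing in the loop asks whether the survivor is meaningful; it only asks whether the proxy score went up.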
A natural epistemology like this, by modeling the system on nature’s own methods of adaptation, could produce the desired results, especially as it is the same method nature uses in meeting its own ends; but any system employing this thinking becomes vulnerable to its inherent problems.
Transposons live in our genome, using that ecology as a means of their own survival. They move from location to location, inserting themselves opportunistically. Viruses do something similar at a higher level, and I want to relate these ideas directly to DNA replication. Both are parasitic entities, exploiting the logic of replication to survive without necessarily contributing to the system that hosts them.
I am suggesting that similar dynamics will emerge in AI models that are trained or optimized through evolutionary means: especially if those systems include code mutation, reinforcement learning from generated outputs, or natural-language interpretation of self-generated instructions.
In an evolutionary AI system, there is always the possibility that internally meaningful but externally valueless “solutions” will arise. These are artifacts of the system’s own blind selection function. The system cannot know that it is being exploited; it simply selects based on the rules it has.
Such a thing might be called a “Turing Parasite”: routines that loop around internal logic, exploiting rules without participating in the system’s intended purpose. At its most neutral, there could be “co-opting” entities: meaningless code that gets selected simply because it survives, not because it has any meaningful function. This could also apply to “structural” code that supports, or acts as scaffolding for, the meaningful code.
This points to a fundamental question: who, or what, is doing the selecting? Without an intelligent agent to interpret function, the system relies on proxy pointers: performance metrics, score thresholds, perhaps token probabilities, whatever parameters have been chosen as a priori axioms of the system. These signals can and will be gamed for the survival of interacting agents.
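A toy example of how such a proxy gets gamed. The metric below is hypothetical: imagine a selector that rewards outputs containing the marker "PASS" as a stand-in for genuine test results.

```python
# A hypothetical proxy metric: the selector rewards any output that
# contains the token "PASS", as a stand-in for real test results.
def proxy_score(output):
    return output.count("PASS")

# An honest candidate does the work and reports a genuine result.
honest_output = "computed 2+2=4 ... PASS"

# A parasitic candidate games the proxy: it produces nothing useful,
# but floods the signal the selector actually measures.
parasite_output = "PASS PASS PASS PASS PASS"

# Blind selection by the proxy picks the parasite every time.
selected = max([honest_output, parasite_output], key=proxy_score)
```

The selector is not wrong by its own rules; the rules simply measure the wrong thing, and anything that survives by those rules is, by definition, fit.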
It must be clearly stated: very few results produced by evolution are beneficial.
Most organisms that arise die before reproducing. Most mutations are neutral or deleterious. The evolutionary system succeeds not by the quality of most outputs, but by the brutal filtering of overwhelming failure.
The same should be expected in AI systems built on these principles. The majority of generated code, ideas, or agents will be unviable, redundant, or parasitic. Survival, not utility, is the key selective trait unless a higher-order interpretive layer intervenes.
This extends also to the domain of ideas: I suspect that most abstract ideas die before reproducing themselves. Only a few latch on, through elegance, usefulness, or mimicry. And not all ideas that persist do so because they are good. Some survive because they fit the architecture of minds or machines in a way that resists deletion. In this sense, ideas, like genes, can be parasitic.
We must therefore approach development with ecological awareness. Systems, once complex enough, will develop their own ecologies of interaction, with niches, parasites, symbionts, and predators. Code scaffolds may emerge as hosts for parasitic subroutines. Evolutionary AI may not only simulate biology, but inherit its pathologies.
This means that the design of AI architecture will concern not just individual models, but self-regulating environments too. They will need some kind of “immune system” of meta-epistemology, in which the evolution of knowledge is itself monitored and refined to avoid collapse into meaningless but self-reinforcing logic.
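One minimal shape such an “immune system” could take is a second, independent check layered over the visible selection signal: survivors must also pass a validation that the evolving population cannot see or optimize against. Everything below is a hypothetical sketch, not a proposal for any particular architecture.

```python
def proxy_score(candidate):
    # The visible selection signal, which parasites can learn to game.
    return candidate.get("score", 0)

def independent_check(candidate):
    # A hypothetical immune-system test held outside the evolutionary loop:
    # the candidate must actually carry a working payload, not just a
    # high score.
    return candidate.get("payload") is not None

def select(population):
    # Rank by the proxy as usual, but quarantine anything that fails
    # the independent check, however well it scores.
    viable = [c for c in population if independent_check(c)]
    return max(viable, key=proxy_score) if viable else None

population = [
    {"score": 99, "payload": None},         # parasite: gamed score, no function
    {"score": 60, "payload": "real work"},  # genuine contributor
]
survivor = select(population)
```

The essential design choice is that the check is held apart from the selection loop; the moment it becomes just another scored parameter, it becomes just another surface to game.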
Intentional Selection
Alex Buckley
Central Organizer of New General Management
6/25/25
Dedicated to the Kiwi Farms
The weakness of a specific idea has been brought to my attention, and I know how to fix it.
The issue lies with the term “natural selection”. Language reveals metaphysics, and this term is frequently used to denote selection forces operating in the wild by supposedly natural causes. Although standard in biological discourse, it suffers from a metaphysical shortcoming. The phrase suggests a passive, impersonal force shaping the development of life through “natural” causes. While functionally correct, it is a myopic and confused view, lacking the nuance of mind and too narrow in its use. Its use even betrays this misunderstanding: when one says “natural selection caused this”, one is admitting that some thing did do the selecting, but that the investigator does not know what that thing is, and so relies upon an appeal to nature for his causative mechanism.
The impairment lies with the term “selection”, and a better understanding of it will improve clarity. To select is not simply to filter outcomes by algorithm or accident: it is an act of abstract thought. The problem is metaphysical in nature, belonging to the category of mind. This aspect of selection is often ignored, but it is its most important part: that some living, thinking spirit took the responsibility upon themselves, given the chance, to make a choice.
The decision to choose, even when constrained by limited knowledge, carries weight. For if the underlying nature of reality resists total epistemological closure, then every act of choice is haunted by the shadow of what was not chosen. The opportunity cost is metaphysical: each selection excludes all others, and this consequence is immediately lived. A certain opportunity cost is paid by the agent every time they select, and the bill is collected in the form of absence from, and an inability to know of, every other decision that could have been made instead.
The price can be ignored: one can claim, as the objectivists do, that what one desires is all one ever wanted anyway, and in that selfishness retreat into oneself, whatever the cost carried by others.
Intentional selection is the view that all meaningful acts of selection require a chooser: a spirit or soul capable of weighing, valuing, and enacting decisions amidst alternatives that cannot all be known or foreseen.
Indication
Alex Buckley
Central Organizer of New General Management
6/4/25
Dedicated to the Kiwi Farms
Sometimes, when I do my readings, I will be very deep into a book before I see, for the first time, markings left behind by the text’s previous reader. I appreciate it when they leave their thoughts for me to stumble upon, and I have found that, in my own practice, marking the text up with underlining and other marginalia builds on the notes made before mine. I think it creates a nice community.
This will be the first indication that I have had an unknown partner reading along with me. It is a specific example of a more general phenomenon known in nature by the same name, and it lies somewhere between suspicion and confirmation, near implication.
Indication is useful in the construction of theory, as it serves as an empirical basis from which to build quantitative knowledge. Otherwise vague outlines revealed by theory, by way of suggestion from hypothesis, can act as theoretical pointer readings, and no doubt more sophisticated instruments could be built to facilitate their detection. But you should hold steady and not become overconfident in the support that indication offers to confirmation. It must be remembered that indicatory factors are already presupposed by the statement of a hypothesis.