How to get into AI?

It seems like a lot of models are fundamentally focused on presenting outputs as ‘human’. I think I’m more interested in developing machines that learn to be more capable at being machines. For example, there is AI that will generate code snippets, but what about an AI that will develop its own bytecode and the native machine code to run it on? It doesn’t need to consider the presentation of its output as human, or the human-readability aspects. Is there any research in this area?
 
Are you referring to neural networks inspired directly by biology? That could be interesting. However, the trend in computing is to take what is real and just run away with it. Genetic algorithms are a good example: they start with actual biology, then introduce stuff like shuffling the alleles (crossover and mutation) so that they can solve tasks like the Traveling Salesman Problem. Think Lucas Werner but actually brilliant.
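Purely to illustrate the allele-shuffling idea, here's a toy genetic algorithm for a tiny TSP instance. The city coordinates, population size, and rates are all made-up numbers for the sketch, not anything from a real system:

[CODE=python]
import random

# Made-up city coordinates, purely for illustration.
CITIES = [(0, 0), (3, 1), (6, 2), (7, 6), (4, 7), (1, 5)]

def tour_length(tour):
    # Total distance of the closed tour.
    return sum(
        ((CITIES[a][0] - CITIES[b][0]) ** 2
         + (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
        for a, b in zip(tour, tour[1:] + tour[:1])
    )

def crossover(p1, p2):
    # Order crossover: copy a slice from one parent, fill the rest
    # in the order the remaining cities appear in the other parent.
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [c for c in p2 if c not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(tour, rate=0.1):
    # "Shuffling the alleles": occasionally swap two cities.
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

# Evolve: keep the shortest tours, breed replacements from them.
population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(50)]
for _ in range(200):
    population.sort(key=tour_length)
    survivors = population[:10]  # selection
    population = survivors + [
        mutate(crossover(*random.sample(survivors, 2))) for _ in range(40)
    ]

best = min(population, key=tour_length)
print(best, tour_length(best))
[/CODE]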
The neural networks we use now were already directly inspired by biology. As time went on, the priorities shifted from mimicry to doing what worked.
I do mean simulated biology, pretty much in the most literal sense. Neural networks are OK at doing one single thing, much like your lizard brain is exceedingly good at making your lungs breathe, or fungi are good at breaking down detritus, but if you want a system that's more than a one-trick pony (and thus isn't prone to dead-ending) you need to give it a drive, a way to process stimuli, and a survival instinct. I've been following this topic since long before the ML boom, and I'm still of the opinion that Creatures 3 is the closest software has come to emulating an intelligent, adaptive system, and if you've ever played it you'd know Norns are catastrophically stupid anyway.

To create true AGI, I believe one needs to evolve the sapient mind from the ground up, not emulate it from the top down.
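For what it's worth, the drive/stimuli/survival loop described above can be sketched in a few lines. This is a made-up toy, not how Creatures actually works; every name and number in it is invented for illustration:

[CODE=python]
import random

class Critter:
    def __init__(self):
        self.hunger = 0.5   # drives: the agent acts to reduce these
        self.fear = 0.0
        self.alive = True

    def sense(self, world):
        # Stimuli: read the environment and update internal drives.
        self.hunger += 0.05
        self.fear = 0.9 if world.get("predator_nearby") else self.fear * 0.5

    def act(self, world):
        # Survival instinct: the most urgent drive wins.
        if self.fear > self.hunger:
            return "flee"
        if world.get("food_nearby"):
            self.hunger = max(0.0, self.hunger - 0.4)
            return "eat"
        return "wander"

    def step(self, world):
        self.sense(world)
        if self.hunger >= 1.0:
            self.alive = False  # dead-ending, literally
            return "starved"
        return self.act(world)

critter = Critter()
for t in range(20):
    world = {"food_nearby": random.random() < 0.3,
             "predator_nearby": random.random() < 0.1}
    print(t, critter.step(world))
    if not critter.alive:
        break
[/CODE]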
 
It seems like a lot of models are fundamentally focused on presenting outputs as ‘human’. I think I’m more interested in developing machines that learn to be more capable at being machines. For example, there is AI that will generate code snippets, but what about an AI that will develop its own bytecode and the native machine code to run it on? It doesn’t need to consider the presentation of its output as human, or the human-readability aspects. Is there any research in this area?
Not to my knowledge, but the development of something more akin to an infomorph organism would be both exciting and inherently unpredictable, and given that the ML landscape is more focused on what amounts to parlor tricks, I'm sure researchers (or at least their benefactors) have little interest in making the software equivalent of wild animals.
 
It seems like a lot of models are fundamentally focused on presenting outputs as ‘human’. I think I’m more interested in developing machines that learn to be more capable at being machines. For example, there is AI that will generate code snippets, but what about an AI that will develop its own bytecode and the native machine code to run it on? It doesn’t need to consider the presentation of its output as human, or the human-readability aspects. Is there any research in this area?


There are instruct models that are customized to be chatbots, like ChatGPT, but there's nothing about the underlying architecture that limits them to this. There are in fact 'raw' LLMs that just autocomplete prompts, LLMs trained specifically to generate code, and others trained to generate stories.
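To make the 'raw autocomplete' point concrete, here's a minimal sketch using the Hugging Face transformers library with the small GPT-2 checkpoint (chosen arbitrarily for the example). A base model like this just continues whatever text you feed it, no chat formatting at all:

[CODE=python]
from transformers import pipeline

# A raw (non-instruct) model: it has no notion of "user" or "assistant".
generator = pipeline("text-generation", model="gpt2")

# The model simply autocompletes the prompt, code or prose alike.
prompt = "def fibonacci(n):"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
[/CODE]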
 
Does that matter?
The licenses on many of these popular and well-rounded innovations come with a lot of terms and conditions to consider. Most training data and the models themselves originate from closed-source software. The surrounding tooling is often Apache 2.0, but even for stuff like Llama or LLaVA there are additional hurdles that specifically reference big-tech restrictions on research and commercial use.
 
I do mean simulated biology, pretty much in the most literal sense. Neural networks are OK at doing one single thing, much like your lizard brain is exceedingly good at making your lungs breathe, or fungi are good at breaking down detritus, but if you want a system that's more than a one-trick pony (and thus isn't prone to dead-ending) you need to give it a drive, a way to process stimuli, and a survival instinct. I've been following this topic since long before the ML boom, and I'm still of the opinion that Creatures 3 is the closest software has come to emulating an intelligent, adaptive system, and if you've ever played it you'd know Norns are catastrophically stupid anyway.

To create true AGI, I believe one needs to evolve the sapient mind from the ground up, not emulate it from the top down.
You might be interested in embodied cognition.
 
step 1: get those high-end graphics cards.

step 2: get software to run AI.

step 3: enjoy!
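As a small addendum to steps 1 and 2, assuming a PyTorch-based stack (one option among many): a quick way to confirm the card is actually visible before you blame the software:

[CODE=python]
import torch

# True if a usable CUDA GPU is found by PyTorch.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # Prints the model name of the first GPU.
    print(torch.cuda.get_device_name(0))
[/CODE]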
 
I had the concept in mind, but more often than not the exact terminology escapes me, even after reading about it several times. I'm glad we were thinking of the same thing.
So basically, there's a stance in philosophy of mind that embodied cognition is required to solve the so-called "symbol grounding problem", which concerns how an artificial intelligence is supposed to attach meaning to its symbols without a massive infinite regress (like "turtles all the way down"). I myself am currently neutral on that, but it sure seems like the most practical way to create artificial general intelligence.
 