LLMs have that classic problem of looking really good in a demo, but when you try to make them work in practice, the whole thing falls apart on you.
We are probably exactly as close to artificial general intelligence as we were in 1960.
I don't know about that. I've read an awful lot about the subject matter and its history. I read pretty much all of
Mind as Machine, a comprehensive history of cognitive science in two volumes, ~1600 pages altogether. When I emailed the author, the researcher Margaret Boden, about how recursion need not gobble up more memory with each call because of the possibility of tail recursion, she was very grateful and mistook me for a doctor. (That was a very flattering moment.)
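(Side note for anyone who hasn't run into the distinction: below is a minimal sketch of what tail recursion buys you, with factorial as a stand-in. Python is used purely for readability; Python itself does not eliminate tail calls, but Scheme and several Common Lisp implementations do.)

```python
def fact_naive(n):
    # Plain recursion: each call waits on the result of the next one,
    # so the call stack grows linearly with n.
    if n == 0:
        return 1
    return n * fact_naive(n - 1)

def fact_tail(n, acc=1):
    # Tail recursion: the recursive call is the very last thing done,
    # so a language with tail-call elimination (Scheme, SBCL, ...) can
    # reuse the current stack frame instead of pushing a new one.
    # (Python does NOT do this; the shape of the code is the point.)
    if n == 0:
        return acc
    return fact_tail(n - 1, acc * n)

print(fact_naive(10))  # 3628800
print(fact_tail(10))   # 3628800
```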
Point is, I know a lot about this subject and have thought about it a great deal. AI has improved to a staggering degree, and not only because of hardware speedups. LLMs can be very frustrating and wrong, but they became mostly coherent over multiple paragraphs within a few short years (even if they still parrot party lines, like DeepSeek insisting that China is not harvesting organs from its minorities). Anyone can install the free and open-source chess engine Stockfish, which plays a far better game than any human on very modest consumer hardware, whereas Deep Blue needed a supercomputer to beat Garry Kasparov. Maybe most importantly, embodied AI (robots) has become increasingly capable of handling the highly complex and uncertain nature of our physical world.
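To back up the Stockfish point, on a typical desktop this is literally a few lines away. Here's a minimal sketch using the python-chess library, assuming the stockfish binary is installed and on your PATH:

```python
import chess
import chess.engine

# Talk to a locally installed Stockfish over the standard UCI protocol.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

board = chess.Board()  # standard starting position
# Give the engine a modest 100 ms to think and ask for its move.
result = engine.play(board, chess.engine.Limit(time=0.1))
print("Stockfish plays:", board.san(result.move))

engine.quit()
```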
Having said that, I do think AI is oversold. We are heading, maybe not for an AI winter like those of the past, but for an AI autumn. There is a hype bubble that is going to burst in spectacular fashion (though I hope it will have been enough to convince the world that nuclear power should be the backbone of our power system).
Futureproof your workplace by writing everything in obfuscated Lisp with heavy macro usage and only using self-made, in-house data formats and configs.
I think Raku (formerly Perl 6) would be another fantastic option.
AI code doesn't scale, at all. Without meticulous prompting about internal APIs and conventions, you're going to end up writing more glue code than anything else. The models have been trained on a mishmash of codebases and, more importantly, on forum questions and answers where people are strongly encouraged to post minimal snippets.
LLMs' strength is pulling obscure information from all corners of the Internet and writing small, insightful pieces of code that a competent human user can easily inspect, without the obtuse rudeness you get on Stack Overflow and the other Stack Exchange sites. For the foreseeable future, that is the only kind of code I will let them write for me.
I also take great issue with how LLMs operate on the assumption that everything is sequential in nature. Software is naturally structured like a tree, not a sequence.
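To make that concrete, here's a quick sketch using only Python's standard library: the same two-line function is a flat token stream from one angle and a nested syntax tree from another (the function itself is just a made-up example):

```python
import ast
import io
import tokenize

src = "def area(r):\n    return 3.14159 * r * r\n"

# Roughly what an LLM consumes: a flat, left-to-right token stream.
tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)]
print(tokens)

# What the language itself works with: a nested tree of definitions,
# statements, and sub-expressions.
print(ast.dump(ast.parse(src), indent=2))
```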
I haven't done any deep learning yet. "Glorified Markov chain" is too extreme, but do they really have no recursive structure?
I think LLMs have some value in natural language processing and generation, but I'm thoroughly convinced that the transformer's O(n^2) attention cost in the sequence length is a gigantic waste, and will be looked upon in the coming decades as a laughably inefficient stopgap.
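To spell out where the quadratic blow-up lives, here's a bare-bones single-head attention sketch in numpy; the intermediate score matrix is n-by-n in the sequence length n, so doubling the context quadruples both the compute and the memory for that step (the sizes below are arbitrary):

```python
import numpy as np

n, d = 512, 64  # sequence length, head dimension
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))

# The n x n score matrix is the O(n^2) part: double the context,
# quadruple the work and the memory for this intermediate.
scores = Q @ K.T / np.sqrt(d)                    # shape (n, n)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
out = weights @ V                                # shape (n, d)

print(scores.shape, out.shape)  # (512, 512) (512, 64)
```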
What type of complexity do you think the state of the art could evolve to in the future?
These massive, baked-in models that have to be re-tuned per task are absurd. I wish people would revisit older AI techniques, like self-modifying Lisps, instead of pushing GPUs close to their melting points.
Both this post and at least one after it mentioned the merits of symbolic vs. sub-symbolic AI (the latter includes neural networks). I think you guys might be interested in the following book:
The author speculates about fusing the two aforementioned approaches to AI. As I understand it, that's already how the most capable autonomous vehicles and other robots do things.
EDIT: book came out over 10 years ago ... damn I'm old