The great AI delusion is falling apart - New research suggests the chorus of techno-optimism is based on falsehoods

By Andrew Orlowski, The Telegraph
14 July 2025 11:00am BST


Is the secret of artificial intelligence that we have to kid ourselves, like an audience at a magic show?
Some fascinating new research suggests that self-deception plays a key role in whether AI is perceived to be a success or a dud.
In a randomised controlled trial – the first of its kind – experienced computer programmers could use AI tools to help them write code. What the trial revealed was a vast amount of self-deception.
“The results surprised us,” research lab METR reported. “Developers thought they were 20pc faster with AI tools, but they were actually 19pc slower when they had access to AI than when they didn’t.”

In reality, using AI made them less productive: they were wasting more time than they had gained. But what is so interesting is how they swore blind that the opposite was true.
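To make the perception gap concrete, here is a minimal sketch, with invented task times rather than METR’s actual data, of how a perceived speed-up and a measured slowdown are computed side by side:

    # Illustrative only: invented numbers mirroring the reported gap between
    # a 20pc-faster feeling and a 19pc-slower measurement.
    baseline_minutes = 100.0   # measured time for a task without AI
    with_ai_minutes = 119.0    # measured time with AI (19pc slower)
    felt_with_ai = 80.0        # what the task *felt* like with AI (20pc faster)

    measured_change = (with_ai_minutes - baseline_minutes) / baseline_minutes
    perceived_change = (felt_with_ai - baseline_minutes) / baseline_minutes

    print(f"perceived: {perceived_change:+.0%}")  # -20%
    print(f"measured:  {measured_change:+.0%}")   # +19%
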
If you think AI is helping you in your job, perhaps it’s because you want to believe that it works.
Since OpenAI’s ChatGPT was thrown open to the general public in late 2022, pundits have been forecasting huge productivity gains from deploying AI. They hope that it will supercharge growth and boost GDP. This has become the default opinion in high-status policy circles.
But all this techno-optimism is founded on delusion. The “lived experience” of using real tools in the real world paints a very different picture.

The past few days have felt like a turning point, as the reluctance to point out that the emperor has no clothes diminishes.
“I build AI agents for a living, it’s what I do for my clients,” wrote one Reddit user. “The gap between the hype and what’s actually happening on the ground is turning into a canyon.”
AI isn’t reliable enough to do the job promised. According to an IBM survey of 2,000 chief executives, three out of four AI projects have failed to show a return on investment, which is a remarkably high failure rate.

Don’t hold your breath for a white-collar automation revolution either: AI agents fail to complete the job successfully about 65 to 70pc of the time, according to a study by Carnegie Mellon University and Salesforce.
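One hedged way to see why failure rates that high are plausible: an agent’s errors compound across steps. A minimal sketch, assuming independent steps and an invented per-step success rate:

    # Illustrative only: assumes each step of an agent's task succeeds
    # independently with the same (invented) probability.
    per_step_success = 0.95

    for n_steps in (5, 10, 20, 30):
        task_success = per_step_success ** n_steps
        print(f"{n_steps:2d} steps -> task completed {task_success:.0%} of the time")
    # 5 steps -> 77pc, 10 -> 60pc, 20 -> 36pc, 30 -> 21pc: at a few dozen
    # steps, a 65-70pc failure rate is unremarkable.
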
The analyst firm Gartner has concluded that “current models do not have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time.” Gartner’s head of AI research Erick Brethenoux says: “AI is not doing its job today and should leave us alone”.
It’s no wonder that companies such as Klarna, which laid off staff in 2023 confidently declaring that AI could do their jobs, are hiring humans again.
This is extraordinary, and we can only have reached this point because of a historic self-delusion. People will even pledge their faith to AI working well despite their own subjective experience to the contrary, the AI critic Professor Gary Marcus noted last week.
“Recognising that it sucks in your own speciality, but imagining that it is somehow fabulous in domains you are less familiar with”, is something he calls “ChatGPT blindness”.

Much of the news is misleading. Firms are simply using AI as an excuse for retrenchment. Cost reduction is the big story in business at the moment.
Globally, President Trump’s erratic behaviour has induced caution, while in the UK, business confidence is at “historically depressed levels”, according to the Institute of Directors, reeling from Reeves’s autumn taxes. Attributing those lay-offs to technology is simply clever PR, and helps boost the share price.
So why does the faith in AI remain so strong?
The dubious hype doesn’t help. Every few weeks a new AI model appears, and smashes industry benchmarks. xAI’s Grok 4 did just that last week. But these are deceptive and simply provide more confirmation bias.
“Every single one of them has been wide of that mark. And not one has resolved hallucinations, alignment issues or boneheaded errors,” says Marcus.
Not only is generative AI unreliable, but it can’t reason, as a recent demonstration showed: OpenAI’s latest GPT-4o model was beaten at chess by an 8-bit Atari home games console made in 1977.

“Reality is the ultimate benchmark for AI,” explained Chomba Bupe, a Zambian AI developer, last week. “You not going to declare that you have built intelligence by beating toy benchmarks … What’s the point of getting say 90pc on some physics benchmarks yet be unable to do any real physics?” he asked.
Then there are thousands of what I call “wowslop” accounts – social media feeds that declare amazement at breakthroughs. As well as the vendors, a lot of shadowy influence money is being spent on maintaining the hype.
This is not to say there aren’t uses for generative AI: Anthropic has hit $4bn (£3bn) in annual revenue. For some niches, like language translation and prototyping, it’s here to stay. Before it went mad last week, X’s Grok was great at adding valuable context.
But even if AI “discovers” new materials or medicines tomorrow, that won’t compensate for the trillion dollars that Goldman Sachs estimates business has already wasted on this generation of dud AI.
That’s capital that could have been invested far more usefully. Rather than an engine of progress, poor AI could be the opposite.
METR added an amusing footnote to its study. The researchers used one other control group in the productivity experiment, and this group made the worst, most over-optimistic estimates of all. They were economists.
 
Gee, it's almost like having to wait for the LLM to generate a response, then DEBUG that response whenever it gets something wrong, with your boss sometimes forcing your workload through prompt engineering instead of actual programming, is a lot slower than actually being an experienced programmer who's worth your salary.
Not that it matters too much anyway. The executives already knew this, but, not wanting to let a good crisis go to waste, they blamed the layoffs and the extra H-1B hiring on the LLM bubble.
 
Machine learning tech is in a bubble just like PCs in the late 90s, and whatever is left is basically only going to be used to make life more hellish for the common man: more propagandized, more psyop'd by the people running things like machine learning search engines. The stuff is tailored and controlled from top to bottom to serve up only the most sanitized results, and no one will be able to get to the knowledge of the truth.
 
The problem with a lot of AI interactions around creating things like code is the natural ambiguity of human language. If you create a system that takes vague, error-prone human input as instructions to build something, it tends to replicate the same errors that humans themselves make due to miscommunication.

The other delusion that is about to collapse is the near-universal belief that throwing more processing scale at AI training produces better results. It usually doesn't: more training doesn't keep improving the overall result. That should have been obvious to many people, but it wasn't.
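For what it's worth, the usual empirical picture is not "no improvement" but sharply diminishing returns: benchmark loss tends to fall as a power law in compute, so each doubling buys less than the one before. A minimal sketch with invented constants, not fitted to any real model:

    # Illustrative only: a hypothetical power-law scaling curve,
    # loss(C) = A * C**(-alpha), with invented constants.
    A, alpha = 10.0, 0.05

    def loss(compute):
        return A * compute ** -alpha

    previous = loss(1.0)
    for doubling in range(1, 6):
        current = loss(2.0 ** doubling)
        print(f"doubling {doubling}: loss {current:.3f} "
              f"(gain {previous - current:.3f})")
        previous = current
    # each doubling of compute buys a smaller gain than the last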

The biggest problem I've personally seen is that these systems tend to reflect the user back on themselves in a way that makes them irrationally attached to the interaction with the AI. What they often see as signs of intelligence is just themselves being reflected back at them.

The effects on human communications, on speech generation, audio manipulation and graphical manipulation associated with AI are somewhat revolutionary. But many of the other claims have always been mostly nonsense.

In a randomised controlled trial – the first of its kind – experienced computer programmers could use AI tools to help them write code. What the trial revealed was a vast amount of self-deception.

One thing to note here. The rise of open-source software has significantly decreased the number of people actually capable of writing code. Most people who describe themselves as "writing code" these days are simply integrating piles of open-source packages, and could not be trusted to write any significant code themselves anyway.
 
I can see AI being a good resource for writers and artists who want some ideas to build off of, or for people too cheap to buy custom-made internet porn who just want to produce their own. Anyone else is probably just going to use it for a while, then abandon it when it doesn't poop unicorn gold the way they thought it would.
 
a lot slower than actually being an experienced programmer
The point of machine learning algorithms and these "AIs" is to give people around the middle of the bell curve an "IQ buff": enough for those slightly above the midwit level to get competition (kek, get fucked, slightly-above-mediocre people), or for the otherwise unqualified to get into white collar jobs, barely crossing the lowest bar thanks to AI. That's how politicians and corporations see it.

Machine learning results, I would argue, hinder the people on the right side of the bell curve, since midwits will think the model is better than everyone and it becomes hard to argue against whatever Grok/ChatGPT etc are telling them (it's pretty much "the experts" now). Midwits do not understand that machine learning results are really just a synthesized average answer from the training data (an "interpolated trend" of the data), and the average data is made by... average people, the midwits. There's a toy numerical version of this at the end of the post.

You can of course select data so it becomes "average data of the best data", but this only works in fields where people have already manually brute-forced the problems, leaving an extreme abundance of material to train on; for example books, images, music and videos, which is why all those fields now have AI tools. But now these people think machine learning algorithms can be applied to anything, even research. If it's real research, the research is by definition short on data! The companies are now realizing that no matter the model, they didn't get anywhere, since the people creating the data they need are either gone or got replaced by midwits making the data worthless.
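Here is that toy numerical version of the "interpolated average" claim, with invented numbers: a model trained with squared-error loss converges towards the mean of its training answers, so a small pool of expert data gets drowned out by the crowd.

    # Illustrative only: invented data. A squared-loss model's optimal
    # constant prediction is the mean of its training labels, so the
    # crowd swamps the experts.
    import numpy as np

    rng = np.random.default_rng(0)
    crowd = rng.normal(6.0, 2.0, size=950)    # 950 mediocre answers, biased low
    experts = rng.normal(10.0, 0.5, size=50)  # 50 expert answers near the truth

    labels = np.concatenate([crowd, experts])
    model_output = labels.mean()              # what the fit converges to

    print(f"expert consensus: {experts.mean():.1f}")  # ~10.0
    print(f"model output:     {model_output:.1f}")    # ~6.2, near the crowd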
 
Anybody who's worked with AI knows this to be true. Unfortunately, you really can't overstate the incompetence of middle and upper management and private equity.

They will still waste billions on AI simply because everybody else is, overloading the power grid in the meantime. They will continue to decimate entry-level jobs with retarded chatbots just as they have with dead-end make-work positions, master's degrees, cheap offshore labor, etc.

They'll also enforce energy poverty, but YOU will purchase the $80,000 electric car and have YOUR gas stove confiscated because YOUR carbon footprint is so big. You will keep buying their slop because they killed all the competition.

The corporate world can't die quick enough.
 
The fact that this guy uses a chess match as his benchmark for reasoning says a lot, and none of it good.

The people who want AI to be terrible are as delusional as the people who want it to be a miracle-genie. Makes me wonder if there was similar 'it's actually awful'/'it's the best thing ever' hype around inventions during the industrial revolution.

People are retarded.
 
I can see AI being a good resource for writers and artists who want some ideas to build off of, or for people too cheap to buy custom-made internet porn who just want to produce their own. Anyone else is probably just going to use it for a while, then abandon it when it doesn't poop unicorn gold the way they thought it would.
I do tabletop stuff and AI art gen has been a godsend. No longer do I have to scour search engines or art sites for pictures that look "good enough"; I can just put in the parameters and generate a few until I find something I like.
 
You don't need to go that far back in time; just read up on some of the retardation going around during the dot-com bubble.
Or even the still-ongoing crypto/NFT speculation that envisioned a future where the data passing through nodes on the internet would be commoditized and then sold off as debt obligations to generate actual capital... Fortunately, the actual economies of the world imploded due to general mishandling before they could get too entwined with that poisonous idea.
 
I think AI has plenty of uses; it's just not the silver-bullet miracle machine people puff it up as. But just because it's not literal god-software doesn't mean it's worthless. I've had quite a lot of use from just ChatGPT in diagnosing and solving problems I've had with various hobby-related projects, notably 3D printing. When you already have a good idea of what you're trying to accomplish and just need assistance with an aspect of it, AI works fantastically as a "talking manual". For example: I was trialling a pale resin and found blobby misprints on the underside of one part despite having dialled in the exposure pretty well. I consulted ChatGPT, and it explained the exposure was fine; it was the supports and printing angle causing the issue (resin was pooling around dense supports and light bleed was causing it to get accidentally overcured). I could have spent hours trying to troubleshoot the problem, but now I have not only the solution but also the knowledge to avoid the problem in the first place.

Likewise, image AI is just very advanced procedural generation, something that's been a thing for a long, long time (and is often used in video games). And on the note of vidya I'd definitely make the argument AI image generation is extremely well-suited to making textures, especially the kind of repetitive, omnipresent things you don't really notice and are boring to create manually but are vitally important; environmental textures, for instance. If I'm making a game and need a particular reddish-brown stone texture, I can either find one that already exists (assuming there is a suitable one and it's not locked behind a paywall), make it myself from scratch (very tedious and surprisingly difficult)...or I can get an AI to generate the texture I want in a few seconds.
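For anyone who hasn't seen classic procedural generation up close, here is a minimal sketch of the kind of non-ML technique being compared to: a greyscale noise texture built by summing octaves of upsampled random grids. The function name and constants are mine, purely for illustration:

    # Illustrative only: crude multi-octave value noise, the classic
    # non-ML way games generate repetitive environmental textures.
    import numpy as np

    def noise_texture(size=256, octaves=4, seed=0):
        rng = np.random.default_rng(seed)
        texture = np.zeros((size, size))
        amplitude = 1.0
        for octave in range(octaves):
            cells = 4 * 2 ** octave                  # grid gets finer per octave
            grid = rng.random((cells, cells))
            # nearest-neighbour upsample, then soften by averaging with
            # rolled copies (rolling keeps the result tileable)
            layer = np.kron(grid, np.ones((size // cells, size // cells)))
            for shift in (1, 2, 4):
                layer = 0.5 * (layer + np.roll(layer, shift, axis=(0, 1)))
            texture += amplitude * layer
            amplitude *= 0.5                         # finer octaves count less
        return texture / texture.max()               # normalise to [0, 1]

    tex = noise_texture()
    print(tex.shape)  # (256, 256) greyscale values, ready to save as an image
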

As for coding, whilst I don't know much about coding or programming in general, I think that purely AI-generated code is a terrible idea, but using AI to troubleshoot code is a very good one. An AI trained specifically to help programmers create and debug their own code would actually be pretty useful IMO, especially for novices who are genuinely interested in learning to code but just don't have the experience to understand what can be a pretty daunting and confusing skillset to master.

My personal hope is that when the bubble bursts, the AI market stabilizes rather than collapsing completely, and we get genuinely useful tools out of it that can attract users on their merits and not just because "it's AI!". My fear is that we go too far the other way and AI becomes such a dirty word that anything remotely resembling context-sensitive procedural generation gets trashed regardless of how good it might be.
 
I think AI has plenty of uses; it's just not the silver-bullet miracle machine people puff it up as.
I agree, but the problem is all the damage that's being lined up right now, which will come to pass over the next decade as people are killed by AI doctors making up cures, bankrupted by AI investors making up financial plans, and put under suspicion by AI police labeling them threats, because of the same greed and midwittery that caused the .com bust. Overpromising and selling raw, unproven tech as the proven future, but TODAY!, which gets gobbled up by greedy execs who can only see the dollar signs. Except nobody died from Kozmo.com or Pets.com going belly-up, just the investors.

At least some people are going to get hurt by giving AI the keys to the car.

That's what I worry about: the market will stabilize, but the predictable excesses everyone is warning about right now? Are STILL going to happen, and some of them will be fatal mistakes.
 
I’m amazed at the reality gap between what I’ve found generative AI useful for and what my senior management thinks it’s useful for. I am encouraged to ‘use it on every aspect!’ of my job, but it’s only useful for a very few things.
I keep asking for examples of how people are using it and get none. It’s great for a very few aspects and useless for day-to-day tasks.
 
I agree, but the problem is all the damage that's being lined up right now, which will come to pass over the next decade as people are killed by AI doctors making up cures, rendered insolvent by AI investors making up financial plans, and put under suspicion by AI police labeling them threats, because of the same greed and midwittery that caused the .com bust. Overpromising and selling raw, unproven tech as the proven future. Except nobody died from Kozmo.com or Pets.com going belly-up.

At least some people are going to get hurt by giving AI the keys to the car.

That's what I worry about: the market will stabilize, but the predictable excesses everyone is warning about right now? Are STILL going to happen.
Don't worry, no profession that gets sued easily is going to try outsourcing all of its work to AI.
 