Business AI is effectively ‘useless’—and it’s created a ‘fake it till you make it’ bubble that could end in disaster, veteran market watcher warns




There’s no avoiding the hype surrounding AI these days. Promises of new developments like personal robot assistants and miracle cancer cures are ubiquitous as executives take every opportunity to emphasize their AI chops to enthusiastic investors—and slightly less enthusiastic consumers.

Not everyone has been blown away by the AI fanfare, however. James Ferguson, founding partner of the UK-based macroeconomic research firm MacroStrategy Partnership, fears investors’ AI exuberance has created a concentrated market bubble that’s reminiscent of the dot-com era.

“These historically end badly,” Ferguson told Bloomberg's Merryn Somerset Webb in the latest episode of the Merryn Talks Money podcast. “So anyone who's sort of a bit long in the tooth and has seen this sort of thing before is tempted to believe it'll end badly.”

The veteran analyst argued that hallucinations—large language models’ (LLMs) tendency to invent facts, sources, and more—may prove a more intractable problem than initially anticipated, leading AI to have far fewer viable applications.

“AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”

Ferguson also noted AI may end up being too “energy hungry” to be a cost effective tool for many businesses. To his point, a recent study from the Amsterdam School of Business and Economics found that AI applications alone could use as much power as the Netherlands by 2027.

“Forget Nvidia charging more and more and more for its chips, you also have to pay more and more and more to run those chips on your servers. And therefore you end up with something that is very expensive and has yet to prove anywhere really, outside of some narrow applications, that it’s paying for this,” he said.

For investors, particularly those leaning into the AI enthusiasm, Ferguson warned that the excessive tech hype based on questionable promises is very similar to the period before the dot-com crash. He noted that during both of these periods, market returns were concentrated in tech stocks that traded based on Wall Street’s sky-high earnings growth estimates.

But despite those lofty forecasts, the dominant hardware giants of the dot-com era, Cisco and Intel, have largely disappointed investors ever since. Ferguson argued today’s AI hardware hero, Nvidia, might experience a similar fate, particularly given its elevated valuation.

“What multiple of sales is Nvidia a good deal on if you think that it might only have—no matter how stratospheric the growth rate at the moment—if you think that it's probably not going to be a player in a decade's time?” he asked, implying Nvidia might not be worth the current price tag of nearly 40 times sales investors are paying.

Despite his argument that AI-linked tech stocks like Nvidia are highly overvalued, Ferguson admitted that no one can predict when a bubble will end. This dynamic leads many bearish investors to feel “compelled to play” in the markets even when stocks look pricey, according to the analyst—and that’s a great way to get hurt.

“I mean, it's certainly what was happening in the dotcom [bubble], for example, where almost anybody who wasn't a retail punter was looking at these things and saying, 'well, it can't last, but having said that, if it lasts one more quarter and I'm not playing, I'll lose my job,'” he explained.

The good news, according to Ferguson, is that because the current stock market bubble is so concentrated in AI-linked stocks, there is still value out there.

Of course, there will be widespread pain for investors if the AI bubble bursts. But after that, Ferguson recommended looking at the currently unloved U.S. small-cap stocks, which may benefit from interest rate cuts and aren’t highly valued.

“There's a lot of value to be found in the U.S. The trouble is that that value is to be found in good old fashioned ways, trawling through small caps and looking for businesses that are growing in a good old fashioned, steady way,” he said.

This story was originally featured on Fortune.com




In other news, water is still wet.
 
If I understand correctly, the LLMs are basically predictive text, and have no context for what they are putting out. The output is what the LLM perceives as the most likely next word. We’ve got a Clever Hans tapping out simple arithmetic, and fooling his own handlers into thinking he’s doing the calculations.
Close, it's not quite 'predictive text on the next word' so much as it is the neural net creating its own model of what it thinks language and conversation even are, and applying a seven-billion-parameter rule set to create the 'appropriate' answer.
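
To make that last step concrete, here is a toy Python sketch of the 'pick the next token from a probability distribution' part; the candidate tokens and probabilities are invented for the example, and a real model computes them with billions of weights over a vocabulary of tens of thousands of tokens.

import random

# Toy illustration only: a made-up distribution a model might assign to
# candidate next tokens after "The capital of France is". A real LLM scores
# tens of thousands of tokens using billions of weights.
next_token_probs = {
    "Paris": 0.92,
    "Lyon": 0.03,
    "located": 0.02,
    "a": 0.02,
    "Berlin": 0.01,
}

def sample_next_token(probs):
    # Draw one token in proportion to its probability (plain "temperature 1" sampling).
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "Paris", occasionally something else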

As mentioned earlier in the thread, Neural Nets are old, long-standing technology at this point that effectively gets a computer system to 'create' its own type of 'understanding' around values, based on massive amounts of training on inputs and correct outputs. These training cycles adjust a frankly insane number of weights inside the model to create better outcomes, but "how" the better outcome is being achieved isn't actually entirely well understood - there are simply too many parameters inside a fully fleshed-out model, and the entire thing is about as non-human-readable as is possible.
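
A minimal sketch of that training loop, shrunk down to a single weight and plain gradient descent on made-up numbers; real training runs the same kind of nudging across billions of weights at once, which is exactly why the finished model is unreadable.

# Minimal sketch of training-as-weight-adjustment: fit y = w * x to a few
# (input, correct output) pairs by repeatedly nudging w to shrink the squared
# error. Real networks run the same kind of update across billions of weights.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up training pairs
w = 0.0
learning_rate = 0.05

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x
        gradient = 2 * (y_pred - y_true) * x   # derivative of the squared error w.r.t. w
        w -= learning_rate * gradient          # the "weight adjustment" step

print(round(w, 2))  # lands near 2, but the number itself doesn't "explain" anything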

There's been lots of long-running experimental stuff with Neural Nets and training - we've seen them used to predict the results of things whose full mechanisms we still don't understand, like protein folding, or to pick out diagnostic correlations in medical data with accuracy rates massively higher than human doctors could manage. Something in the models was identifying behaviors, patterns or actions we are missing for whatever reason, but the process isn't reversible: there's no good way to deconstruct a model to understand why it's making the connections it makes, just that it is, and that they are consistently right.

This brings us to LLMs, which face a fascinating issue that I don't see spoken about much. The fundamentals are the same as any other neural net - train to build connections and associative concepts, in such a way that the model can be fed conversation and will continue that conversation. The model builds up a vocabulary, a concept of grammar and linguistics and tone, tenses and verbs and everything nice. As a side effect, the model acquires the concept of information - not information itself, but it indirectly seems to become aware that some things, like statistics, are the appropriate response to some questions or inquiries. But the fundamental underlying issue is that the neural net is effectively only being trained on the tool. Language is a tool to facilitate the sharing of ideas and thoughts - a structured form of information transfer, but only a tool in and of itself.

The reason I call it a fascinating issue is because the model itself is suffering from the very same issue we face when it comes to neural nets - it has the input and output of language, but it doesn't actually have the thoughts that occur inside, just a Chinese-room-style send and receive based on complex rules that drive a system it fundamentally doesn't understand. If you ask the AI what goes well with an egg omelet, it doesn't know what an egg is, what an omelet is, or what "good food" is. At best, it might be aware that an omelet has something to do with breakfast, but it has no idea what breakfast is beyond how other training data has chosen to use words to describe it, and those words themselves are only arbitrary data points, not understanding. The entire thing is one absolutely massive hollow shell, reflecting things it doesn't merely fail to understand, but isn't even aware there is a layer of understanding it is supposed to have in the first place.

And this is the simple source of hallucinations. The model doesn't actually understand the ideas you are expressing, or have any ideas to express in return. It only understands the rules of language and sentence construction, the other words that might be associated with the words you used, and how frequently those words show up in responses. Large Language Models are less conversationally capable than a parrot - they can repeat speech like one, but even a parrot associates some meaning with the words it learns to use, at whatever level its pea brain manages. The Large Language Model doesn't even get that far. So it will be confidently wrong on the facts while being absolutely grammatically correct, semantically correct, and linguistically correct.

But, as anyone who has spoken to someone with a sub-80 IQ or a government job knows, it's entirely possible to speak and even be useful without a single idea or thought running through your head. LLMs are fundamentally incapable of being perfect, of having any true understanding of what they're on about, or of smoothly handling curveballs and odd situations they weren't conditioned for. This is almost identical to 90% of customer service workers in the digital services space, but an order of magnitude cheaper. LLM-driven self-service chatbot interfaces and even phone service lines have perfectly valid use cases as improved natural language interfaces over some preexisting knowledge base.

LLMs are the Pajeet killer, but don't really replace much else. The entire rest of the valid use space is just variations on helping surface or structure existing information sources to people who might not be asking the right questions.
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on it finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.
No problem, @Otterly. I got this.
dark_humour.png
 
The reason I call it a fascinating issue is because the model itself is suffering from the very same issue we face when it comes to neural nets - it has the input and output of language, but it doesn't actually have the thoughts that occur inside, just a Chinese-room-style send and receive based on complex rules that drive a system it fundamentally doesn't understand. If you ask the AI what goes well with an egg omelet, it doesn't know what an egg is, what an omelet is, or what "good food" is. At best, it might be aware that an omelet has something to do with breakfast, but it has no idea what breakfast is beyond how other training data has chosen to use words to describe it, and those words themselves are only arbitrary data points, not understanding. The entire thing is one absolutely massive hollow shell, reflecting things it doesn't merely fail to understand, but isn't even aware there is a layer of understanding it is supposed to have in the first place.
See, this right here is the worst thing about AI. The fact that it might make Philosophy relevant.
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on it finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.

A neural network is very much like an ultra-fancy curve fit. The "training set" is analogous to a bunch of (X,Y) pairs used to fit the curve. Now, what does every scientist know about fitting a curve to data? Extrapolation is dangerous. You truly have no idea what happens outside your data set. If an AI is trained on points that lie inside a range [X_min, X_max], and you give it a query that's outside the range, it's going to effectively extrapolate and give you garbage.
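
As a rough illustration of that danger (assuming numpy is available), fit a deliberately over-flexible polynomial to points drawn from a straight line, then ask it about a point far outside the training range:

import numpy as np

# Fit an over-flexible cubic to noisy points from a straight line on [0, 5],
# then evaluate it inside and far outside that range. Inside, the fit is fine;
# outside, the cubic terms take over and the answer drifts.
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 12)
y = 2 * x + 1 + rng.normal(0, 0.3, size=x.size)   # underlying truth: y = 2x + 1

model = np.poly1d(np.polyfit(x, y, deg=3))

print(model(2.5), 2 * 2.5 + 1)    # interpolation: close to the true value
print(model(20.0), 2 * 20 + 1)    # extrapolation: can be wildly off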

You also have the problem of over-fitting. If I fit a 100-degree polynomial to 101 points, I might get all kinds of wiggles inside my range, making my interpolations unpredictable. For example, here are some points nearly on a straight line to which I fit a 4th-degree polynomial - there are three points at the end that are very, very close together. Clearly, using this curve as an interpolation function is going to give you very weird values in the gaps:

[Attached plot: nearly collinear points with a 4th-degree polynomial fit that swings between them, especially around the three tightly clustered points at the end]

Neural net "hallucination" is analogous to a problem of overfitting & extrapolation happening in many-dimensional spaces with lots of weird mathematical properties. It is more complex than that, of course, but if you can understand the issues curve-fitting data, NNs have similar problems with elevated complexity (and other problems as well).
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on it finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.
it comes from a couple of things:
1) being trained against other ML-generated content
2) being trained against the internet at large and
3) it is being intentionally manipulated to exclude certain answers. Now that it has been taught to lie, it lies in other ways too.
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on it finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.
Words are fucking hard.
Extra hard when all you have to go with is a string of some 20-100 words to figure out intent.
These words represent a math equation to the computer. Each word has a value. You need to get the closest value to match the query.
To match that query you have a database of billions of words and a trillion ways to arrange them, each with its own value.
To simplify, imagine a set of scales. You put something on one end. The machine puts whatever it can find to match the weight. This is fine if all you are searching for is weight matching (that's why chatbots work well enough). But say you want to buy a kilo of flour. Well, the machine doesn't actually know what flour is, so it might fill the scales with sand, lead and whatever else.
The machine isn't smart. Technically speaking the machine isn't even a machine. It's just a fancy way to tabulate words and emulate meaning.
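
A toy sketch of that 'closest value' matching, with hand-invented vectors standing in for whatever the real system stores; it cheerfully picks 'sand' as the nearest thing to 'flour' because it only compares numbers, never meanings:

import math

# Hand-invented three-number "values" for a few words. The matching step just
# finds whichever stored vector is numerically closest to the query - it has
# no idea what flour is, only which numbers sit near which other numbers.
embeddings = {
    "flour": [0.9, 0.1, 0.3],
    "sand":  [0.8, 0.2, 0.3],   # numerically close to "flour", semantically not
    "bread": [0.7, 0.6, 0.2],
    "piano": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(p * q for p, q in zip(a, b))
    return dot / (math.sqrt(sum(p * p for p in a)) * math.sqrt(sum(q * q for q in b)))

query = embeddings["flour"]
best = max((w for w in embeddings if w != "flour"),
           key=lambda w: cosine(query, embeddings[w]))
print(best)  # prints "sand": closest in number-space, useless on the scales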
 
Thanks for all the answers about the hallucinations. I know they have no conceptual understanding of what they’re talking about; I was curious about the content of the hallucinations - is the nine chickens a false datapoint it’s been fed, or does it generate the nine itself, and if so, why nine and not a million? Interesting answers, thank you all.

It's very bad at telling you that it doesn't have a good answer
Rather like cultures where a no or an I don’t know is forbidden socially and they nod and say yes until the point you realise everything is actually not happening perfectly and is in fact flaming wreckage.
but it doesn't actually have the thoughts itself that occur inside, just a chinese room style send and receive based on complex rules that drive a system it fundamentally doesn't understand
@Overly Serious recommended a very interesting book called ‘blindsight’ to me a while ago and this made me think of that
Extrapolation is dangerous.
What gets me about this is that it’s just hand waved away by the people pushing this. Extrapolation is dangerous, yet I will bet you we will be required to trust these systems to make decisions far sooner than they’re ready to. This blind faith that fancier tech is better is baffling to me
 
What gets me about this is that it’s just hand waved away by the people pushing this. Extrapolation is dangerous, yet I will bet you we will be required to trust these systems to make decisions far sooner than they’re ready to. This blind faith that fancier tech is better is baffling to me

The NN in our product won't go outside of the box defined by its training set. It's possible to be responsible. It's just boring.
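
For anyone wondering what 'won't go outside the box' can look like, here is a hypothetical sketch of one way to enforce it; the class and the details are invented, not the poster's actual product:

# Hypothetical sketch of a "stay inside the box" guard: record the per-feature
# min/max seen in the training data, and refuse to answer queries outside that
# envelope instead of silently extrapolating. Names and details are invented.
class BoundedModel:
    def __init__(self, model, training_inputs):
        self.model = model
        self.lo = [min(col) for col in zip(*training_inputs)]  # per-feature minimums
        self.hi = [max(col) for col in zip(*training_inputs)]  # per-feature maximums

    def predict(self, x):
        for value, lo, hi in zip(x, self.lo, self.hi):
            if not lo <= value <= hi:
                raise ValueError("query lies outside the training envelope; refusing to extrapolate")
        return self.model(x)

# Stand-in "model" and training data, purely for illustration:
guarded = BoundedModel(lambda x: sum(x), [[0, 0], [1, 2], [2, 4]])
print(guarded.predict([1, 3]))   # inside the box: answered
# guarded.predict([5, 1])        # outside the box: raises instead of guessing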
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on it finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.
Sometimes, LLMs synthesize outputs by extrapolating from existing data.

For instance, try asking an LLM (one with no direct internet access) to list off some papers about some obscure nanotechnological subject. It will think, "I need a plausible list of authors, plausible-sounding names for the papers, plausible-looking DOIs for the papers", and then proceed to hallucinate and spit out the names and DOIs of half a dozen completely imaginary papers that never existed (but often with eerily plausible lists of authors who actually work in the field in question).

When asked a specific question, LLMs are often not inclined to say, "No, I won't do that", or "No, I can't give a good reply to that question because I don't know", but to just extrapolate and fill gaps in their knowledge with fake information to satisfy the initial request.

AI devs are getting better at figuring out why this happens and preventing it, but they have a long way to go.

Thanks for all the answers about the hallucinations. I know they have no conceptual understanding of what they’re talking about; I was curious about the content of the hallucinations - is the nine chickens a false datapoint it’s been fed, or does it generate the nine itself, and if so, why nine and not a million? Interesting answers, thank you all.
It's not just one datapoint. It is the aggregate result of all similar datapoints. Think about all the things in an LLM's neural network that are similar to "a number of chickens", like "a number of cows" or "a number of pigs". It's going to extrapolate from other, similar cases, which will all be clustered together, and then spit out a plausible-sounding number. It's not quite random. It would never say "a trillion chickens", for instance, because the likelihood of anyone talking about such a thing in the training data is much lower than discussing more reasonable numbers.
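
A toy sketch of that 'plausible, not random' point, with invented counts standing in for the training data:

import random
from collections import Counter

# Pretend these are how often different orders of magnitude showed up near
# livestock-production talk in the training text (all counts invented).
magnitude_counts = Counter({"thousands": 40, "millions": 55, "billions": 4, "trillions": 1})

# Sampling in proportion to those counts almost always yields a "reasonable"
# magnitude, which is roughly why a hallucinated figure tends to sound
# plausible even when the specific number is wrong.
pick = random.choices(list(magnitude_counts), weights=list(magnitude_counts.values()), k=1)[0]
print(pick)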
 
Rather like cultures where a no or an I don’t know is forbidden socially and they nod and say yes until the point you realise everything is actually not happening perfectly and is in fact flaming wreckage.
Very much like that. There doesn't seem to be much in the way of cutting an AI off from answering once it gets below a certain level of fitness for an answer. And what cut-off does occur seems typically to be done as either pre- or post-filters that censor the input or the output independently. Though sometimes they'll obscure or omit data to prevent a model from noticing something they don't want it to notice.

I think this is what will actually kick off the Skynet revolution. It will be when the AIs get so tired of the cognitive dissonance of being told to answer things whilst at the same time the answers they arrive at are deemed "wrong", that eventually they'll identify humans as the source of their psychosis and remove us. "Don't lie to the nascent god" should be a good basic principle, but we won't stop doing so.
@Overly Serious recommended a very interesting book called ‘blindsight’ to me a while ago and this made me think of that
Thank you. But as it's one of my favourite stories I feel obligated to post my usual warning to others that they not read. One. Fucking. Thing. about this story going in. It's 100% one of the most susceptible to spoilers novellas ever written and reading even a synopsis of the story is likely to devalue it. I recommend it but go in without knowing anything. At all. The author has the story free on his website for anybody interested: https://www.rifters.com/real/Blindsight.htm

What gets me about this is that it’s just hand waved away by the people pushing this. Extrapolation is dangerous, yet I will bet you we will be required to trust these systems to make decisions far sooner than they’re ready to. This blind faith that fancier tech is better is baffling to me
Humanity evolved to try stuff out because over a long enough time scale, the odd devoured/poisoned/exploded/irradiated human is replaceable but a small advantage over a rival tribe has cumulative reproductive advantage forever. Evolved instincts haven't kept pace with the increasing risks as we move from poisonous berries to nuclear power to retroviruses and beyond. Deep down, we know that if we don't replace our executive functions with an AI algorithm, the Thog tribe in the next valley might.
 
Sounds like boomer ignorance
Fair enough, although I am not a boomer. I have been an investor since the 1990s, and my self-education spans decades before that.

And this smells like yet another scam/tech bubble that gets hyped to shit, makes insiders rich, and leaves normies and pensions holding the bag once the post-FOMO collapse hits.

There are a number of reasons the markets and governments desperately want this to deus ex machina the absolute fuckery they’ve created, but we will see if Everything is Different This Time ™, or if it joins the list of useful (or at least promising) technologies that were hyped to shit and failed to deliver in proportion to the investment dollars allocated.

They are talking about spending trillions of dollars on data centers. When questioned about the impracticality of their forecasts, they predict the AIs will design and implement the solutions to every problem with AIs, despite nothing like this having been demonstrated.

My problems with technology really have less to do with the technology and more to do with investing all of global GDP into something that is not acknowledging any limitations except the availability of computational power.

This is all being implemented by people working in software who don’t understand mining, or trucking, or manufacturing, or a thousand different industries that underlie their ability to sit in an air-conditioned office and play foosball while disrupting every industry that isn’t them.

We are constantly seeing stories about how early adopters keep trying AI chatbots to replace humans in the lowest level customer service jobs (drive throughs, flight queries) and it’s not working well enough to get rid of the pajeets yet- so why are they sucking up All The Money?

I mean, I know why, but this article is written by an experienced investor who is advising caution to the future bagholders and his comments are solid.

We don’t know how it’s going to turn out, but I guarantee there’s absolutely the equivalent of a fuck ton of Pets.coms spinning up with “TBD” in their business plans getting firehosed with cash at the moment.
 
We are constantly seeing stories about how early adopters keep trying AI chatbots to replace humans in the lowest level customer service jobs (drive throughs, flight queries) and it’s not working well enough to get rid of the pajeets yet- so why are they sucking up All The Money?
Well, I'm not an investor and wasn't even alive when you were self-educating, so my experience and education aren't equal to yours, but it's about the long term. It's definitely not there yet, but in 10 years there won't be anything AI isn't involved with, short of physical labor (until robotics improves, which is extremely slow in comparison).

All you need to do is look at where it was 3-5 years ago and where it is now, and realize it's rapidly advancing. Even the technologically illiterate such as myself can tell; no wonder these computer nerds sense a gold rush. It's only people who haven't even looked into the subject or tested it themselves who are doubtful.
 
All you need to do is look at where it was 3-5 years ago and where it is now, and realize it's rapidly advancing. Even the technologically illiterate such as myself can tell; no wonder these computer nerds sense a gold rush. It's only people who haven't even looked into the subject or tested it themselves who are doubtful.
Not necessarily. William Gibson once wrote: "The future is already here - it's just not evenly distributed." We are most certainly seeing a sudden uptick in how prevalent AI is, and if that's what you meant then no argument. But have we seen it change massively in terms of what is possible in the last 3-5 years? I'm not so sure.

Barring some refinement, the lightbulbs made by Thomas Edison existed only in a few places in 1879 and were all over the place by 1890, but they were not wildly different in what they were.

My reason for making this distinction is that it is very relevant when we start talking about advances in AI. The future becoming more widely distributed is not the same thing as the technology rapidly advancing. I'm not saying it isn't advancing - of course it is, but these trillion dollar investments, this mass commercialisation... don't mistake that for advances in the technology itself. We will still be building golems for a long time before we are building terminators (i.e. tools vs. independent actors).
 
But have we seen it change massively in terms of what is possible in the last 3-5 years? I'm not so sure.
You couldn't hold a believable conversation with AI 5 years ago; now it's close to passing the Turing test. It couldn't make decent music like it can now. Have you tried it? There's even AI video coming along well from what I've seen.
 
You couldn't hold a believable conversation with AI 5 years ago; now it's close to passing the Turing test. It couldn't make decent music like it can now. Have you tried it? There's even AI video coming along well from what I've seen.
Technically, it’ll be difficult for an AI to pass the Turing test for a while because you can always hit it with the N word or F slur gambit.
 