Business AI is effectively ‘useless’—and it’s created a ‘fake it till you make it’ bubble that could end in disaster, veteran market watcher warns


There’s no avoiding the hype surrounding AI these days. Promises of new developments like personal robot assistants and miracle cancer cures are ubiquitous as executives take every opportunity to emphasize their AI chops to enthusiastic investors—and slightly less enthusiastic consumers.

Not everyone has been blown away by the AI fanfare, however. James Ferguson, founding partner of the UK-based macroeconomic research firm MacroStrategy Partnership, fears investors’ AI exuberance has created a concentrated market bubble that’s reminiscent of the dot-com era.

“These historically end badly,” Ferguson told Bloomberg's Merryn Somerset Webb in the latest episode of the Merryn Talks Money podcast. “So anyone who's sort of a bit long in the tooth and has seen this sort of thing before is tempted to believe it'll end badly.”

The veteran analyst argued that hallucinations—large language models’ (LLMs) tendency to invent facts, sources, and more—may prove a more intractable problem than initially anticipated, leading AI to have far fewer viable applications.

“AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”

Ferguson also noted AI may end up being too “energy hungry” to be a cost-effective tool for many businesses. To his point, a recent study from the Amsterdam School of Business and Economics found that AI applications alone could use as much power as the Netherlands by 2027.

“Forget Nvidia charging more and more and more for its chips, you also have to pay more and more and more to run those chips on your servers. And therefore you end up with something that is very expensive and has yet to prove anywhere really, outside of some narrow applications, that it’s paying for this,” he said.

For investors, particularly those leaning into the AI enthusiasm, Ferguson warned that the excessive tech hype based on questionable promises is very similar to the period before the dot-com crash. He noted that during both of these periods, market returns were concentrated in tech stocks that traded based on Wall Street’s sky-high earnings growth estimates.

But despite those lofty forecasts, the dominant hardware giants of the dot-com era, Cisco and Intel, have largely disappointed investors ever since. Ferguson argued today’s AI hardware hero, Nvidia, might experience a similar fate, particularly given its elevated valuation.

“What multiple of sales is Nvidia a good deal on if you think that it might only have—no matter how stratospheric the growth rate at the moment—if you think that it's probably not going to be a player in a decade's time?” he asked, implying Nvidia might not be worth the current price tag of nearly 40 times sales investors are paying.

Despite his argument that AI-linked tech stocks like Nvidia are highly overvalued, Ferguson admitted that no one can predict when a bubble will end. This dynamic leads many bearish investors to feel “compelled to play” in the markets even when stocks look pricey, according to the analyst—and that’s a great way to get hurt.

“I mean, it's certainly what was happening in the dotcom [bubble], for example, where almost anybody who wasn't a retail punter was looking at these things and saying, 'well, it can't last, but having said that, if it lasts one more quarter and I'm not playing, I'll lose my job,'” he explained.

The good news, according to Ferguson, is that because the current stock market bubble is so concentrated in AI-linked stocks, there is still value out there.

Of course, there will be widespread pain for investors if the AI bubble bursts. But after that, Ferguson recommended looking at the currently unloved U.S. small-cap stocks, which may benefit from interest rate cuts and aren’t highly valued.

“There's a lot of value to be found in the U.S. The trouble is that that value is to be found in good old fashioned ways, trawling through small caps and looking for businesses that are growing in a good old fashioned, steady way,” he said.

This story was originally featured on Fortune.com




In other news, water is still wet.
 
I say functional technological capability is directly tied to the DEI/woke competency crisis. Interplanetary travel is also theoretically possible with our current level of technology.
Hm, fair enough considering they're the gatekeepers of technology.
 
I only mention my self-education in investing because it’s useful in seeing the patterns repeating themselves over and over, and ironically interested amateurs have an advantage over professionals in a couple of ways:

  1. When a market downturn/crash/bloodbath happens, a lot of the people who gained real-life, hard-won experience in spotting scams and misallocation get fired or go broke. So institutional knowledge gets lost, and in a lot of ways the 20-40 year olds currently working in finance are unskilled labour.
  2. Amateurs aren’t required to report quarterly and thus aren’t hostage to short-term market fluctuations the way professionals are. I’ve made a lot of my money by doing nothing and riding out situations where professionals had to attempt complicated maneuvers and got themselves fucked by the timing.

So: a situation I am thinking of from the dotcom bubble is 360 Networks. They correctly identified in the late 90s that we were going to need a lot of broadband for all the internet that was coming. They spent billions of dollars overbuilding capacity, but the subscribers weren’t there when the bills came due, and they went bust. Long term: correct assessment of technology needs + short term: not enough revenue to justify capex = bankruptcy. Eventually the fibre got used, just not in time for the investors.

All you need to do is look at where it was 3-5 years ago and where it is now, and realize it's rapidly advancing. Even the technologically illiterate such as myself can tell; no wonder these computer nerds sense a gold rush. It's only people who haven't even looked into the subject or tested it themselves who are doubtful.
Who is going to make the money from this? Where are the trillions of dollars coming from that are going to build out these data centers?

The assumption is that the money is coming from existing companies desperate to upgrade their technology and incorporate AI into their operations.

So imagine Coca-Cola needs to somehow incorporate AI into their production facilities. They spend billions of dollars implementing it. What problem is actually being solved? Is it worth the spend? Extrapolate that across the economy. All the money to build up AI has to come from somewhere. And where will the profits go? Not Coca-Cola, by the looks of things at the moment.

I haven’t seen AI be cost effective or trustworthy against pajeets yet, the lowest bar imaginable. It may get there but timing is everything, especially when discussing the money.

In previous technology revolutions, it was obvious what the benefits were: the early adopters of steam engines, trains, oil, and combustion engines easily recouped their investments, obviously outperformed horses or whale oil or sailboats, and took over the world. In fact, the old tech became niche or disappeared within a few years.

What is the equivalent application for AI? It would have to be something dramatic enough to justify the insane capital expenditures they are discussing. “Because someone else will do it first” has historically been the best way to lose a bunch of money.

Look into the Hershey’s SAP implementation to see how these fucking nerds are not as smart as they think they are. (A)

We have jittery, nervous markets desperate to avoid a necessary market correction and seeing a possible Hail Mary that they can sell to stupid executives who never want to be left behind on trends (hello The Cloud, Web3, and Big Data) that never quite seem to deliver on the hype.
 
It's all Emperor's New Clothes syndrome. People act like these technologies are very sophisticated and simply too hard for any normie to understand, and that anyone who rejects them is a dumbdumb. So investors don't want to admit they don't get it or have reservations, because that might make them look stupid to the people who are smart enough to get it.

Insecurity plus the Abilene Paradox explains 99% of the hype machine on LLMs.
 
You couldn't hold a believable conversation with AI 5 years ago; now it's close to passing the Turing test. It couldn't make decent music like it can now. Have you tried it? There's even AI video coming along well, from what I've seen.
Have I tried it? Yes, I have been running LLMs and Stable Diffusion locally for a long time. If you follow threads on this in the Technical forums you'll probably have seen me posting suggestions and advice to people on how to do so themselves. I know something about this so please avoid the whole "have you tried it?" tone.

They are not "close to passing the Turing test". Point me at the AI chat system of your choice and I'll rule it out as a human in a few moments. And five years ago? So mid-2019? Yes, there were some very impressive chat systems even back then. My point about distinguishing between progression of technological boundaries and prevalence of technology is a valid one.

EDIT: I'm running the Nous Hermes 2 LLM on my local machine right now. I just asked it if it was close to passing the Turing test. It says no:
[Attached screenshot: turing.png]
 
Point me at the AI chat system of your choice and I'll rule it out as a human in a few moments.
A blind test would be a lot better than you going in looking to confirm a predetermined opinion. Setting it up somehow so that a person isn't sure if their chat buddy is human or not by mixing them in with real people, then needing to pick out the bot would work better.

And like I said, I think we're getting close, not necessarily that we're there yet.
 
A blind test would be a lot better than you going in looking to confirm a predetermined opinion. Setting it up somehow so that a person isn't sure if their chat buddy is human or not by mixing them in with real people, then needing to pick out the bot would work better.
Obviously a blind test would be better. So I tell you what, start feeding my replies to an AI of your choice and posting verbatim the responses here. We'll see how long it takes before anybody realises there's something off about your posts.
 
start feeding my replies to an AI of your choice and posting verbatim the responses here. We'll see how long it takes before anybody realises there's something off about your posts.
That happened word for word with ChatGPT in another thread the other day; the guy doing it was called out almost immediately.

My actual response to the topic is that what we have isn't fucking AI, it's VI (virtual intelligence, from Mass Effect), and it's fucking embarrassing seeing so many people act like it's not just a fucking robotic parrot. Hell, an actual flesh-and-blood parrot has more going on in its head, for fuck's sake.

My excitement or lack thereof regarding actual fucking Artificial Intelligence is under wraps until they can somehow conclusively prove that consciousness is a phenomenon that doesn't require wetware. To me, that's the first hurdle, if it's even possible for silicon to house a consciousness to begin with.
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.

I think the true situation is a little worse than you are imagining. Unless the AI is designed to query a database for some sort of information to tie it to reality, what it is doing doesn't actually refer to any information about the world at all.

LLMs work by placing one word after another. Which word it picks to place after the current word is a complicated function of all the previous words that it placed.

word_n = LLM( {previous words} )

The training data is basically all the text on the internet, and what they do is adjust the weights in the model until it can replicate the word order in its training data. (Not exactly: it's "lossy compression", and that is also part of its power. It won't regurgitate things exactly, but what it has learned from the vast gobs of text allows it to pick plausible continuations of passages that "look like" internet data.)
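To make that concrete, here's a toy sketch in plain Python (made-up training text, simple word-pair counts standing in for a neural network, so it illustrates the generation loop rather than how a real LLM is actually built):

```python
import random
from collections import defaultdict, Counter

# Toy "training data" standing in for the internet-scale text a real LLM is trained on.
training_text = (
    "bolivia produces many chickens . bolivia exports chickens . "
    "chickens are birds . bolivia is a country ."
).split()

# Count which word follows which (a bigram model; a real LLM conditions on far more
# context and uses learned weights, but the generation loop below is the same idea).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in training."""
    counts = follow_counts[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text one word at a time: word_n = model(previous words).
word = "bolivia"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
# Prints something plausible-looking about Bolivia and chickens. Whether it is *true*
# is entirely an accident of what happened to be in the training text.
```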

(In other words, it's all "hallucination", even the stuff that seems right, and it's only tied to reality because in the training data anywhere someone was writing about Bolivia and chickens in the same article, they included some factual information.)

The LLMs make many mistakes about the world, which get corrected by RLHF (reinforcement learning from human feedback): basically a lot of people applying corrections to the model based on its interactions, all the time. (That they are mistakes is not inherent in the training text, but in what the text is *about*.) There are things about the world that, in principle, you would expect an LLM (even if it were an actual conscious supergenius agent, which it is not) to never be able to naturally learn about our world. The LLMs have never "experienced" our world. They have only ever experienced text about our world. (The symbol grounding problem.)
 
You would sort of expect a "true AI" to be working off of a model of the world itself. What objects are, what their properties are, how physics works, etc. It's what we do when we write text to other humans. It's what we mean when we expect words to mean things. The words are just a (painfully limited) medium to convey bits of our world model to someone else who can add it to their own world model.

LLMs do not have much of a world model. There are faint impressions of things that could be the barest beginnings of world models in restricted situations where you are having them play simple games, but that is not what's going on inside in most cases. When an LLM writes about Bolivian chickens, it doesn't know what Bolivia is, or what a chicken is. It knows that the word chicken and the word Bolivia occur next to all these other words in the word soup it was trained on. They write words not from a world model, but from a text model.

People complain about LLMs not being truthful (or complain about them writing words they don't like, and recoil like vampires from truth they don't want to acknowledge). But the LLM doesn't know what truth or falsehood *is*, because it doesn't have a grounded model of the world it is writing about.
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.

Purpose-built machine learning applications are very focused on curating and normalizing the data itself, in part to avoid these problems. If I am building a purse detector, all the purses are going to be from the same angles, against the same background, converted to silhouettes... and when it gives results it's going to show you an accuracy score so you know when it is wildly guessing.
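A hand-wavy sketch of that in plain Python (made-up silhouette measurements and a nearest-centroid comparison instead of a real vision model; the point is the normalized features and the score that flags a wild guess):

```python
# Toy "purse detector": every image has already been normalized to the same angle and
# background and reduced to two made-up silhouette measurements: (roundness, strap_ratio).
PURSE_EXAMPLES = [(0.90, 0.40), (0.80, 0.50), (0.85, 0.45)]
NOT_PURSE_EXAMPLES = [(0.20, 0.90), (0.30, 0.80), (0.25, 0.85)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

PURSE_CENTRE = centroid(PURSE_EXAMPLES)
NOT_PURSE_CENTRE = centroid(NOT_PURSE_EXAMPLES)

def classify(features):
    """Return a label plus a score so the caller knows when the model is wildly guessing."""
    d_purse = distance(features, PURSE_CENTRE)
    d_other = distance(features, NOT_PURSE_CENTRE)
    label = "purse" if d_purse < d_other else "not a purse"
    score = abs(d_purse - d_other) / (d_purse + d_other)  # 0 = coin flip, 1 = certain
    return label, round(score, 2)

print(classify((0.88, 0.42)))  # clearly purse-shaped -> high score
print(classify((0.55, 0.65)))  # ambiguous silhouette -> score near zero, i.e. a wild guess
```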

As for GPT-style chat bots, however, they get all their behavior from some instructional math interpreters and their data from the internet in general. They are fed weighted lists of internet scrapings as input, so essentially GPT is only as smart as the average redditor, or the last thing it read on Stack Overflow or Wikipedia. It's taking things people have written, shredding them up, mashing them together and running the result through a sentiment layer to make sure it doesn't say anything too mean or interesting. Also, it's not trained in real time; the data set is updated periodically and the bot is reconstituted from it, so the LLM is not going to know about current events.

So, in short, language models regurgitate shit they have read on the internet, and when the internet is wrong, confused or ill-informed, so (potentially) is the AI... and the more obscure the topic, the more potential for said confusion. So when you ask about 'Bolivian chicken production' and all it finds is an article about chicken imports priced in bolivars, you are going to get some fucked-up nonsense.
 
The problem with AI is you shouldn't include AI generated data to train AI as the recursion eventually causes model collapse. Now that people are passing off AI generated content as real human generated content, the web scrapers will incorporate that into the next generation of data for training the AI.

Search for a topic on the web and very frequently the top results will be articles on content farms: walls of text that seemingly answer your question but read like 5 different drafts of the same article, with the same information restated multiple ways one right after the other, often with some details completely contradicted in the next iteration. The article is obviously AI generated, but given a fictional author. All this nonsense is going to get scraped and used to train the AI that writes articles for the next content farm. The cycle repeats as the recursion continues.

If AI output contained some sort of mandatory fingerprint that would automatically block it from inclusion in training data, the cycle of recursion would be broken. In practice this will never happen, as those who use AI to ghostwrite are intending to deceive. The key feature of AI is that people will accept very low-quality and inaccurate content as long as it is nearly free and you don't even have to pay Bangalore-grade wages to get it. Such shit-tier AI-generated content is already starting to take over the web.
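To see why the recursion bites, here's a toy simulation (a fitted Gaussian standing in for a language model, which is a drastic simplification, but the feedback loop is the same): each generation is trained only on what the previous generation produced, and the range of what it can produce quietly shrinks.

```python
import random
import statistics

# Toy "model collapse": the "model" is just a Gaussian fitted to data, and each new
# generation is trained only on samples produced by the previous model (standing in
# for scraping AI-generated articles to train the next AI).
SAMPLES_PER_GENERATION = 10  # deliberately small: no generation sees all of the last one

data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES_PER_GENERATION)]  # gen 0: "human" data

for generation in range(1, 201):
    mu = statistics.mean(data)      # "train" on the current data
    sigma = statistics.stdev(data)
    # The next generation's training set is the current model's output, not fresh human data.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GENERATION)]
    if generation % 25 == 0:
        print(f"generation {generation:3d}: spread of the model's output = {sigma:.4f}")

# The spread shrinks over the generations: a finite sample never captures the tails of
# the previous model, so rare and unusual content gets progressively squeezed out.
```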
 
But that kinda stuff is due to filters deliberately implemented by woke developers, not reflective of technological capacity.
Technology is only as functional as the creature using it intends for it to be. The same people who say things like "AI is the future" have typically been assailed on a spiritual level by more cunning individuals who recognize human beings have glaring psychological weaknesses. If you successfully convince normal people that a single word is the embodiment of all evil, and they never use it, and the AI is trained based off this 'knowledge', then the technology will be useless when it comes to that word. The Nigger Gambit is a real phenomenon.
The problem with AI is you shouldn't include AI generated data to train AI as the recursion eventually causes model collapse. Now that people are passing off AI generated content as real human generated content, the web scrapers will incorporate that into the next generation of data for training the AI.
Unfortunately I don't believe this consolidation is accidental either. It benefits small pockets of manipulative people if art is functionally killed (by devaluing it through AI recursion) the same way socialist policy benefits them. Concentrate something enough and insist "this is how it should be" to enough uneducated or unthinking people, who will reinforce that, and you've just conquered the world.
 
The words are just a (painfully limited) medium to convey bits of our world model to someone else who can add it to their own world model.
I remember on one episode of Huberman Lab he had on a German expert who had done some kind of test that conclusively proved that we all see colors slightly differently, but close enough to the same shade that we all approximately know what we're talking about when we say "yellow" or "green".

I like bringing it up when I can because I think it's cool, but it also has a profound implication for just how differently the world is experienced by each person.
 
How do the hallucinations work? Where does it get the data from to produce them?
So say I ask for a report on how many chickens Bolivia produces per month. The LLM scours its database and gives me the answer based on finding a few reports in there about chicken production and summarising them. I get that. But let’s say it just spazzes out and tells me nine chickens, and that’s completely wrong. A hallucination.
Where does that come from? Is it picking up incorrect data from its database or is it making it up? If it’s making it up, how does it do that?
Apologies if this is a dumb question.
There's no database; it works kind of like human memory. Imagine a person who doesn't know Latin but has been constantly shown question-and-answer pairs in Latin. After seeing millions of them (pretend he doesn't get annoyed or exhausted first), given a question in Latin, he can guess at what an answer might look like, while having no understanding of what any of that Latin actually means. He just guesses what the answer text should look like, given the questions and answers he's seen before.
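Here's that analogy as a few lines of Python (dog-Latin toy data and crude word overlap instead of a real model, but the "guess what the answer should look like" part is the same idea):

```python
# Answer new questions purely by surface similarity to question/answer pairs seen
# before, with zero understanding of what any of the words mean.
SEEN_PAIRS = {
    "quot gallinas bolivia producit": "bolivia multas gallinas producit",
    "quot oves peru producit": "peru paucas oves producit",
    "quid est caelum": "caelum est caeruleum",
}

def word_overlap(a, b):
    """How many words two questions share: a crude stand-in for 'looks similar'."""
    return len(set(a.split()) & set(b.split()))

def guess_answer(new_question):
    # Pick the previously seen question that *looks* most like the new one and reuse
    # its answer. No meaning is involved anywhere in this process.
    best = max(SEEN_PAIRS, key=lambda q: word_overlap(q, new_question))
    return SEEN_PAIRS[best]

print(guess_answer("quot gallinas peru producit"))
# Confidently returns the answer about *Bolivia's* chickens for a question about Peru:
# fluent-looking, plausibly shaped, and wrong. A hallucination in miniature.
```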

Our current AI/ML works like this. It has no understanding of the meaning behind anything; it just gets trained on inputs and outputs. Then, based solely on those input/output pairs, it guesses what an output should look like given the new input. This is great for certain tasks, like art or anything involving "fuzzy matching", because the meaning doesn't have to matter; it just has to go "based on what you gave me and what I've seen before, you should get something like this." This is why it's great for certain tasks in the medical field: show it a photo of something, and based on all the healthy and unhealthy labeled photos of that body part it's seen, it can guess pretty accurately whether a photo looks closer to healthy or to unhealthy, despite not knowing what those terms actually mean. It just knows "this photo looks more like the photos I saw before in the healthy category."

It's not great for conversations, because conversations have to have an inherent meaning. You can spit out words, but to have a real conversation you have to have an understanding of the concepts those words and sentence structures represent. The machine still doesn't have that.
 