Opinion The End of Software - "Majoring in computer science today will be like majoring in journalism in the late 90’s"

Link 1 | Link 2 | Archive 1 | Archive 2

To understand how software will change, we can benefit from studying how technology has changed other industries. History tends to rhyme, if you listen.

Before the internet, media behaved very differently—it was expensive to create. You had to pay people to make content, edit it, and distribute it. Because content was expensive to create, it had to make money. And consumers paid—newspapers, magazines, books, cable, and pay-per-view. Warren Buffett famously loved newspapers—and who wouldn’t love a predictable subscription business with local monopolistic dynamics?

When the internet happened, media companies viewed it as a way to reach broader audiences and reduce their distribution costs. But what no one saw coming was that the internet not only reduced distribution costs to zero, it also drove the cost of creating content to zero. User-generated content flourished, and when content costs nothing to create, it no longer has to make money. How does content behave when it no longer has to make money? The relaxation of this economic constraint led to a Cambrian explosion: you can take a picture of a cup of coffee and post it, and whether it draws a million views or none at all, the market-clearing price is still met. This produced a deluge of content that none of us could reasonably consume, which necessitated products to direct attention, merchandise this content, and route us effectively. We understand these now as user-generated content platforms.

These platforms completely T-boned media companies. As a media company, you were competing for the same user attention, but with strictly higher COGS. The more people you had on your payroll creating content, the more exposed you were to being flanked by user-generated content platforms. Structurally, investing in media has been a losing proposition ever since, and value creation has shifted entirely to the platforms that control distribution.

Software is expensive to create. You have to pay people to create it, maintain it, and distribute it. Because software is expensive to create, it has to make money. And we pay for it: software licenses, SaaS, per-seat pricing, and so on. Software margins have historically been the envy of every other industry: 90+% gross margins and zero marginal cost of distribution.

Software is expensive because developers are expensive. They are skilled translators–they translate human language into computer language and vice-versa. LLMs have proven themselves to be remarkably efficient at this and will drive the cost of creating software to zero. What happens when software no longer has to make money? We will experience a Cambrian explosion of software, the same way we did with content.

Vogue wasn’t replaced by another fashion media company, it was replaced by 10,000 influencers. Salesforce will not be replaced by another monolithic CRM. It will be replaced by a constellation of things that dynamically serve the same intent and pain points. Software companies will be replaced the same way media companies were, giving rise to a new set of platforms that control distribution.

SaaS, ARR, magic numbers–these are all shorthand to understand the old model of business building in software, one where the expense associated with creating software was a moat. The invisible hand has been stayed in software for a long time, but LLMs will usher in its swift, familiar corrective force. Majoring in computer science today will be like majoring in journalism in the late 90’s.
 
LLMs don't write code. They only interpolate and extrapolate from whatever in their training set got compressed into their model. Essentially they're just plagiarizing from Github and Stack Exchange and scrambling up the result a bit, with zero capability of verifying that the resulting code does what you want it to do (since they don't actually understand questions, they just convert questions to tokens and feed the tokens into their generator). It really is a Mechanical Pajeet.
 

You aren't wrong, but they will partly fix the verification problem by running the code portions of the response through a validator/compiler and generating a new response if it doesn't validate (I believe this is already in the works for both GPT and Copilot). This is essentially what I do when I use it anyway... and God help people who trust GPT and aren't in a typed/compiled language. I can only imagine the kind of messes it would generate in PHP 5.0 or whatever.
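A minimal sketch of that validate-and-retry loop, assuming a hypothetical `generate(prompt)` wrapper around whatever model is being called (the real GPT/Copilot pipelines aren't public, so this is only the shape of the idea):

```python
import ast

def generate_valid_python(generate, prompt, max_attempts=3):
    """Call a hypothetical LLM wrapper `generate(prompt)` repeatedly,
    keeping only responses that at least parse as Python.

    Parsing proves syntax, not correctness; the accepted code can
    still do the wrong thing, which is the commenter's point.
    """
    for _ in range(max_attempts):
        candidate = generate(prompt)
        try:
            ast.parse(candidate)  # syntax check only; nothing is executed
            return candidate
        except SyntaxError:
            continue  # discard the response and regenerate
    return None  # give up after max_attempts bad responses
```

A compiled language gives you this check for free at build time, which is why the "typed/compiled" caveat above matters.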
 
I'll ignore Free Software in this comment, but that's something else of which the author is clearly ignorant.

There are already systems in place to automate programming, and these are called metaprogramming systems. Believe me when I tell you that one man alone could replace the entire programming staffs of many large corporations. I believe Google employs over one hundred thousand people and, while most of them aren't programmers, even a company like Google could get away with far fewer than one thousand programmers in its employ.

Now, the question is why don't companies do this? As an acquaintance of mine put it, businesses don't want efficient employees, they want fungible employees. If a business employs a single Lisp programmer who maintains the entire infrastructure in tens of thousands rather than tens of millions of lines of code, he can't be fired willy-nilly and he can't be treated like shit. Businesses would much rather have one thousand employees they can treat like shit and fire on a whim.

Keep this in mind when reading future bullshit about programming employment.
Yep. Also, there is not an executive in the world who thinks "the computer fucked it up" is an acceptable answer when investors ask why their product launch was broken to all shit. Keeping programmers on, if only as scapegoats when shit goes awry, is risk management. Imagine if a rogue ML trading algorithm eviscerated Wall Street due to poorly managed risk. The SEC is gonna crucify the person who made the decision to deploy it. Executives don't want to be that guy.
 
The one thing LLMs do seem to be remarkably good at is plucking information out of unpredictably unstructured data and transforming it into neat, normalised returns. A project I'm working on uses an LLM in this way. It doesn't write code or do anything creative, it just replaces the dozen or so people we'd have previously had to hire to normalise data taken from public web crawls. In either case there would be the same level of verification required and it produces broadly similar results. Automating that sort of low-level drudgework seems to be where they will ultimately find their niche.

But not writing code. Not even boilerplate. I guarantee they will always fuck that up.
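The normalisation workflow described above boils down to validating every model response before it enters the dataset. A minimal sketch, with made-up field names and a hypothetical raw response string from whatever extraction prompt is in use:

```python
import json

# Illustrative schema: the fields a crawl-extraction prompt is asked for.
REQUIRED = {"name": str, "city": str, "year": int}

def normalise(raw_response):
    """Validate one LLM extraction response before accepting it.

    Returns the parsed record, or None if the response is malformed;
    the same reject-and-review step you'd apply to a human data-entry
    worker's output, as the post notes.
    """
    try:
        record = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model didn't even return JSON
    for key, typ in REQUIRED.items():
        if not isinstance(record.get(key), typ):
            return None  # missing field or wrong type
    return record
```

The verification burden stays the same either way; only the cost of producing the first draft changes.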
 
There are unsolvable problems here. You can't stop an LLM from "hallucinating."

Also, the reason they say "LLM" instead of "AI" is that it's becoming very clear that LLMs aren't actually intelligent. You probably wouldn't want to hire a person who acted like an LLM does.

They're not going to stop making up software packages. And because they can't stop making up packages, it creates fertile ground for the next generation of hackers to make those packages exist and contain malicious code. It's going to be a carnival for stupid development practices.

Hiring is starting to happen again.

One of the problems with LLM outputs is that you start to see the cracks in the procedural generation very early.

I was showing a good use case for it to my kids the other day, and used it to make a little bedtime fairy tale. We gave it some inputs and told it to make a story. It did, and the story was boring, with a very unmotivated shift to the ending, but fine: like the output of a talented 9-year-old. Then we told it to make a fairy tale with totally different inputs: a new general theme, new character types. 80% of what it came up with was exactly the same. It was a lot more impressive when it made the first one than when it made the second. Realizing it's a good Mad Libs machine disappointed the kids, but it's going to disappoint some CEOs and VCs a lot more, and I will laugh.

Exactly. LLMs are cool once... then you figure out they're pretty much one-trick ponies that fall apart when you ask for something they can't do.

Vogue is a more important voice than some dipshit snorting cinnamon on TikTok. The top 1% of influencers might have a rival voice, but they often fall off rapidly because, quite honestly, they are just not that interesting once you see them enough. They are always cloying. They are desperate for views and attention because advertising is kind of shit for influencers. That is why Chase Bank still advertises with Vogue but Logan Paul does not have their logo tattooed on his face.

Also, what top-tier fashion events, or any events for that matter, are going to let those 10,000 influencers in? Who is going to give an interview to 10,000 influencers? Is every designer or important figure going to fire up a stream? No. They are going to talk to the top outlets, and then the influencers are going to munch up that data and spit out their interpretation of it.

Still, I get what he is saying, but I feel like he is not getting the scenario correct. Newspapers were diminished by radio; radio by TV; and TV, magazines, and general sanity and decency by the Internet.
Correct. Most "influencers" are ultra-small fry compared to the surviving magazines and old-fashioned tastemakers.

Only the BIGGEST (TOP 1-5%) influencers make big money and they usually got big via acting, porn, or similar.
 
Yep. Also, there is not an executive in the world who thinks "the computer fucked it up" is an acceptable answer when investors ask why their product launch was broken to all shit. Keeping programmers on, if only as scapegoats when shit goes awry, is risk management. Imagine if a rogue ML trading algorithm eviscerated Wall Street due to poorly managed risk. The SEC is gonna crucify the person who made the decision to deploy it. Executives don't want to be that guy.
You haven't talked to many executives, have you?

Look at the state of gaming; can you name a modern AAA game that is more than an MVP with 100 bugs?
 
My experience of using LLM today:
Q: I have a problem. I can do A or B, but both are non-optimal. Is there a C?
A: You could do A.
Q: But A is not optimal.
A: You're right, but you can do B.
Q: But B is not optimal.
A: You're right, but you can do A.

Downgrading the model actually gave me a decent solution but I had to implement it in a way that didn't suck balls.
 
So what you are saying is that AI is already better than the typical helpdesk?
 
After 20 years of software development, I'm thinking about finding a different job. It was fun while it lasted, but it's been a rough year, and all the recruiters are saying software hiring is down 60+% this year.
H-1Bs are legal slavery destroying the job market for CS. Why would you pay a domestic software engineer a fair wage when you can import (shit) labor for pennies on the dollar?
And yeah, my last job was for a huge corporation; my immediate bosses were all Indians, and they even hired a fucking Canadian as a junior on my team. If we kept work in this country it would be different, but why pay me $150k a year and give me all kinds of benefits when you can get some Indian to do it for half that? I have been working on getting a worker visa for a country like Japan lately so I can get out of America for a while. All these fucking foreigners... maybe I'll feel different if I'm a foreigner.
 
I've tried using AI to write code/configs. Every single response it's given has had at least one major problem, and in some cases been completely wrong in almost every word.

I ain't too worried.

They'll keep software devs for anything they actually depend on; they'll just use AI for writing anything you use. And if it has a bug? Welcome to our AI chatbot, which won't fix it either. You want to quit our service? OK, our competition is at xyz.com and they do the exact same thing.

This is why everything is turning into a subscription. The payment systems will be designed by actual software engineers, but the end product will be AI, paid by subscription.


"AHH but it will all come crashing down when the errors build up!"
Yep, but that's 15 financial quarters down the line, and anyone involved with it will have cashed in and fucked off by then. And your everyday customer will always choose cheaper jam today.
 
You haven't talked to many executives, have you?

Look at the state of gaming; can you name a modern AAA game that is more than an MVP with 100 bugs?
They don't need anybody to blame in gaming, unless they miss sales numbers. 100 bugs, 1000000 bugs doesn't matter so long as the gamers buy that shit up. Look at the state of the Madden games, it's a garbage series that I haven't actually heard anything good about for years. The whole thing is pretty much just a vehicle for selling RNG packs as a "competitive" multiplayer mode. Despite that, every year football fans go and buy it since it scratches that itch, so investors are pretty happy overall.
 
So what you are saying is that AI is already better than the typical helpdesk?
Yeah, it kicks ass at that; low-value jobs like call centers and front offices are done for.

Prepare for a hell of LLM-driven first-line support for basically everything. Makes a lot of money, though!
 
Even stuff like aeronautical engineering is going towards assorted mudpeople now, because the number must always go up. I would never board a jet airliner powered by anything other than Rolls-Royce, GE, or CFM engines, but in 20 years' time we might see subhuman-designed and -built critical parts. Boeing deathtraps are already falling apart because of cost-cutting, and they will face no real consequences for it.
Yet it's Boeing that's falling apart now, while the so-called mudpeople send missions to space for less than the cost of a Hollywood movie. The people hired need to have some skin in the game, regardless of the color of that skin.
 
The one thing LLMs do seem to be remarkably good at is plucking information out of unpredictably unstructured data and transforming it into neat, normalised returns. A project I'm working on uses an LLM in this way. It doesn't write code or do anything creative, it just replaces the dozen or so people we'd have previously had to hire to normalise data taken from public web crawls. In either case there would be the same level of verification required and it produces broadly similar results. Automating that sort of low-level drudgework seems to be where they will ultimately find their niche.

But not writing code. Not even boilerplate. I guarantee they will always fuck that up.
This has me thunkening: can/will/are LLMs making inroads on the endless analysis jobs? Data analysis, quantitative analysis, information-technology analysis, data processing, all that type of stuff? I haven't heard of it so far, but then pajeets don't seem to have made inroads in those areas either, which puzzles me.
 