‘Everybody’s Replaceable’: The New Ways Bosses Talk About Workers - Step it up, stop complaining—and make way for AI. CEOs are no longer lauding employees as the talent.


Illustration: Daisy Korpics/WSJ, iStock, Pixelsquid

By Chip Cutter
May 11, 2025 11:00 pm ET

Corporate America’s long-running war for talent sounds more like a war on the talent these days.

Not long ago, bosses routinely praised workers as their most prized asset, so much that some hoarded new hires before having enough for them to do. Today, with a giant question mark hanging over the economy, executives are pulling no punches in saying employees need to work harder, complain less and be glad they still have jobs.

“Work-life balance is your problem,” Emma Grede, co-founder of the shapewear company Skims and CEO of clothing label Good American, said this month. After recently cutting more than 1,000 jobs, Starbucks CEO Brian Niccol said remaining corporate staff needed to step it up and “own whether or not this place grows.” JPMorgan CEO Jamie Dimon, in a profanity-laced internal meeting, told employees lamenting a return-to-work mandate that he didn’t care.

“I’ve had it with this kind of stuff,” he said. “I’ve been working seven days a week since Covid, and I come in, and—where is everybody else?”

The shift in tone marks a shift in power now that companies are shrinking their white-collar staff. With jobs harder to find, many workers are seeing perks disappear and their grievances ignored.

JPMorgan CEO Jamie Dimon. Photo: Al Drago/Bloomberg News

The latest episode happened at a contentious all-hands at Uber last month. The company had just changed the requirements to get a monthlong paid sabbatical to eight years of working at the ride-hail giant, from five years. A decision to require people to work at least three instead of two days in the office also drew complaints. CEO Dara Khosrowshahi suggested those unhappy with the changes deal with it.

“We recognize some of these changes are going to be unpopular,” he said in comments originally reported by CNBC. “This is a risk we decided to take.”

How much more license do bosses have to talk tough to staff? Take the outrage in 2023 when the head of furniture company MillerKnoll told staffers worried about bonuses to “leave pity city.” That comment, made in a video call, immediately went viral, sparking days of headlines and worker backlash. CEO Andi Owen quickly apologized, and said her comments were insensitive.

After the Uber town hall, on the other hand, Chief People Officer Nikki Krishnamurthy issued a memo saying the company would speak with some staff for being disrespectful in voicing their displeasure.

Workers like Donnie Donselman, who recently worked for a technology-services firm, can sense the new power dynamic. As he applies for new tech jobs, the 47-year-old has noticed that many companies now want applicants to do so many tasks that a position is essentially “three jobs” in one.

Uber CEO Dara Khosrowshahi recently said some of the changes implemented by the company will be unpopular with employees. Photo: Kent Nishimura/Bloomberg News

“They want it all,” he said.

In his job search, he tries to suss out the culture of a company because he has noticed the tough-talk language from CEOs and finds it worrisome. “All you’re doing is putting fear in people, and you’re not going to get good results from that,” said Donselman, who lives near Lexington, Ky.

Behind CEOs’ more brusque tone lies a disconnect between employees and executives, said Michael McCutcheon, an adjunct professor in applied psychology at New York University and an executive coach.

Some employees are operating like it is “still 2021,” when they could name their demands because of labor shortages and a surge in worker resignations, he said. Now bosses face a global trade war and sinking consumer confidence and feel they must ask more of employees to survive.

“This is a matter of pragmatism,” McCutcheon said.

President Trump and his billionaire adviser Elon Musk have helped set the more-aggressive tone in their bid to slash the federal workforce.

“Everybody’s replaceable,” as Trump put it shortly after the inauguration. Musk called his February demand that federal workers email what they accomplished the past week a “pulse check” to prove they did any work.

Tobias Lütke told Shopify employees that the company won’t make new hires unless managers can prove AI isn’t capable of doing the job. Photo: Dustin Chambers/Bloomberg News

Advances in generative AI also play a role. Shopify CEO Tobi Lütke recently told employees that the e-commerce company won’t make new hires unless managers can prove AI isn’t capable of doing the job. Other business leaders are warning their staff to adopt more AI—or else.

“AI is coming for your jobs. Heck, it’s coming for my job too. This is a wake-up call,” Micha Kaufman, CEO of the freelance marketplace Fiverr, wrote in a staff memo last month. Those “who will not wake up and understand the new reality fast are, unfortunately, doomed.”

Employees will someday have their moment in the sun again, said Charles A. O’Reilly, a professor of management at Stanford.

“When the market turns around, and job opportunities are plentiful, then CEOs will start to talk more about how important employees are, and employees will take advantage of it,” he said.

For now, though, some executives say fewer, not more, corporate staff will help them run more efficiently. On Thursday, Match Group, which runs dating apps Hinge and Tinder, became the latest company to say it planned to thin its managerial ranks in sweeping layoffs. About one in five managers will be cut, and Match’s CEO, Spencer Rascoff, told investors the company is stepping up efforts to cut costs and rewire the organization to focus on its products.

“We lit a fire under the team here,” Rascoff said.

 
Who will pay for the UBI? That's the question they never seem to answer.
There will still be people to tax that make income through jobs that AI can’t replace, but the cattle that subsisted on white collar computer work will be on the new age welfare. Plus, governments will happily pay for retards to stay retarded by paying them just enough to eat and watch goyslop. Look at welfare communities today to see this.
 
As someone who uses AI to assist with code, it's not very good at coding half the time. It's a good tool though.
It's basically just a glorified autocomplete or search function. It's trained on github and other massive code repositories. No shock that it can do basic shit in python correctly. I'd be more surprised if it couldn't. Most basic code problems are pretty formulaic. It's dealing with complex runtime logic and software design where AI will inevitably fail. It doesn't actually know anything. It's just responding to prompts with various supporting inputs and outputting what it predicts would be the most likely outcome if a human did it. This has its uses but it's terrible for optimizing anything. I've rewritten the crap code it spits out with my own algorithms and been amused by the speed difference. Don't get me started on complex GUI work or handling poorly maintained or obscure libraries. It would sooner hardcode values than look them up in a published dictionary.
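The speed gap the post describes is easy to reproduce. Below is a hypothetical sketch, not any actual AI output: a quadratic duplicate-finder of the shape models often emit, next to a linear set-based rewrite.

```python
# Hypothetical example of the kind of O(n^2) code an LLM might emit,
# versus a straightforward O(n) rewrite using a set.
import time

def dupes_naive(items):
    # Quadratic: compares every pair, then rescans the result list
    out = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b and a not in out:
                out.append(a)
    return out

def dupes_fast(items):
    # Linear: one pass, constant-time membership checks
    seen, out = set(), set()
    for x in items:
        if x in seen:
            out.add(x)
        seen.add(x)
    return out

data = list(range(800)) * 2  # every value appears twice

t0 = time.perf_counter(); naive = dupes_naive(data); t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); fast = dupes_fast(data); t_fast = time.perf_counter() - t0

print(sorted(naive) == sorted(fast))  # same answer either way
print(t_naive > t_fast)               # the naive version is far slower
```

Both functions return the same duplicates; only the asymptotics differ, which is exactly the kind of thing a statistical code generator has no reason to care about.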
 
The eventual endgame in this screwed up future is to be self-sufficient and not be too dependent on the state. Now if we are going with 2077 logic, they'll release a virus that fucks over chickens so the plebs are less upset about eating bugs in the future and to promote foul slop pretending to be food fittingly called "Scop".

In IRL terms though, they'll likely declare another bird pandemic and kill off the chickens and outlaw raising said birds. Just like Bongistan.
 
It's basically just a glorified autocomplete or search function. It's trained on github and other massive code repositories. No shock that it can do basic shit in python correctly. I'd be more surprised if it couldn't. Most basic code problems are pretty formulaic. It's dealing with complex runtime logic and software design where AI will inevitably fail. It doesn't actually know anything. It's just responding to prompts with various supporting inputs and outputting what it predicts would be the most likely outcome if a human did it. This has its uses but it's terrible for optimizing anything. I've rewritten the crap code it spits out with my own algorithms and been amused by the speed difference. Don't get me started on complex GUI work or handling poorly maintained or obscure libraries. It would sooner hardcode values than look them up in a published dictionary.
Exactly, people who are scared of losing their job to AI must not have a very intelligent job in the first place.
 
AI isn’t replacing anyone useful for a while in the main. It does hit some skilled people though
Even the things it is touted for it’s not great at. The places where it can just replace people completely are niche.
Example: medical and technical translation. Let’s fire everyone! Except no, now the quality is crap and dangerously so. It can do a lot of the bulk work but you still need someone to check it because it doesn’t handle any kind of idiom or complicated stuff well at all and especially medical things.
Example: you go for a mammogram. The receptionist is a human. She keeps her job. The lady actually taking the pictures needs to position you on the plates correctly and AI can’t do that because every body is different.
The girl cleaning the toilets and replenishing the coffee machine keeps her job. Who loses theirs? The skilled radiologist who reads the scans will - or a lot of them will because that is something the machine learning IS good at. They will keep enough on staff for a human second look and until liability issues are more worked out.
If you have a job that deals with anything complex and unpredictable, your job is fine. If you interact with real world objects like machinery or mops or humans that need injections you’re fine.
If your job can be automated, you may be in trouble.
 
The girl cleaning the toilets and replenishing the coffee machine keeps her job. Who loses theirs? The skilled radiologist who reads the scans will - or a lot of them will because that is something the machine learning IS good at. They will keep enough on staff for a human second look and until liability issues are more worked out.
If you have a job that deals with anything complex and unpredictable, your job is fine.
Do you see the contradiction here? Your post is essentially demonstrating that AI will replace useful people, those who are specialized in information jobs, which btw our entire economy was reoriented towards for the last 40 years.
 
There will still be people to tax that make income through jobs that AI can’t replace, but the cattle that subsisted on white collar computer work will be on the new age welfare. Plus, governments will happily pay for retards to stay retarded by paying them just enough to eat and watch goyslop. Look at welfare communities today to see this.
Governments will because those retards will keep voting them in power. Here's the problem: corporations are gearing up to take control, and they would sell their fucking mother if it made their quarterly profits better. Do you really think they will put up with all those welfare niggers when they take control?
 
My experience with AI programming is that it can do simple tasks and regurgitate some of the useful stuff like search engines used to a decade ago. Anything beyond that makes it shit and piss itself. I've had LLMs steer me wrong on multiple occasions, to the point where I stopped trusting them for all tasks except the simplest ones. I don't buy that it will suddenly get better; you could put a gun to my head and ask me about any improvements to LLM programming in the last year and I'd just tell you to pull the trigger.
We've had this stuff for like four years now, and despite all the buzz, no one seems to be able to point to any quantifiable metric of improvement that could be chalked up to implementing some AI-powered services. I refuse to believe that there's absolutely no one who would brag about this stuff, nor that there isn't anyone who would write a couple of blogposts on how AI has transformed their business for the better. To me it feels like we're at the midpoint of the emperor's new clothes story.
 
  1. Americans who learned to code get replaced by Indians who “learned” how to code and will take slave wages.
  2. Indian coders get replaced by American AI experts that can use AI to do the work a sweatshop full of Indians used to do for the price of the hardware and electricity required to run it.
  3. Cloud-based AI gets so good that Indian AI experts are hired to replace the Americans again.
  4. AI becomes so effective at performing tasks on basic prompts that you no longer need AI experts to manage AI workload and all the Indians again lose their jobs with no replacement since the upper level management of the company can just tell AI what they want it to do.
Vibe coding via ML and Indians will ultimately end up with nearly indistinguishable results - unmaintainable codebases that can't merely be fixed by hiring some jeets or asking the AI to fix them. Right now, retarded execs are in full "line go up" delusion. This isn't even the first time it has happened, back in the 2000s a lot of companies made the retarded decision to outsource their IT to India for a short term boost to the bottom line. They learned the hard way that it doesn't work nearly as well as they hoped, and the survivors of that failed experiment are still paying for those mistakes.
The retarded execs of current year think they've cracked the code this time around - instead of offshoring everything, they'll hire a few jeets to work stateside and be the middleman for the offshore jeets! Of course, that doesn't resolve the underlying issue where the work quality is substandard, but they are hedging on their stateside jeets being loyal and competent enough to somehow wrangle the impending disaster into shape when the time comes. It's not going to happen.
The really retarded execs think ML is going to be a safety net for the looming consequences, but that's going to be the real kiss of death. ML isn't true AI, it's trained on existing datasets. Garbage in, garbage out. Jeet coders already heavily rely on ML as a crutch, and when so many companies are pulling the same shit it's all going to end up getting recycled into the training datasets. Even in cases where the datasets are heavily curated, ML is still going to be hamstrung because it cannot actually produce truly novel results.
There's a lot of dooming about this sort of shit - I get it, we are just in the start of what is looking like lean times - but claiming that it's already over is just outright wrong. This is pure business hype cycle shit, the same people talking this sort of shit up were drooling over how ubiquitous Amazon drone delivery would be in a decade back in 2014. Posts like this are just perpetuating the hype cycles of retarded investors and the people who are fooling them.
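The recycling problem described above, model outputs feeding back into the next round of training data, can be simulated in miniature. This is a toy sketch: the frequency-squared resampling is an invented stand-in for likelihood-seeking generation, not how any real model trains, and all numbers are illustrative.

```python
# Toy illustration of "model collapse": repeatedly retraining a sampler on
# its own most-likely outputs collapses diversity toward one item.
from collections import Counter
import random

random.seed(0)

def retrain(corpus, rounds=5, sample_size=200):
    for _ in range(rounds):
        counts = Counter(corpus)
        items = list(counts)
        # Weight by frequency squared to mimic a generator that favors its
        # own most probable outputs; rare items get starved out.
        weights = [counts[w] ** 2 for w in items]
        corpus = random.choices(items, weights=weights, k=sample_size)
    return Counter(corpus)

start = ["a"] * 60 + ["b"] * 50 + ["c"] * 40 + ["d"] * 30 + ["e"] * 20
final = retrain(start)
print(final.most_common())  # one item ends up dominating the "training set"
```

The starting distribution is fairly flat; after a few rounds of sampling from its own output statistics, most of the original variety is gone. Garbage in, narrower garbage out.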
 
I don't buy that it will suddenly get better, you could put a gun to my head and asks me about any improvements to LLM programming in the last year and I'd just tell you to pull the trigger.
Simple minded fools will see more finely tuned outputs as evidence of better AI and trust it implicitly. Your average youtuber autistically sperging about the age of AI letting them make their own video games and movies greatly underestimates the amount of human labor and expertise that goes into making AI tools spit out anything close to useful. The primary reason chatGPT shits out anything useful is because it's been trained on basically every written piece of material that exists.

My preferred explanation for how machine learning works, in rough generality, is this. Imagine a many-dimensional space with various ideas floating about in it. A human would sift through this space to grab ideas, consider them individually and then in comparison with each other, produce something out of them, and then consider whether it needs something else to be what the human desires. Machine learning instead reduces this many-dimensional space down to something simpler using some sort of statistical model, receives a set of inputs from a user, plugs those inputs into the trained model, and then predicts what the output should be based on the statistics of the known system.

This is why AI can so quickly churn out things that look reasonable at first glance but would never be produced by a conscious actor debating the merits of each element of the output and how it fits together to form a whole. As I stated previously, AI will always try to recall information already encoded in its model instead of looking it up like a conscious agent would. That sort of functionality has to be added as intentional, human-designed quality-check algorithms. These are used to make AIs like chatGPT reject clearly ridiculous outcomes and only return something palatable to the end user. This doesn't mean the underlying machine learning is actually better. It just means that someone's made a better way of detecting whether the AI has hallucinated random nonsense in its output, like telling the user to drink bleach, writing malware, or generating images of black nazis.
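The "predict the most likely outcome" mechanic can be shown with a toy bigram model. Everything here, including the training sentence, is made up for illustration; real LLMs are vastly larger, but the principle of replaying training statistics rather than knowing anything is the same.

```python
# Toy bigram "language model": picks the statistically most likely next word
# given the previous one. The training text is invented for illustration.
from collections import Counter, defaultdict

def train(text):
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # Returns the most frequent continuation seen in training, or None if
    # the word was never seen: the model has no way to "look it up".
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ate the fish")
print(predict(model, "the"))  # "cat": the most common continuation
print(predict(model, "dog"))  # None: never seen, so nothing to recall
```

The model never understands cats or fish; it replays whichever continuation was most frequent in its training data, which is the scaled-down version of the statistics-over-meaning behavior described above.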

Myself, bring on the AI revolution. I look forward to a world where most people think technology is magic and regress to a state of mental infancy. It'll give me job security when there's no one to replace me in 30 years because their brains are mush from letting AI do everything for them. I already consider people who let AI write their emails to be retards. When I detect chatGPT's influence in someone's writing, I can safely anticipate fuckups from them later. I haven't been proven wrong yet.
 
I’m just having a tea break and ironically I’m using an AI augmented system to model out something for a client.
It’s already inflated one number by twenty-odd times, for no reason I can see; it just read ‘3’ in the input data and decided I actually meant 75. I’m not sure why. It’s not something I’m happy using, but I am no longer allowed to use the excel model I had before - despite the fact the excel model had its guts on show and I could track back the provenance of every number and see where it came from and what had been done to it. The new system is very pretty, it makes much prettier pictures than the excel, but it’s effectively a black box, and I can’t see under the hood.
Perhaps I’m just an idiot, or a Luddite, but the combo of that and the weird errors makes me a bit unsettled.
And yes! I am responsible for the numbers it spews forth! Isn’t that nice? I am reminded that I must check the working, but when I ask how I can check the working when I can’t open it up, I get angry replies.
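When you are on the hook for numbers from a black box you cannot open, one pragmatic fallback is an automated plausibility check comparing outputs against the corresponding inputs. This is a hypothetical sketch: the field names and tolerance are invented, since the real system's interface is unknown.

```python
# Hypothetical guardrail for a black-box model: flag any output that differs
# from its corresponding input by more than an allowed factor.
def flag_suspect(inputs, outputs, max_ratio=5.0):
    """Return (key, input, output) triples where the output is implausibly scaled."""
    suspects = []
    for key, raw in inputs.items():
        modeled = outputs.get(key)
        if modeled is None or raw == 0:
            continue  # skip missing outputs and zero inputs
        ratio = modeled / raw
        if ratio > max_ratio or ratio < 1 / max_ratio:
            suspects.append((key, raw, modeled))
    return suspects

inputs = {"units": 3, "price": 10.0}
outputs = {"units": 75, "price": 11.0}  # 3 -> 75 is a 25x jump
print(flag_suspect(inputs, outputs))    # [('units', 3, 75)]
```

It does not explain why the model turned 3 into 75, but it at least catches that class of error before the numbers go to a client.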
 
CEO Dara Khosrowshahi suggested those unhappy with the changes deal with it.
I see nothing wrong with this. Work is a contract. If there is a better proposition out there, it's up to employees to go and seek it rather than for employers to adapt when they don't think they need to.

The beauty of the free market is that it will always end up adjusting. Companies need employees and vice versa.
As he applies for new tech jobs, the 47-year-old has noticed that many companies now want applicants to do so many tasks, a position is essentially “three jobs” in one.
Does this maybe have something to do with a 47yo having to apply for jobs in tech?

With this level of seniority, I think the demand is more about the breadth of the knowledge and capacity to adapt rather than pure number of tasks. It seems weird to me he would phrase it that way.

Although the market is not as good as it once was, it's still not that difficult to get well paid positions in tech if you are good.
Advances in generative AI also play a role. Shopify CEO Tobi Lütke recently told employees that the e-commerce company won’t make new hires unless managers can prove AI isn’t capable of doing the job. Other business leaders are warning their staff to adopt more AI—or else.
This is true, but it also makes sense from an economic standpoint. That's life, and I agree many jobs are going to disappear.

I don't really want to TMI, but I can see this already where AI is actually very very good at replacing people, and I have been solicited a ton with various solutions. On the other hand, it is very useful to people who will retain their job.

For now, they come in support mostly, but I can see how this may divide the need for people by 3 or more in customer service or sales. I have seen AI solutions for customer service, for example, which are very close to eliminating the need for the department altogether.

However, when it comes to tech, I am not sure we can really say it's replacing people already.

To me it kind of feels like complaining that someone made a script to automate what they would have done manually.

This person is the worst mouthpiece ever imo, when you consider how much we've been overpaying for a while. The tech market was bound to keep evolving, and things to normalize. This guy should be the least surprised of them all.
 
With this level of seniority, I think the demand is more about the breadth of the knowledge and capacity to adapt rather than pure number of tasks. It seems weird to me he would phrase it that way.
I’ve seen this - I was offered a job last year and turned it down because it was two jobs in one. That isn’t necessarily a bad thing, it could be good, but I didn’t feel I’d personally be able to do both the way they clearly wanted, and the two roles were not something you’d want one person doing. A dual role can be a great stepping stone, or it can clearly be that the company is cheaping out, so it’s dependent on what it is.
I know I’m doing multiple people’s jobs these days but they are all kind of under the same umbrella and it’s how things are
 
A dual role can be a great stepping stone or it can clearly be that the company is cheaping out, so it’s dependent on what it is.
I think this is the key when it comes to jobs that require you to do more than one job.

I am absolutely not denying that this very much exists, and I am guilty of it as well, but I think it belongs in two different dimensions where this guy does not fit.

For young people with little experience, it really is a stepping stone in multiple ways. When you start out, you often don't even really know what you're best at, and it's often a personally enriching experience to go through all these tasks.

As you grow within the company, you offload the ones you are not as good at, or that can be done by someone else better, but you still have a very good understanding of the inner workings. This will make you a better manager or colleague.

If you have a high level of seniority, like a CTO position for example, then yes, you are expected to do a lot. Too much, all the time. But you are well compensated.

I think this person's problem is that they are discovering that the job market is more than all these huge companies that drove prices up and where work is very segmented. It's a tough return to reality, but it is the truth.

There are plenty of other positions where this guy could get hired in a second and not be required to do anything of this. His ambitions are just misaligned with the reality of the job market. It's not even the fault of AI at all.

Sorry to hear you are being forced to use AI models you are not comfortable with, though. I can imagine the science field is a big machine, and it's probably difficult to bring issues up the ladder once the decision was made. Hopefully you're not the only one having issues and this gets fixed somehow. I am sure it would not be difficult to have the AI break down the logic.
 
Governments will because those retards will keep voting them in power. Here's the problem: corporations are gearing up to take control, and they would sell their fucking mother if it made their quarterly profits better. Do you really think they will put up with all those welfare niggers when they take control?
They won't have a choice, because if they tell the vast majority of the population (who will be out of work in this scenario) to fuck off and die, that population will kill them in response. The only way forward if most of the population is out of work is either welfare or disguised welfare. Anything else results in violent rebellion.
 