People spend $100 now and say it got burned through in five minutes; that $100 will be $10,000 in the future.
I agree with this entirely. There is no way even those $20/month Copilot business plans are covering their own costs. Sora was allegedly losing millions per day when OpenAI shut it down. (Why they didn't try a $500/month price hike first, who knows; a lot of people would have paid it.)
I've got a der8auer WireView on my GPU, so I see the crazy amounts of power it draws when I use ComfyUI to make images or video on a regular consumer card. These large Nvidia enterprise racks can render in 10 seconds what takes my machine 5 minutes, and do it for hundreds of users. The sunk cost alone for these machines is $50k–$100k per unit. There's no way they're making that back yet, much less covering the ongoing power costs.
When OpenAI crashes out, you know Anthropic will be able to charge whatever they want. They've already experimented with A/B testing, putting Claude Code in the $100 plan for some people (covered by Louis Romman and others).
There's a whole technique here that people don't understand. Most of you know less than 10% of how to use it the way I do. Even people close to me who I know are smart are bad at it, and when prices go up they'll be fucked. The brain always wins; it's still too efficient.
Using the Caveman skill in your context can drop output token usage considerably.
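For illustration, assuming "Caveman" means a terse-output style instruction (the wording below is made up, not the actual skill file), the idea is that a context rule like this cuts filler and restatement out of every reply:

```
Caveman mode. Short words. No filler. No apologies. No restating
the question. Answer in fragments. Code only when asked.
```

Since billing is per token, stripping the boilerplate out of every response compounds fast over a long session.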
Not once did Copilot or GPT ever produce anything even remotely usable or intelligent for me. Then again, the software I was working on was proprietary, and it was a good year ago. Maybe things have magically advanced since. I remain skeptical.
It has gotten far better than it was a few years ago. It can produce decent stuff most of the time, but it still gets things horribly wrong a lot. It can easily spaghettify everything, even with well-planned specs and rules listed in your context.
At my job, I'm constantly leaving 30–50 comments on code reviews that are filled with garbage. I run into things that get past code review all the time: tests that copy/paste database queries into the test body and never call the real code, mocks that mock the very thing being tested. Horrors.
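A minimal sketch of that "mock what is being tested" anti-pattern (function and names are hypothetical, purely to illustrate why these tests always pass):

```python
from unittest.mock import patch

# Hypothetical function under test.
def get_user_name(db, user_id):
    row = db.query("SELECT name FROM users WHERE id = ?", user_id)
    return row["name"]

# Anti-pattern: the test patches the very function it claims to test,
# so the real query logic never runs and the assertion is tautological.
def test_get_user_name():
    with patch(__name__ + ".get_user_name", return_value="alice"):
        # Passes with db=None because the mock, not the code, answers.
        assert get_user_name(None, 1) == "alice"
```

This test goes green forever, even if the SQL is wrong or the function is deleted in spirit, which is exactly why it sails through review if nobody reads it closely.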
It's useful for small, well-defined segments, but a good developer always goes back over what's generated. If it's for work, I make sure it's not shitty, unless it's just tests. If it's a personal project that's not a throwaway, I'm often rewriting over a third of what's generated so it's not shit.
I like vibecoding, but it's a fool's errand to have it do architecture for you. It works very well when you have specific, context-free busywork functions to write.
It's been okay for rapid prototyping, to show what can be done on a real dataset before architecting the real thing, and it's fine for a one-time migration tool that goes from x->y. But for anything you want to use long term, if you're not going back through and cleaning up/refactoring by hand, you're going to have a maintenance nightmare.