Vibecoding general - How to become the 10x engineer you always knew you were despite being absolutely inept in every imaginable manner

Not once did copilot or GPT ever produce anything even remotely usable or intelligent for me. Then again, the software I was working on was proprietary and it was a good year ago. Maybe things have magically advanced since. I remain skeptical.

As it stands now, AI seems really good for 1) translating between languages, 2) getting a quick summary of large sections of code, 3) troubleshooting problems on Linux. It seems completely useless for producing actual real software beyond toy projects like OP's.

And let's be real: it will never be able to center a div.
 
Not once did copilot or GPT ever produce anything even remotely usable or intelligent for me. Then again, the software I was working on was proprietary and it was a good year ago. Maybe things have magically advanced since. I remain skeptical.

As it stands now, AI seems really good for 1) translating between languages, 2) getting a quick summary of large sections of code, 3) troubleshooting problems on Linux. It seems completely useless for producing actual real software beyond toy projects like OP's.

And let's be real: it will never be able to center a div.
The agents are designed to produce unreadable code so that you're locked into their platform. They produce black boxes of nonsense unless you give them specific architectural instructions, and even then they will find every excuse to ignore directives and build things in the most convoluted and unmaintainable way possible.

I straight up had to walk both Claude & GPT through how to use a lib/ folder for reusable PHP libraries. I've been hitting the 5hr limits of both platforms just trying to organize a simple lib/ folder. I've been organizing this lib/ folder for two months. The clankers will generate giant complex garbage apps without breaking a sweat, but as soon as you start trying to consolidate reusable functions it goes NOOOOOOOOOOOOO I NEED ALL THE TOKENS JUST TO DO A RENAME. Then it will ignore how you told it to do the rename, and it will eat up twice as many tokens just fixing the mistake.

If I didn't know anything about PHP before this I would be completely lost.
 
I like vibecoding, but it's a fool's errand to have it do architecture for you. It works very well when you have specific, contextless busywork functions to write: it's much faster to write a function stub with a doc comment like 'iterate this set of sets filtering by x and y, collect this attribute into a list, return it sorted by y' and prompt the LLM to finish the function than to type it out by hand, and I can tell at a glance if the output is correct since it's 99% in line with whatever I would've written.
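
A concrete version of that stub-plus-docstring workflow might look like this (Python sketch; the record type, function name, and fields are made up for illustration):

```python
from collections import namedtuple

# Hypothetical record type; real code would use whatever domain object exists.
Item = namedtuple("Item", "name x y")

def collect_sorted_names(groups, x, min_y):
    """Iterate this set of sets, keep items where item.x == x and item.y >= min_y,
    collect each item's name into a list, return it sorted by y."""
    # The stub you write ends at the docstring; everything below is the
    # sort of completion the model is prompted to fill in.
    matches = [item for group in groups for item in group
               if item.x == x and item.y >= min_y]
    return [item.name for item in sorted(matches, key=lambda i: i.y)]

groups = [
    {Item("a", 1, 5), Item("b", 1, 2)},
    {Item("c", 2, 9), Item("d", 1, 7)},
]
```

The point is exactly what the post says: the output is short enough to eyeball against what you would have typed yourself.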

Also just use qwen3-coder-next with good quants if you're on a 12GB gpu. Turnaround for a prompt like that on my 4070 is about 20 seconds.
 
> "Please extract functions x, y & z from class A and put them into class B, update all callers"
> Claude moves class A to the same folder as class B, gives it a goofy name, ignores the actual command.
> Doesn't update callers but throws an extra alias shim into the system, doesn't record where it puts the shim.

I don't understand why this is so complicated for what's supposed to be one of the most powerful models on the market.
 
People spend $100 now and say it's gone in five minutes; in the future that $100 will be $10,000.

I agree with this entirely. There is no way even those $20/month Copilot business plans are covering their own costs. SORA was allegedly losing millions per day when OpenAI shut it down. (Why they didn't try a $500/month price hike first, who knows; a lot of people would have paid it.)

I've got a der8auer WireView on my GPU, so I see the crazy amount of power my GPU draws when I use ComfyUI to make images or video on a regular consumer card. These large Nvidia enterprise racks can render in 10 seconds what takes my machine 5 minutes, for hundreds of users. The sunk cost alone for these machines is $50k to $100k per unit. There's no way they're making that back yet, much less covering the ongoing power costs.

When OpenAI crashes out, you know Anthropic will be able to charge whatever they want. They've already experimented with A/B testing, putting Claude Code in the $100 plan for some people (covered by Louis Rossmann and others).

There's a whole technique people don't understand. Most of you know less than 10% of how to use it the way I do. Even people close to me who I know are smart are sucking at it; when prices go up they'll be fucked. Brain always wins, it's still too efficient.

The Caveman skill in your context can drop output token usage considerably:

https://github.com/JuliusBrussee/caveman

Not once did copilot or GPT ever produce anything even remotely usable or intelligent for me. Then again, the software I was working on was proprietary and it was a good year ago. Maybe things have magically advanced since. I remain skeptical.

It has gotten far better than a few years ago. It can produce decent stuff most of the time, but it still gets things horribly wrong a lot. It can easily spaghettify everything, even with well-planned-out specs and rules in your context.

At my job, I'm constantly leaving 30 to 50 comments on code reviews that are just filled with garbage. I run into things that get past code review all the time, like tests that copy/paste database queries into the test and never actually call the real code. Mocks that mock what is being tested. Horrors.
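
The "mocks that mock what is being tested" failure mode looks roughly like this (hypothetical Python; `fetch_user` stands in for the real code):

```python
from unittest import mock

def fetch_user(db, user_id):
    # Deliberately broken stand-in for the real code under test.
    raise RuntimeError("not implemented")

def test_fetch_user():
    # Anti-pattern: patching the very function being tested, so the
    # assertion exercises the mock and never touches the real code.
    with mock.patch(f"{__name__}.fetch_user", return_value={"id": 1}):
        assert fetch_user(None, 1) == {"id": 1}  # passes no matter what

test_fetch_user()  # green, even though fetch_user always raises
```

The test suite stays green forever because it can only ever observe the mock's return value.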

It's useful for small, well-defined segments, but a good developer always goes back over what's generated. If it's for work, I'm going to make sure it's not shitty, unless it's just tests. If it's a personal project that's not a throwaway, I'm often rewriting over 1/3 of what's generated so it's not shit.

I like vibecoding, but it's a fool's errand to have it do architecture for you. It works very well when you have specific, contextless busywork functions to write

It's been okay for rapid prototyping, to show what can be done on a real dataset before architecting the real thing. It's okay if you need a one-time migration tool that goes from x->y. But for anything you want to use long term, if you're not going back through and cleaning up/refactoring by hand, you're going to have a maintenance nightmare.
 
The Caveman skill in your context can drop output token usage considerably:

https://github.com/JuliusBrussee/caveman
This is hilarious. Caveman Claude: "Bug in auth middleware. Token expiry check use < not <=. Fix:"
 
The agents are designed to produce unreadable code so that you're locked into their platform. They produce black boxes of nonsense unless you give them specific architectural instructions, and even then they will find every excuse to ignore directives and build things in the most convoluted and unmaintainable way possible.
Agents? Reading code!? Locked platform? What are you even talking about?

Just drag files into the chat window in the browser on all the free models and take the most fitting code.

Fix it! Show before and after code. DO NOT HALLUCINATE!

Feels like this thread does not really understand the v i b e in vibe coding.

The Caveman skill in your context can drop output token usage considerably:
Limit reached? Well, I guess the good old VPN switch and burner account #4 will get the job done. Don't waste context on a fucking wall-of-text meme when it could be spent actually reading the code instead.
 
Claude Sonnet went downhill very fast lately, in the last month or so, to the point where I now have to repeatedly scream at it to do x before it does x. Before that, it achieved whatever task I gave it pretty well, whereas now it will straight up hallucinate stuff in the code that just isn't there, or architect solutions to issues I never had.
I don't pay for shit btw. 4 free accounts on rotation.
 
I don't pay for shit btw. 4 free accounts on rotation.
Preach brother. PREACH! AMEN! Make sure the prompts aren't too identical; make it look like you are a "team" working on a project so it doesn't trigger a ban by pattern.


I did pay to try the Opus hype. It hallucinates even more than Sonnet, but can really do some heavy lifting in some circumstances. Don't pay unless you have really maxed out every single model on the market and know how to nudge them well, because Opus can destroy everything if you're not careful.
 
Just drag files into the chat window in the browser on all the free models and take the most fitting code.
Man, I have a codebase that consists of over 400 PHP files. I'm not copy-pasting code files one by one out of my browser into the terminal.

I did pay to try the Opus hype. It hallucinates even more than Sonnet, but can really do some heavy lifting in some circumstances. Don't pay unless you have really maxed out every single model on the market and know how to nudge them well, because Opus can destroy everything if you're not careful.
Can confirm, Opus is garbage. Not worth the extra token consumption at all.
 
Why does all AI suck total balls at CloudFormation and Terraform? You'd think this would be much simpler, easier, and more deterministic, but I've never had it get even basic stuff remotely right.
 
Why does all AI suck total balls at CloudFormation and Terraform? You'd think this would be much simpler, easier, and more deterministic, but I've never had it get even basic stuff remotely right.

Probably because there's not nearly as much open source Terraform training data out there. A lot of the more complex TF is going to be internal/proprietary to companies. It doesn't help that HashiCorp fucked the entire project with their license changes and is now owned by IBM.

Are you using actual Terraform, or OpenTofu? OpenTofu might give you better prompt results. A lot of the industry has moved to it, and they've implemented a lot of basic stuff (like better loops) that HashiCorp refused to allow in for years.
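
For anyone wondering what "basic stuff" means here, a minimal `for_each` sketch, the kind of boilerplate IaC these models routinely fumble (resource and variable names are hypothetical):

```hcl
variable "buckets" {
  type    = set(string)
  default = ["logs", "assets"]
}

# One bucket per entry; for_each keys become stable resource addresses,
# so adding/removing an entry doesn't churn the others.
resource "aws_s3_bucket" "this" {
  for_each = var.buckets
  bucket   = "example-${each.value}"
}
```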
 
Scroll the AI benchmark leaderboards and find one that's open source. One leaderboard I'm looking at says Kimi K2.6 is the best open source model at the moment.
 
I've been using opencode with their default model, which is free. I'm using Python, Go, and TypeScript in the few projects I'm working on.

The code is reasonably good IMO. It does need guiding but you can get it to build a working prototype pretty quickly.

Used it at work as well. It can understand our large AWS codebase without issue.

I have barely written any code in the last month.
 
Using OpenAI Codex with some recent-ish version of chatgippty:

- Make a small filemanager application.
- Complain it only lets me select a single file/folder.
- It did this for safety reasons...
- I did not bother to point out it deletes entire subfolders without confirmation...
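
The missing guard is a few lines; a hedged Python sketch of confirm-before-recursive-delete (not what Codex generated, just the obvious fix):

```python
import shutil
from pathlib import Path

def delete_path(path, confirm):
    """Delete a single file outright, but require explicit confirmation
    before anything that recurses into subfolders."""
    p = Path(path)
    if p.is_dir():
        if not confirm(f"Recursively delete {p} and everything under it?"):
            return False          # refused: nothing touched
        shutil.rmtree(p)          # the call the generated app ran unconditionally
    else:
        p.unlink()
    return True
```

In a real UI, `confirm` would be a dialog callback; the point is that the dangerous branch is gated, not that single-file deletes are.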
 