Vibecoding general - How to become the 10x engineer you always knew you were despite being absolutely inept in every imaginable manner

have you tried vibe avellooning?
 
Is there any decent tooling for doing this locally without connecting to the Internet?
I know local LLMs aren't as good in general, but I am extremely wary of becoming dependent on something that can be taken away from me.
 
Is there any decent tooling for doing this locally without connecting to the Internet?
I know local LLMs aren't as good in general, but I am extremely wary of becoming dependent on something that can be taken away from me.
You need about 128GB+ of RAM minimum to have a good local coding agent.
 
there's no such thing as vibe coding. karpathy is too much of a coward to admit he made a mistake letting the term catch on, and that in reality it should have been called hybrid programming.
because at first vibecoding literally was vibing with coding: you wanted to make yourself a little homepage or something that didn't require much thought? sit back, vibe, and watch the ai code.
only after that did it start being used by incompetent people to spit in the face of computer science, worse than webdevs did with their 100000GB node_modules folders
It depends on how you direct it, I suppose. There's a lot of market hype over Mythos right now, which is allegedly surfacing zero-days faster than anyone can keep up with.

(i'm writing this reply for the third time now; the first time i accidentally refreshed the page, then i clicked cancel instead of save)
mythos isn't really all that great (kind of like how every single big llm now is supposedly a phd-level mathematician and shit)
in curl, for example, mythos claimed to have "confirmed five security vulnerabilities"; when humans took a look, it turned out that only one of them was a low-severity vulnerability, three others were false positives, and the last one was "just a bug"
 
You need about 128GB+ of RAM minimum to have a good local coding agent.

I've heard qwen3.6:35b-a3b and kimi-2.5 are both pretty decent local models, but I haven't tried either. qwen won't load on my 32GB AMD workstation card via ollama; looks like it needs somewhere around 50GB~64GB?
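As a rough sanity check on those memory numbers: weight size is roughly parameter count times bits per weight. A back-of-envelope sketch (this counts only the weights themselves; KV cache, context length, and runtime overhead add more on top):

```python
# Back-of-envelope estimate of the memory needed just to hold model
# weights; ignores KV cache and runtime overhead, which add more.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Return approximate weight size in GB (decimal)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 35B-parameter model at a few common precisions (illustrative):
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_gb(35, bits):.1f} GB")
# 16-bit: ~70.0 GB
# 8-bit:  ~35.0 GB
# 4-bit:  ~17.5 GB
```

Which is roughly why a 35B model blows past a 32GB card at 16-bit or even 8-bit precision, but a heavily quantized build might squeeze in.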

Assume that isn't a problem. Also, I'm not asking about which specific model weights to use, but about inference software with good IDE integration.

I've used Continue before with JetBrains IDEs (IntelliJ, PyCharm, etc.) and it works decently well. It has a UI similar to the Copilot or Claude plugins and can connect to an ollama server that you serve models from.
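For reference, pointing Continue at a local ollama server is basically one config entry. A minimal sketch of the older JSON-style `~/.continue/config.json` (the model name and title here are placeholders, and newer Continue releases have moved to a YAML config, so check your version's docs):

```json
{
  "models": [
    {
      "title": "Local Qwen (example)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b",
      "apiBase": "http://localhost:11434"
    }
  ]
}
```

ollama listens on port 11434 by default, so the `apiBase` line is only needed if you've moved it.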
 