Programming thread

For getting started with nvim I like this video; he's a maintainer for nvim. I've gone back and forth between nvim and vscode for personal reasons and am currently using vscode again. The only thing holding me back from nvim is having to set up a visual debugger and learn to use it, cba, but once you get your own personal config set up it's comfy to use and faster than any other editor.

I get that videos work better for some rather than others but I prefer text, partly because I prefer videos for things like nigger crime and partly because it's much easier to replicate the text in coding examples
I would be remiss if I didn't sing the praises of Doom Emacs. It's an excellent foundation to build your config upon if you come from a vim background.
Don't tempt me with another environment when I have so much else to do, wizard
 
I get that videos work better for some rather than others but I prefer text, partly because I prefer videos for things like nigger crime and partly because it's much easier to replicate the text in coding examples
It depends. I think learning from someone explaining it verbally is a more natural way to learn, and it helps you gain intuition on the material faster. He has a 9-hour video reading the nvim manual, which is the other side of the spectrum. Also, some people lack the attention span for reading unless they have to.
 
I tried many of those "vim but better" setups a couple of years ago (neovim with plugins, preconfigured neovim like nvchad or lazyvim, vis, kakoune, doom emacs). They all have one thing in common: they're slow, buggy, or unusable in some other way.

What I did, and what I recommend to everyone who just wants a good and functional text editor, goes as follows.
Start with plain vim and learn the idiomatic way of using it. It's a unix-y editor with great general text editing abilities, often replacing the need for an LSP (look at gq & formatprg for formatting, ctags and CTRL-], gd, :make, etc.). If you still aren't happy, there's often a single-line fix for your .vimrc.
I exclusively use vim for C and scripting, but for navigating large codebases I'll whip out a good ol' IDE or vscode; you can't really use vim/neovim/emacs to replicate an IDE.
 
you can't really use vim/neovim/emacs to replicate an IDE
I can, and I DO. :punished:
I've only had two instances where I gave up. The first was on a massive C++ codebase where clangd was not able to properly index everything, but that might have changed since.
The second is every time I have to work with Java. Like, seriously, fuck Java. There is always so much fucking boilerplate in anything that deals with Java that I just gave up reading through the nvim-jdtls docs and used eclipse.

Though I agree, starting with bare (neo)vim and then adding functionality iteratively is best. Some might call it agile.
And writing your config from scratch is a fun learning experience anyway.
 
Vs Code works just fine tbh.

Edit so I don't make a second post. By the way, I just got my c# certification from microsoft by taking an exam.

Freelance work is ideal, but it's hard to get consistent income that way because I'm still a novice.


Anyone who has a full-time coding job: what's something you would look for in a portfolio? I have a game in the works, but obviously I should also make some stuff that has more practical applications. Like something specific that, if I'm able to do it, shows any potential employers and clients that I do in fact know how to write code (C# and JavaScript) and that I didn't just bullshit my way through some online quiz.
 
Anyone who has a full-time coding job: what's something you would look for in a portfolio? I have a game in the works, but obviously I should also make some stuff that has more practical applications. Like something specific that, if I'm able to do it, shows any potential employers and clients that I do in fact know how to write code (C# and JavaScript) and that I didn't just bullshit my way through some online quiz.
I only ever hired one guy, but what I'd look for in a portfolio is a sign it wasn't bullshitted through. There are many programming courses that promise a "working project that you can show potential employers" at the end.
 
I only ever hired one guy, but what I'd look for in a portfolio is a sign it wasn't bullshitted through. There are many programming courses that promise a "working project that you can show potential employers" at the end.
So it would be good to have it up on github so people can look at each stage of development? This way they can clearly see that I wasn't bullshitting because you can look at every stage of development.
 
So it would be good to have it up on github so people can look at each stage of development? This way they can clearly see that I wasn't bullshitting because you can look at every stage of development.
Yes, but what I would be particularly looking for is new features and upgrades, not parts. Real projects go from hello world to mvp to nice and polished. Fake projects go by topic, backend -> frontend -> kubernetes, full steam ahead final destination, with no trace of decisions made. It's hard to describe but MOOC projects have a smell, I did a lot of MOOCs and worked as an assessor for a while and can detect it, and I assume people who do a lot of recruiting can detect it, too. You don't need to deliberately do something to avoid the smell, your project won't have it, just push it and make sure to not have nigger variables or diagnostic penis prints.
 
Fake projects go by topic, backend -> frontend -> kubernetes, full steam ahead final destination, with no trace of decisions made
You just described every SaaS vendor I've ever had to deal with. Even better when instead of bundling their application to deploy to kubernetes, via helm or whatever, they include an entire kubernetes stack.
 
Once I am a rock solid expert programmer I might start making tutorial videos and talking like I'm a cave man to prove that people make shit way more complicated than it really is.
reject buzz word, become grug
Also it is nice to see another fan of Muratori; one of my recent favorites of his is here, where he briefly goes over his allocation method:
I've been fucking around writing a game engine and decided to try and get over my fear of memory management in C by writing an ECS 'system' or allocator (allocator wrapper?) as described by Andrew Kelley's video. It took me a half-hour. 30 minutes later and now I feel like everyone who has ever called memory management difficult (including myself, thirty minutes prior) is a complete retard. Memory management isn't difficult, ownership is difficult, and you have to deal with ownership when programming anything anyways.

I am probably not going to end up using C for my game - it's the little things you miss - but it was a nice learning experience.
Is it any good as a first language to learn?
The best first language to learn is one that helps you to understand the concepts of programming. For me that was Lua, but I've tried almost every language and every one of them is lacking in some way. Generally speaking you'll end up learning most or all of the popular ones for different applications, so really the best first programming language to learn is one that you understand well enough to grasp the concepts, and one that's applicable enough to what you want to do (game dev? C# for Unity, C++ for Unreal; web dev? Typescript, and eventually JavaScript) to do well.

Regarding the argument of whether you should program "in a box" or peel back the layers to see how the sausage is made, I ultimately don't think it matters. All choices about programming are ultimately choices about what bullshit you want to deal with and what you don't. The bullshit you care about you hand code, and what you don't care about you use a library for. This is true no matter what language or library or anything, because the fact is unless you spend a great deal of time learning about the specializations you don't know, someone else has probably done it better in a library. And if it happens to be that you do know about that particular field, you're probably better off writing your own anyways. This is not just true about programming, but of human society in general. What you can do, you should, and what you can't do, you can always pay someone else. (Or better yet, steal off of Github for free! Thanks for the free shit, FOSS-tards!)
Nowadays there's the ESP32 and the Pi Pico. These microcontrollers both have excellent "batteries included" Forth implementations, but the ESP32 one is implemented in C on top of the RTOS, so directly poking registers is no bueno (it will generate an access violation). It's still pretty fast. The Pico implementations are bare metal though.

It's really hard to overstate how powerful these dual-core (!) microcontrollers are. People implemented x86 DOS emulation on them, complete with PS/2 input and VGA output. The ESP32 one can run Windows 3.1 IIRC. The Pico's PIO can bruteforce a 640x480 DVI signal. Compared to 80s and 90s computers, they are very powerful. The only real limitation is their limited RAM. You could totally build usable general purpose computers with these $5-$10 MCUs. There's also a pretty good Lisp and even python for them.

But yes, they are complex enough that it's hard for somebody inexperienced to completely grasp them on a hardware level. Also you have to realize that back in the day, whatever you could do on home computers was state of the art. As impressive as these MCUs are, they pale next to what you can do on a modern computer with e.g. the unreal engine. It'd be hard to draw people into this. I guess these times are just gone.

I've been looking for a cheap micro for a while now just to hack with, but prices always seem ridiculous. You've sold me on the Pi Pico.
In the latter case, maybe someone could school me, but I'm not really convinced that prototypes are vastly better than classes.
"Do you know what people generally do with all of the power and flexibility of prototypes?  . . . They use them to reinvent classes." - Robert Nystrom, Crafting Interpreters

Prototypes (and Classes, and Object Orientation, for that matter) are a bad meme.
What's the best way to dive headfirst into learning cooooooooooding?

(I want to learn all the common C languages)
Just learn C. Or if you're a chicken, learn Go, which is C for chickens. (After reading your later posts: if you're looking to code games, just learn C# and code in Unity or Godot and suffer with @${Sandy}.)
So instead of using rsplit(), why not have something like split(from_start=False)? Something like that.
Disregard the backwards-compatibility argument. Also, I understand the answer, but I don't see the big deal with having it work as in the hypothetical. Is there some other reason, or just this?
Generally, you should never have a function with a binary flag. Split it into two separate functions, and have the caller determine which function to call. I forget the exact reasons why, or where I learned it, but generally it's separation-of-concerns: if a function does one thing based on a switch, and another thing if that switch is off, those are two different functions, even if they do something similar under the hood.

More ideally, you can extract out the shared part of the function into a separate private function internal to that module (whether your module is an object or namespace or etc.), and have the two split functions call that private function. This way the shared part is more maintainable, since you don't repeat yourself. This also helps with legibility, since you know what a function call is going to do without having to think about the status of its flags. Moving the branch to the outer scope/caller makes it more clear that there's a branch there, and that things will work differently.
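
To make that concrete, here's a rough sketch of the refactor in Python. The function names and the helper are made up for illustration, not any real library's API:

```python
# Toy "split a string once" module: instead of one function with a boolean
# flag, expose two entry points that share a private helper. All names here
# are invented for illustration.

def _split_once(text: str, sep: str, index: int) -> tuple[str, str]:
    """Shared internals: split `text` around the separator found at `index`."""
    return text[:index], text[index + len(sep):]

def split_first(text: str, sep: str) -> tuple[str, str]:
    """Split around the first occurrence of `sep` (like str.split with maxsplit=1)."""
    return _split_once(text, sep, text.index(sep))

def split_last(text: str, sep: str) -> tuple[str, str]:
    """Split around the last occurrence of `sep` (like str.rsplit with maxsplit=1)."""
    return _split_once(text, sep, text.rindex(sep))

# The caller picks the behaviour it wants, so there's no hidden branch:
print(split_first("a.b.c", "."))  # ('a', 'b.c')
print(split_last("a.b.c", "."))   # ('a.b', 'c')
```

Each call site now says exactly what it does, and the shared logic lives in one place.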
This makes me want to go and fully learn a functional language. Any suggestions? From what I have read, people seem to like scheme and ocaml a lot.
I enjoyed Elixir in my brief time learning it, but I don't know that it's fully functional. My understanding from the Lisp propaganda was that Lisp was the be all end all of functional programming... Haskell, maybe?

Finally made it to the bottom of the thread.

I'm considering writing my own programming language (who hasn't), but I'm stuck on what I should target with the back-end. Transpilation seems to be increasingly popular, and I could write emitters for C and JavaScript and that would cover like 99% of real-world use cases, but at the same time a virtual machine or even real compilation w/ LLVM is equally compelling. After overcoming my fear of memory, I like raw memory access more and more, especially after learning about Data-Oriented Design and the advantages that it brings, but the pain is that everything that touches the system has to at some point touch C, even if just for a compatibility layer.

I've also been at pains wondering whether I should just write the lexer myself, or try and fuck with a compiler-compiler, for which I'm truly at a loss.
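
For what it's worth, a hand-rolled lexer for a toy language is only a screenful of code. Here's a rough Python sketch with an invented token set, just to show the shape of it:

```python
import re

# Toy hand-rolled lexer: one regex alternation of named groups, matched
# repeatedly from the current position. Token set is invented for illustration.
TOKEN_SPEC = [
    ("NUMBER",  r"\d+(?:\.\d+)?"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("OP",      r"[+\-*/=()]"),
    ("SKIP",    r"[ \t]+"),
    ("NEWLINE", r"\n"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex(source: str):
    pos = 0
    while pos < len(source):
        match = MASTER.match(source, pos)
        if not match:
            raise SyntaxError(f"unexpected character {source[pos]!r} at {pos}")
        pos = match.end()
        if match.lastgroup not in ("SKIP", "NEWLINE"):
            yield (match.lastgroup, match.group())

print(list(lex("x = 3.14 * (y + 2)")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '3.14'), ('OP', '*'), ...]
```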

Maybe I'll write it as a compiler for the Pico, lol.
 
Generally, you should never have a function with a binary flag. Split it into two separate functions, and have the caller determine which function to call. I forget the exact reasons why, or where I learned it, but generally it's separation-of-concerns: if a function does one thing based on a switch, and another thing if that switch is off, those are two different functions, even if they do something similar under the hood.
That's one of the good pieces of advice from Clean Code iirc.
 
Why all the hate for OOP as a general concept that I see?
I generally believe that when people say they hate OOP, it usually means they hate Java and/or C++.

Also, because OOP relies heavily on abstractions/indirections and makes objects responsible for how they are represented in memory, it doesn't really play well in a modern reality where CPU IPC and clock speeds have run far ahead of RAM speeds.
 
I generally believe that when people say they hate OOP, it usually means they hate Java and/or C++.

Also, because OOP relies heavily on abstractions/indirections and makes objects responsible for how they are represented in memory, it doesn't really play well in a modern reality where CPU IPC and clock speeds have run far ahead of RAM speeds.
Basically I'm of the opinion that all of the major paradigms are only bad when taken too far or in the wrong direction. I like having the multi-paradigm approach you get in languages like Python and OCaml.
 
@Private Tag Reporter

Why all the hate for OOP as a general concept that I see?
OOP is very much a programmer's solution to a programmer's problem: it over-engineers a solution to a problem. In addition, like most things in programming, its proliferation is very much a historical quirk (thanks, Java!).

C has a pretty central problem in that it's not encapsulated. Any code anywhere can allocate memory, and if it's particularly rude, it can allocate it and not clean up, or cause undefined behaviour, or any manner of fucked up shit. On top of that, there are no namespaces, so if you happen to name something with a common name, and somewhere else something shares that name, uhhh go fuck yourself I guess? So C++ fixes most of that shit, but still compiles down to machine code and has to deal with headers and a whole bunch of legacy shit.

A whole bunch of legacy shit is why modern programming is so pozzed forever, since everything's got to be backwards compatible or else adoption is a major risk.

Java's major contributions were going headerless and having a virtual machine instead of compiling to machine code. The virtual machine thing is what made it super popular for business use, since it fulfilled the promise of "write once, run anywhere" except no for real this time guys really! and getting away from headers is just a general blessing we should all be thankful for, but is more of a natural progression of programming languages that probably would have evolved convergently somewhere else if it weren't in Java. These factors made it extremely popular with businesses and created the conditions for today's pajeet-death-spiral in coding, since businesses want cookie cutter code and cookie cutter programmers, which is what OOP delivers on.

Programming is essentially an ordered description of mutations on state. OOP tried to encapsulate all of the possible mutations of a state into an object. This is fine and dandy for well defined, concrete objects. Vectors, for example, are perfectly fine as an object, since they are well defined mathematical constructs. There is a fixed number of operations you can do on a vector.

In reality, however, the various operations you might want to perform can and will change rapidly with specifications, and OOP has to accommodate that. Then comes the question of which object/class should own which functions/state, and you're back at the ownership problem again. Except now it's mired in a ton of abstraction over boilerplate code and design patterns and weird verb-noun relations. Sometimes it's fine for data to be data, and functions to be functions. (This doesn't even get into the anti-pattern of inheritance, where composition is just generally better.) In addition, OOP never really lived up to modelling real-world problems, as is so often used to teach and describe it (mostly an educational problem). A Cat inherits an Animal which inherits a Creature which... etc. But does a cat meow(), or does it have a DoHaveMakeSound.SoundFactory(new Sound("Meow"))?
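
To caricature the difference in Python (class and field names invented, obviously): the inheritance tower versus just letting data be data.

```python
from dataclasses import dataclass

# The deep-hierarchy version: Creature -> Animal -> Cat, where the only point
# of the whole tower is that make_sound() ends up overridden somewhere.
class Creature:
    def make_sound(self) -> str:
        raise NotImplementedError

class Animal(Creature):
    pass

class Cat(Animal):
    def make_sound(self) -> str:
        return "Meow"

# The composition version: a cat is just data, and its sound is just data too.
@dataclass
class Pet:
    name: str
    sound: str

def make_sound(pet: Pet) -> str:
    return pet.sound

print(Cat().make_sound())              # Meow
print(make_sound(Pet("Tom", "Meow")))  # Meow
```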

However, it's still nice generally to group similar or related functions logically in some way, so OOP kind of awkwardly persisted until just recently, almost in spite of itself. In addition, the technological advancements that put OOP at a disadvantage (CPU cache outspeeding RAM) are relatively new developments and took time for people to recognize and adapt to. Since OOP tends to shit out allocations wherever the fuck in non-contiguous memory, it will generally be slower when not properly optimized. This is usually not a problem, but it seems like developers as a group are learning to respect the cache and take hardware seriously.

Nowadays, from a language design standpoint, there's namespaces/packages/modules/whatever-you-wanna-call-it which allows for a logical grouping of functions without necessarily being object-bound. There are generics, interfaces, concepts etc. which generally allow you to work with similarly shaped data regardless of concrete typing. There's Zig and Go's 'structs-with-methods' (definitely not objects we swear), which are just as good as objects without dealing with the overhead and boilerplate code. (I am personally a huge fan of Go's implicit interfaces). Data-Oriented Design is picking up steam as the 'correct' approach, at least for systems oriented code, which means we're back to C (or Zig or Rust whenever those are stable...).
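
If you want the Python flavour of that "implicit interface" idea, typing.Protocol is roughly the closest analog I know of: a type satisfies the protocol just by having the right methods, with no declaration anywhere (example names are made up):

```python
from typing import Protocol

class Renderer(Protocol):
    def render(self) -> str: ...

class Circle:  # never mentions Renderer anywhere
    def render(self) -> str:
        return "circle"

def draw(shape: Renderer) -> None:
    print(shape.render())

draw(Circle())  # type-checks structurally under mypy/pyright, runs fine
```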

Multi-paradigm is the correct approach. Functional programming probably gets a bad rep because quote unquote "functional languages" look retarded, so most people take a look at an S-expression and go 'what the fuck is that', but higher order functions like applying a function to a list is honestly so elegant it's hard not to want it in every programming language. If we go back to my original analogy of programming being an ordered mutation of state, pure functions are generally safer, easier to write and maintain, because they don't do shit. No hidden allocations, no bullshit. Just input and output. Unfortunately, when people make a "pure functional" programming language, you can't do shit with it, because something like, I don't know, a web server or a main loop in a video game is an intentionally produced side-effect. A purely functional environment where everything you can do is through message passing exists, but is weirdly alien. We're programmers for God's sake, not mathematicians, I don't want to do calculus! The worst part of it is when you start storing state in functions with closures. Congratulations asshole, you've invented the object!
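
A quick Python sketch of both halves of that, higher-order functions and the closure-that-is-secretly-an-object (names invented):

```python
# Higher-order functions: apply a function across a list without spelling
# out the loop.
prices = [100, 250, 40]
with_tax = list(map(lambda p: p * 1.2, prices))
cheap = [p for p in prices if p < 200]

# And the punchline: store state in a closure and you've quietly
# reinvented the object.
def make_counter():
    count = 0
    def increment() -> int:
        nonlocal count
        count += 1
        return count
    return increment

counter = make_counter()     # behaves like an instance with one method
print(counter(), counter())  # 1 2
```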

The solution is actually taking the best features from each paradigm and applying them correctly to the task at hand. In that sense there are no real paradigms; to get back to my original thesis, there's just state and just behaviour. Manage them how you will. At the end of the day most of this is theory/design, and what's more important is just meeting requirements.

My personal preference, in order, is:
Declarative (Pure State, Implies Behaviour)
Functional (Pure Behaviour)
Procedural/Imperative (Mutations)
Object-Oriented (For Well Defined Objects/Domains, prefer Composition > Inheritance)

And actually, before any of that I prefer to work out the problem with pen and paper. Good old handwritten logic.

Digressing, the ownership problem is probably the real "hard problem" of computer science. It's such a clumsy problem that they've invented a whole language (Rust) around kludging the issue instead of accepting the superior answer of garbage collection. That's mostly a joke at Rust's expense.

EDIT: Just read Squishie's post. I neglected to mention that's another huge issue with OOP, when inheritance gets that deep you may as well be Columbus charting America trying to find any meaningful behaviour or implementation or what the fuck anything actually is.

FURTHER EDIT: In researching for this effortpost, I found out that OCaml is apparently good for writing interpreters, which if that's the case, I very well might try my hand at it for my toy language. Will report back.
 