I would be remiss if I didn't sing the praises of Doom Emacs. It's an excellent foundation to build your config upon if you come from a vim background.
I get that videos work better for some people than others, but I prefer text, partly because I save video for things like true crime and partly because it's much easier to copy text into coding examples.

For getting started with nvim I like this video; he's a maintainer for nvim. I've gone back and forth between nvim and vscode for personal reasons, and I'm currently using vscode again. The only thing holding me back from nvim is having to set up a visual debugger and learn to use it (can't be bothered), but once you have your own personal config set up it's comfy to use and faster than any other editor.
> I would be remiss if I didn't sing the praises of Doom Emacs. It's an excellent foundation to build your config upon if you come from a vim background.

Don't tempt me with another environment when I have so much else to do, wizard.
> I get that videos work better for some people than others, but I prefer text, partly because I save video for things like true crime and partly because it's much easier to copy text into coding examples.

It depends. I think learning from someone explaining it verbally is a more natural way to learn, and it helps you gain intuition on the material faster. He has a 9-hour video reading the nvim manual, which is the other side of the spectrum. Also, some people lack the attention span for reading unless they have to.
> you can't really use vim/neovim/emacs to replicate an IDE

I can, and I DO.
> Anyone who has a full-time coding job: what's something that you would look for on a portfolio? I have a game in the works, but obviously I should also make some stuff that has more practical applications. Like, if I make something specific, being able to do it shows any potential employers and clients that I do in fact know how to write code (C# and JavaScript) and didn't just bullshit my way through some online quiz.

I only ever hired one guy, but what I'd look for in a portfolio is a sign it wasn't bullshitted through. There are many programming courses that promise a "working project that you can show potential employers" at the end.
> I only ever hired one guy, but what I'd look for in a portfolio is a sign it wasn't bullshitted through. There are many programming courses that promise a "working project that you can show potential employers" at the end.

So it would be good to have it up on GitHub so people can look at each stage of development? That way they can clearly see I wasn't bullshitting.
> So it would be good to have it up on GitHub so people can look at each stage of development?

Yes, but what I would be particularly looking for is new features and upgrades, not parts. Real projects go from hello world to MVP to nice and polished. Fake projects go by topic: backend -> frontend -> kubernetes, full steam ahead to the final destination, with no trace of decisions made. It's hard to describe, but MOOC projects have a smell. I did a lot of MOOCs and worked as an assessor for a while and can detect it, and I assume people who do a lot of recruiting can detect it too. You don't need to deliberately do anything to avoid the smell; your project won't have it. Just push it, and make sure not to leave offensive variable names or crude debug prints in it.
> and make sure not to leave offensive variable names or crude debug prints in it.

Pssh. What do you know.
> Fake projects go by topic: backend -> frontend -> kubernetes, full steam ahead to the final destination, with no trace of decisions made.

You just described every SaaS vendor I've ever had to deal with. Even better when, instead of bundling their application to deploy to kubernetes via helm or whatever, they include an entire kubernetes stack.
> Once I am a rock-solid expert programmer I might start making tutorial videos and talking like I'm a caveman to prove that people make shit way more complicated than it really is.

reject buzzword, become grug
> Also it is nice to see another fan of Muratori; one of my recent favorites of his is here, where he briefly goes over his allocation method:

I've been fucking around writing a game engine and decided to try to get over my fear of memory management in C by writing an ECS 'system' or allocator (allocator wrapper?) as described in Andrew Kelley's video. It took me half an hour, and now I feel like everyone who has ever called memory management difficult (including myself, thirty minutes prior) is a complete retard. Memory management isn't difficult; ownership is difficult, and you have to deal with ownership when programming anything anyway.
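The specific allocator from the video isn't reproduced here, but the general idea of handle-based ECS storage can be sketched. Everything below (the `SlotAllocator` name, the `(index, generation)` handle shape) is my own illustration, not taken from the talk:

```python
class SlotAllocator:
    """Hands out (index, generation) handles; freed slots are reused.

    The generation counter is what makes ownership tractable: a handle
    to a freed-and-reused slot is detected as stale instead of silently
    pointing at someone else's data.
    """

    def __init__(self):
        self.items = []        # payload stored per slot
        self.generations = []  # bumped every time a slot is freed
        self.free = []         # slot indices available for reuse

    def alloc(self, value):
        if self.free:
            i = self.free.pop()        # reuse a freed slot
            self.items[i] = value
        else:
            i = len(self.items)        # grow storage
            self.items.append(value)
            self.generations.append(0)
        return (i, self.generations[i])

    def get(self, handle):
        i, gen = handle
        if gen != self.generations[i]:
            return None                # stale handle: slot was freed/reused
        return self.items[i]

    def free_slot(self, handle):
        i, gen = handle
        if gen == self.generations[i]:
            self.generations[i] += 1   # invalidate all outstanding handles
            self.items[i] = None
            self.free.append(i)
```

The nice property is exactly the ownership point above: whoever holds a stale handle finds out immediately, rather than corrupting whatever entity got the slot next.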
> Is it any good as a first language to learn?

The best first language to learn is one that helps you understand the concepts of programming. For me that was Lua, but I've tried almost every language and every one of them is lacking in some way. Generally speaking you'll end up learning most or all of the popular ones for different applications, so really the best first programming language is one you understand well enough to grasp the concepts, and one that's applicable enough to what you want to do (game dev? C# for Unity, C++ for Unreal; web dev? TypeScript, and eventually JavaScript).
Nowadays there's the ESP32 and the Pi Pico. These microcontrollers both have excellent "batteries included" Forth implementations, but the ESP32 one is implemented in C on top of the RTOS, so directly poking registers is no bueno (it will generate an access violation). It's still pretty fast. The Pico implementations are bare metal, though.

It's really hard to overstate how powerful these dual-core (!) microcontrollers are. People have implemented x86 DOS emulation on them, complete with PS/2 input and VGA output; the ESP32 one can run Windows 3.1, IIRC. The Pico's PIO can brute-force a 640x480 DVI signal. Compared to 80s and 90s computers they are very powerful; the only real limitation is their limited RAM. You could totally build usable general-purpose computers with these $5-$10 MCUs. There's also a pretty good Lisp and even a Python for them.

But yes, they are complex enough that it's hard for somebody inexperienced to completely grasp them on a hardware level. Also, you have to realize that back in the day, whatever you could do on home computers was state of the art. As impressive as these MCUs are, they pale next to what you can do on a modern computer with e.g. Unreal Engine. It'd be hard to draw people in with this. I guess those times are just gone.
"Do you know what people generally do with all of the power and flexibility of prototypes? . . . They use them to reinvent classes." - Robert Nystrom, Crafting InterpretersIn the latter case, maybe someone could school me, but I'm not really convinced that prototypes are vastly better than classes.
> What's the best way to dive headfirst into learning cooooooooooding? (I want to learn all the common C languages)

Just learn C. Or, if you're a chicken, learn Go, which is C for chickens. (After reading your later posts: if you're looking to code games, just learn C# and code in Unity or Godot, and suffer with @Sandy.)
> So instead of using rsplit(), why not have something like split(from_start=False)? Something like that.

Generally, you should never have a function with a boolean flag. Split it into two separate functions and have the caller determine which one to call. I forget the exact reasons why, or where I learned it, but generally it's separation of concerns: if a function does one thing when a switch is on and another thing when it's off, those are two different functions, even if they do something similar under the hood.
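A minimal sketch of the advice, using hypothetical helpers built on Python's real split()/rsplit():

```python
# Flag version: one function, two behaviors, and every call site
# forces the reader to decode what the boolean means.
def first_token(s, from_end=False):
    if from_end:
        return s.rsplit(" ", 1)[-1]
    return s.split(" ", 1)[0]

# Split version: the call site itself says which behavior it wants.
def first_word(s):
    return s.split(" ", 1)[0]

def last_word(s):
    return s.rsplit(" ", 1)[-1]

print(first_token("a b c", from_end=True))  # what does True mean here?
print(last_word("a b c"))                   # self-documenting
```

This mirrors the stdlib's own choice: `split` and `rsplit` exist as two functions rather than one `split(from_start=...)`.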
Disregard the backwards-compatibility argument. Also, I understand the answer, but I don't see the big deal with having it as in the hypothetical. Is there another reason, or just this?
> This makes me want to go and fully learn a functional language. Any suggestions? From what I have read, people seem to like Scheme and OCaml a lot.

I enjoyed Elixir in my brief time learning it, but I don't know that it's fully functional. My understanding from the Lisp propaganda was that Lisp was the be-all and end-all of functional programming... Haskell, maybe?
> Generally, you should never have a function with a boolean flag. Split it into two separate functions and have the caller determine which one to call.

That's one of the good pieces of advice from Clean Code, IIRC.
> Why all the hate for OOP as a general concept that I see?

I generally believe that when people say they hate OOP, they usually mean they hate Java and/or C++.
> I generally believe that when people say they hate OOP, they usually mean they hate Java and/or C++.

Basically I'm of the opinion that all of the major paradigms are only bad when taken too far or in the wrong direction. I like the multi-paradigm approach you get in languages like Python and OCaml.
Also, because OOP relies heavily on abstractions and indirection, and makes each object responsible for how it is represented in memory, it doesn't play well with modern hardware reality, where CPU IPC and clock speeds have run far ahead of RAM speeds.
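The layout point can be sketched as array-of-structs (the OOP default, each object owning its fields) versus struct-of-arrays (the data-oriented alternative). Python lists won't show the cache effect, so this only illustrates the layout idea; all names here are my own:

```python
from array import array

# AoS: each particle is an object. A pass over just the x field still
# has to chase a pointer to every object, touching unrelated fields.
class Particle:
    def __init__(self, x, y):
        self.x = x
        self.y = y

aos = [Particle(float(i), 0.0) for i in range(4)]
aos_sum = sum(p.x for p in aos)

# SoA: one contiguous array per field. A pass over x streams through
# one dense array, which is what cache-friendly code wants.
xs = array("d", (float(i) for i in range(4)))
ys = array("d", (0.0 for _ in range(4)))
soa_sum = sum(xs)

assert aos_sum == soa_sum == 6.0  # same data, different memory layout
```

In a language like C or C# this difference shows up directly as cache misses per iteration; here it just shows the two shapes.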
> That's one of the good pieces of advice from Clean Code, IIRC.

It also adheres to the Unix principle "Do one thing and do it well." Modularity is key to managing scale.
OOP is very much a programmer's solution to a programmer's problem: it over-engineers a solution to a problem. In addition, like most things in programming, its proliferation is very much a historical quirk (thanks, Java!).