Programming thread

Does anyone else here hate Go's strictness on unused variables and imports, and the forced code style?
I love the language itself and the package ecosystem; I think it's one of the most productive and readable languages. But I hate the compiler. Every time I quickly want to change or comment out code to debug something, some variables or imports become unused and it will refuse to compile. Which means I have to either 1. needlessly change other parts around or 2. forget about it.
It feels user-hostile; they think the only way to get people to write good code is to have bad code literally not compile (this is obviously not the case, you can still write buggy software in Go).
Agreed - Go is great in terms of features and power (error system is weird but I get it), but the tooling around it is annoying. I'm testing $X, I don't care about correctness!
 
One thing I never see talked about online when it comes to dependencies is their cost. You often hear about the cost of "reinventing the wheel" and the apparent dangers thereof, but never anything about the cost of dependencies. I'll illustrate with an example: you want to parse XML, so you import an XML parser library that has 10 other dependencies, those dependencies have dependencies, etc. This is potentially a super-linear curve that grows with the number of dependencies (e.g. any fucking Rust library). That's just the cost in compile time; the worst part is the implementation code using the library.

This is because while there are disadvantages to using third party dependencies, just as there are trade offs for every technical decision you make, the benefits of not dedicating a good portion of your engineering power to rebuilding a shittier version of something that already exists far outweigh the negatives. "Reinventing the wheel" is often discussed because it is a very common pitfall inexperienced engineering leaders fall into that engineers have to deal with years down the line. We discuss it because many of us joined a company to find that the codebase uses some shitty in-house ORM, written by someone who left the company years ago, that doesn't work half the time and doesn't support any newer features of the DB; but we can't migrate away from it because it's too ingrained in everything and we don't have the engineering power to rewrite that much of the codebase.
Like you said, increases in compilation times are insignificant and often more than offset by the tests and other pipeline actions you would need to support your own library. Security issues are mostly mitigated by keeping your dependencies up to date. Also it is a guarantee you will run into the same issues and create many of the same security vulnerabilities that the mature library already encountered and dealt with.

Any sufficiently advanced library meant to handle a real workload is bound to handle every reasonable edge case its users might encounter, so the complexity of a library for general-purpose use will be at least an order of magnitude greater than hand-written code that serves a single purpose for a single user.
Why would you not want a robust library that handles more edge cases than you would be able to think of? Your point is made much worse by using XML as an example. Implementing a subset of XML is asking for issues down the line. Either use a file format natively supported by your language or, if you have to, make your own format. Don't use an existing format where your users are going to have issues inputting valid XML files.
Complexity in the solution happens because the problem you are trying to solve is not as simple as you think it is.

but what you got was 100,000+ lines of code you now depend on for parsing XML and a complex API boundary written by some do-it-for-free GitHub janny
Every big and popular open source project has a dedicated team maintaining it, companies sponsoring the development, and a robust test suite much more thorough than those of most of the companies using it. Even if the entire development team quit overnight, some company would take over development because it's critical to their products.

So what did you really gain from using the dependency? Nothing, in fact it's added negative value.
You solve a problem cleanly and easily vs wasting engineer power making something that sucks.
 
You solve a problem cleanly and easily vs wasting engineer power making something that sucks.
I encounter this shit all the time - people wanting to 'just spin up a vm' to install some random connector product that runs a business process because it's cheaper than paying for the PaaS alternative.

It's not like maintenance of that VM ends after you spin it up, and inevitably 6 months after everyone has left the company a ticket is raised along the lines of "my random BI dashboard composed of custom chewing gum and tape didn't update and I have a meeting with the board starting in 95 seconds".

@779777 for your theory reading I suggest you research TCO (total cost of ownership). Everything you do as an engineer is a trade-off, but where I can, I try to write only the code that only I can write. It may excite you to develop that XML parser, but someone will be cursing your name one day because you didn't harden it against the billion laughs attack or something like that.
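For reference, the billion laughs attack nests entity definitions so each level expands to many copies of the level below, and the blow-up is easy to compute. A minimal Python sketch of the expansion arithmetic (the numbers are the classic 10-level payload; nothing here actually parses XML):

```python
# Classic "billion laughs" DTD shape: 10 levels of entities, each
# expanding to 10 copies of the previous one.
levels = 10
fanout = 10
innermost = "lol"

# A naive parser that expands internal entities eagerly materializes
# fanout**(levels-1) copies of the innermost string.
copies = fanout ** (levels - 1)
expanded_bytes = copies * len(innermost)

print(copies)          # 1000000000  (a billion "lol"s)
print(expanded_bytes)  # 3000000000  (~3 GB from a ~1 KB document)
```

A hardened parser caps entity expansion depth and total output size instead of expanding blindly.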
 
I have a short degree in programming and have been working on the backend and APIs of websites for the past five years. Every once in a while I start feeling inadequate (either due to some code challenge or job requirements), and right now is such a time.
I wanna brush up on some concepts and learn others (like list comprehension, lambdas) and also read up on design patterns. What resources (books, tutorials, videos, etc) would you recommend for that sort of thing?
In the case of lambdas and list comprehension I want something that cares more about teaching the concept and how/when to implement it than something that just tells you how to do it.
 
This is because while there are disadvantages to using third party dependencies, just as there are trade offs for every technical decision you make, the benefits of not dedicating a good portion of your engineering power to rebuilding a shittier version of something that already exists far outweigh the negatives
This is such a reddit tier take, I don't know why you have to restate the same argument I explicitly mentioned is repeated ad nauseam on the internet. Everyone has already read that shit 500 times, it's not adding anything. Honestly I got some serious Rust(tm) developer vibes when someone mentions heckin safety and ORM frameworks, plus it's just a loser mindset in general. My suggestion: hit the gym, delete reddit, learn fortran
 
This is such a reddit tier take, I don't know why you have to restate the same argument I explicitly mentioned is repeated ad nauseam on the internet. Everyone has already read that shit 500 times, it's not adding anything. Honestly I got some serious Rust(tm) developer vibes when someone mentions heckin safety and ORM frameworks, plus it's just a loser mindset in general. My suggestion: hit the gym, delete reddit, learn fortran

I'm not a Rust developer. The 'Not Invented Here' problem has been endemic since long before Rust was conceived. You read it so many times because so many people deal with the fallout from remaking shittier versions of existing software. I only responded to you because your post was one of the dumbest things I have ever read. You said you would rather have a simpler, broken, incorrect implementation of an XML parser than one that works.
 
your post was one of the dumbest things I have ever read
"You should never write your own software, just use libraries!!!" - great engineering, apparently. Is it worse to 'npm install left-pad' or to write a 3 line function? Everyone wants to nitpick the XML example and assume you're writing an xml library that parses millions of unknown files. What if you have one 30 line xml file, do you need an xml parsing library or do you need to have your head examined? What if it's a static corpus of xml? It's just like a Rust developer to start crying about safety and muh wheels when they don't even know the constraints or what's being built. It's just an observation that dependencies have hidden costs, but people who write in these managed languages with package managers want to cry because their zero-cost abstractions in fact have costs that they ignore or don't understand.
 
In the case of lambdas and list comprehension I want something that cares more about teaching the concept and how/when to implement it than something that just tells you how to do it.
The most straightforward use for lambdas is whenever you have a disposable function that exists only to parametrize something else, e.g. a sorting algorithm's key.
Other than that it's a very opinionated topic, and the best practice is to just follow what is already done in your codebase.
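To make that concrete, a small Python sketch of the "disposable function as a parameter" idea (the word list is made up):

```python
words = ["banana", "fig", "cherry", "date"]

# The lambda is a throwaway key function: it exists only to
# parametrize sorted(), so it doesn't deserve a name of its own.
by_length = sorted(words, key=lambda w: len(w))
print(by_length)  # ['fig', 'date', 'banana', 'cherry']

# If the key logic is reused or non-trivial, promote it to a named
# function instead.
def last_letter(w):
    return w[-1]

print(sorted(words, key=last_letter))  # ['banana', 'date', 'fig', 'cherry']
```

Same mechanism, two styles; the lambda wins only when the function is genuinely disposable.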

I'd also recommend watching tech talks for the language of your choice. Understanding when to use something will mostly come from experience, but you can cheat a little and listen to the experiences of others.
 
I want something that cares more about teaching the concept and how/when to implement it than something that just tells you how to do it.
I mainly use list comprehensions to get around explicit looping in Python: with a comprehension, the interpreter can run the loop on a specialized fast path instead of dispatching every statement individually, so it's faster than the equivalent for loop. The really big wins show up in data science libraries, where you'll pass a lambda to filter a dataset instead of writing a loop, because a naive single-threaded Python loop can be ~100x slower than the vectorized version; those libraries are usually written in C/C++.
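A small sketch of the comprehension-vs-loop point (plain Python only; the big ~100x ratios come from vectorized C/C++ library code, while a comprehension alone is a more modest win):

```python
data = list(range(10))

# Explicit loop: every iteration dispatches an append() method call
# through the interpreter.
squares_loop = []
for x in data:
    if x % 2 == 0:
        squares_loop.append(x * x)

# Comprehension: same logic as one expression, executed on a
# specialized bytecode path without the per-iteration method lookup.
squares_comp = [x * x for x in data if x % 2 == 0]

print(squares_comp)  # [0, 4, 16, 36, 64]
assert squares_loop == squares_comp
```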
 
Is it worse to 'npm install left-pad' or to write a 3 line function
what if you have one 30 line xml file, do you need an xml parsing library
People who say this usually overestimate their own skills, ability to accurately gauge problems, and determine appropriate solutions. (See next paragraph)
Would you like to post a counter-example?

Honestly I got some serious Rust(tm) developer vibes when someone mentions heckin safety
it's just a loser mindset in general
Apple fixed a serious security problem due to overfloooowing they boooooofer literally yesterday. Perhaps you should write them an email with your suggestions.
 
"You should never write your own software, just use libraries!!!" - great engineering, apparently.
Everyone who uses a library is by definition writing their own software. The software we write has enough complexity that we don't need to add to our workload by creating and supporting worse versions of existing software.
Is it worse to 'npm install left-pad' or to write a 3 line function?
JS has padStart built in (since ES2017).

Everyone wants to nitpick the XML example and assume you're writing an xml library that parses millions of unknown files. What if you have one 30 line xml file, do you need an xml parsing library or do you need to have your head examined? What if it's a static corpus of xml?
Because your example is terrible. You picked one of the oldest, most popular data formats, one with multiple mature parsing libraries in every language you can think of, and you decided it's better to write your own version.
Why are you bringing up inane situations where you only need to parse one xml file? Are you using XML to store config settings? You should have a better, more secure way of storing configs than XML. Are you asking your users to use XML to format their input? They're going to assume your application can handle XML, and probably use an editor that outputs correct XML files that your shit code will not be able to parse.

It's just an observation that dependencies have hidden costs, but people who write in these managed languages with package managers want to cry because their zero-cost abstractions in fact have costs that they ignore or don't understand.
Don't try to weasel out of your ridiculous positions. You said that dependencies have negative value, which is laughable.
You haven't actually brought up any hidden costs, other than dependencies being too big, too complicated, and too correct.
 
I wanna brush up on some concepts and learn others (like list comprehension, lambdas) and also read up on design patterns. What resources (books, tutorials, videos, etc) would you recommend for that sort of thing?
In the case of lambdas and list comprehension I want something that cares more about teaching the concept and how/when to implement it than something that just tells you how to do it.
specifically on lambdas
 
RCE vuln in 7zip found.
This is interesting because it was another integer error. I recall a very similar error in one of Google's services, which validated some JVM code or something like that, where an integer error tricked one of the mechanisms into accepting code it shouldn't have; this matters because that validation was the entire basis of the underlying security model. Ada lacks these issues because it forces the programmer to think about overflow, unless he specifically chooses modular arithmetic, and Lisp avoids these problems and more because it supports integers of arbitrary length.
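This class of bug is easy to sketch. Python's integers are arbitrary-precision (the Lisp behavior mentioned above), so C's 32-bit wraparound has to be simulated explicitly; the buffer size and counts below are made up for illustration, not taken from the actual 7zip or JVM bugs:

```python
MASK32 = 0xFFFFFFFF

def u32(n):
    """Simulate C's unsigned 32-bit wraparound arithmetic."""
    return n & MASK32

# A size check of the kind behind many integer-overflow CVEs:
# count * record_size overflows, the wrapped result passes the
# bounds check, and the code then reads far past the buffer.
buffer_len = 4096
header_count = 0x4000_0001   # attacker-controlled
record_size = 4

total = u32(header_count * record_size)
print(total)                 # 4 (wrapped; the true product is 4294967300)
assert total <= buffer_len   # the check "passes"

# With true integers (Python, Lisp bignums, Ada range checks),
# the same comparison fails as it should:
assert header_count * record_size > buffer_len
```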

I never want to use a C language library for anything, and I always choose to implement libraries I need in the language I'm writing, properly. Most often, I see the logical dependency as a giant mess, and this leads me to do nothing at all. I always win.
Apparently, compiling also extracts documentation out of the code for the inline help system. That's the kind of tight integration I was recalling. Building the system brings all kinds of additional metadata in that you just don't see on other platforms, especially proprietary ones.
This is something Lisp systems have been doing for decades. This reminds me of how unimpressed I was reading about some of the things Google's system does, such as collecting where each function is used. The Emacs command for this is M-., although it's broken on my system for some reason and I never cared to get it working.
Having said that, if you do try out Pharo, be sure to make use of the wonderful method finder.
The finder appears to be implemented as an exhaustive search, interestingly. This reminds me of someone who tried to use the --help and perhaps also the --version flag on every program in his UNIX system, which somehow completely destroyed the installation. It's fair to say a Smalltalk system would actually work, however.
It may excite you to develop that XML parser but someone will be cursing your name one day because you didn't harden yourself against the billion laughs attack or something like that.
Yes, that is one problem with partial solutions. There's a class of programmer, or really smart person in general, whom I call the intelligent idiot. An intelligent idiot can always optimize something, but he can never remove it. I've seen SIMD used to parse JSON. The Linux kernel has a hyper optimized strlen implementation. Both of these absurdly optimized pieces of code always lose to a system that simply drops the asinine requirements instead. It's far too common for people to treat requirements as immovable and unchangeable, which often leads both sides to grotesque contortions; by simply changing the interface, millions of lines of code and years of wasted time can be eliminated for almost no cost.
What resources (books, tutorials, videos, etc) would you recommend for that sort of thing?
Learn Lisp. A design pattern in Lisp is called a macro, and only written once, not repeatedly.
 
The finder appears to be implemented as an exhaustive search, interestingly. This reminds me of someone who tried to use the --help and perhaps also the --version flag on every program in his UNIX system, which somehow completely destroyed the installation. It's fair to say a Smalltalk system would actually work, however.
I had something lesser happen to me once, though I didn't save it and can't replicate it now. R's syntax is sufficiently malleable that a syntax error once made it segfault on me.

Learn Lisp. A design pattern in Lisp is called a macro, and only written once, not repeatedly.
Is there a good guide out there on when to write macros vs. functions and other beginner stuff, plus how to debug them? I heard debugging Lisp macros can be pretty gnarly.
 
Is there a good guide out there on when to write macros vs. functions and other beginner stuff, plus how to debug them?
I started out by reading the second edition of Common Lisp the Language and a list of differences between that version of the language and the final standard, but most people would rather read Practical Common Lisp, which didn't impress me.
I heard debugging Lisp macros can be pretty gnarly.
Debugging them is very easy with MACROEXPAND and MACROEXPAND-1. I sorely felt their lack when doing something I describe in the next paragraph. Macros are easy to debug because it's easy to get their output. Sure, they can run arbitrary code while expanding, but that's not very common and not very hard to debug either. Special forms don't obey the regular evaluation rules of the language. Macros can be used to get most of what a special form is, by teaching the system how to transform the macro call into equivalent code. That's all it is. If the code can work fine with the normal rules of the language, then use a function. When I notice myself writing repetitive code, or code where it's easy to forget to do something, I write a macro to correct the problem.

On that note, I wrote the following C language program in an attempt to extract some useful information from the compiler without resorting to writing a C language program for an Ada binding. Here it is:
C:
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

int
main ()
{
  ssize_t s;
  socklen_t t;

  /* Probe the numeric values the binding would hard-code.  */
  if (2 == SOCK_DGRAM)
    printf("SOCK_DGRAM = 2\n");
  else
    printf("SOCK_DGRAM /= 2\n");

  if (2 == AF_INET)
    printf("AF_INET = 2\n");
  else
    printf("AF_INET /= 2\n");

  if (32 == MSG_TRUNC)
    printf("MSG_TRUNC = 32\n");
  else
    printf("MSG_TRUNC /= 32\n");

  /* _Generic dispatches on the declared type, revealing how the
     opaque typedefs are represented on this platform.  */
  if (_Generic(s, long: 1, default: 0))
    printf("ssize_t is long\n");
  else
    printf("ssize_t isn't long\n");

  if (_Generic(t, unsigned int: 1, default: 0))
    printf("socklen_t is unsigned integer\n");
  else
    printf("socklen_t isn't unsigned integer\n");

  return !((2 == SOCK_DGRAM)
           && (2 == AF_INET)
           && (32 == MSG_TRUNC)
           && (_Generic(s,         long: 1, default: 0))
           && (_Generic(t, unsigned int: 1, default: 0)));
}
I know this code sucks, and could be written better. While Ada provides many nice and standardized mechanisms for binding to C language code, in addition to COBOL and FORTRAN, they can be insufficient for some cases. In this case, I want to get the numerical values of what pass for macros there, which isn't too terribly difficult to do manually, but I also want to get the underlying representation of certain non-standard types, which is much more annoying. We can see how I've tortured the _Generic thingamajig, which is a silly name for what is a TYPECASE in Common Lisp, to extract the barest amount of type information. Tell me there's a better way to do this. I understand why it's like this, since a C language program has no vocabulary in which to express its own types, as most decent languages do, but this is a massive pain if I want to do anything except make certain the current configuration of the library I've written will work. It helps not at all for porting to a slightly different system. I could write a C language program only using what Ada can bind perfectly, but I don't want to do that.
 
People who say this usually overestimate their own skills, ability to accurately gauge problems, and determine appropriate solutions. (See next paragraph)
That sounds like projection, thankfully I'm extremely intelligent so you won't have to worry about that
Apple fixed a serious security problem due to overfloooowing they boooooofer literally yesterday. Perhaps you should write them an email with your suggestions.
Unfortunately we can never rewrite anything because it already exists and that would be reinventing the wheel 😞
Don't try to weasel out of your ridiculous positions. You said that dependencies have negative value, which is laughable.
You come in and reply to a two-week-old message when there are many messages directly below the post pointing out the weakness of the example and making a counter-argument or agreeing with it. I don't need someone to regurgitate the common consensus that I directly referenced in my post. Obviously the outcome of any engineering decision depends on the competence of the person applying it; the value proposition depends entirely on the circumstances the decision happens in. What you're referencing at the bottom of the post was a hypothetical example of a single dependency, not all dependencies.
You haven't actually brought up any hidden costs, other than dependencies being too big, too complicated, and too correct.
Size and complexity are by definition a cost; whether or not they are hidden depends on the experience of the person using them and whether they understand the tradeoffs they are making.
JS has padStart built in (since ES2017).
I wouldn't know, I'm not a web developer, the leftpad thing was a meme a while ago

Tell me there's a better way to do this. I understand why it's like this, since a C language program has no vocabulary in which to express its own types, as most decent languages do, but this is a massive pain if I want to do anything except make certain the current configuration of the library I've written will work. It helps not at all for porting to a slightly different system. I could write a C language program only using what Ada can bind perfectly, but I don't want to do that.
Double post, but most C compilers have the -E flag to only preprocess a file; that pulls everything included into a single file, and you could write a simple parser to find the values you need from there. There might be something else in the gcc flags to help with extracting that information, not sure.
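One flag worth knowing about here is `-dM`, which combined with `-E` makes GCC and Clang dump every macro as a `#define` line instead of emitting the preprocessed source. A sketch of the "simple parser" idea over a hypothetical fragment of that dump (note the enum-backed case, where the macro just re-expands to its own name and the dump can't give you a number):

```python
import re

# Hypothetical fragment of `cc -E -dM` output.
dump = """\
#define AF_INET 2
#define SOCK_DGRAM 2
#define MSG_TRUNC MSG_TRUNC
"""

def macro_values(text):
    """Extract object-like macros whose bodies are plain integer literals."""
    values = {}
    for m in re.finditer(r"^#define (\w+) (.+)$", text, re.MULTILINE):
        name, body = m.group(1), m.group(2).strip()
        if re.fullmatch(r"-?\d+", body):
            values[name] = int(body)
    return values

print(macro_values(dump))  # {'AF_INET': 2, 'SOCK_DGRAM': 2}
# MSG_TRUNC is absent: where it's an enumerator (as in glibc), the
# macro body is just its own name, so you still need the compiler
# itself (as in the C program above) to learn the value.
```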
 
Double post but most c compilers will have the -E flag to only preprocess a file, that will pull everything included into a single file, you could write a simple parser to find the values you need from there. There might be something else in the gcc flags to help with extracting that information, not sure
I've already tried that. One guy wrote a sh program that took the last line of such output. It works decently for some values and not others. In Glibc, the value of MSG_TRUNC is MSG_TRUNC, because the value is also what passes for an enumeration type there. This doesn't help with the types either. I've used GNU GLOBAL and cscope, to no avail. I really shouldn't be surprised that something done automatically in Lisp would be a massive pain in the ass to do manually with the C language, but it certainly seems to be.
 
I've already tried that. One guy wrote a sh program that took the last line of such output. It works decently for some values and not others. In Glibc, the value of MSG_TRUNC is MSG_TRUNC, because the value is also what passes for an enumeration type there. This doesn't help with the types either. I've used GNU GLOBAL and cscope, to no avail. I really shouldn't be surprised that something done automatically in Lisp would be a massive pain in the ass to do manually with the C language, but it certainly seems to be.
First very high-level language

vs.

Higher-level than assembly, but not by a lot
 
In the case of lambdas and list comprehension I want something that cares more about teaching the concept and how/when to implement it than something that just tells you how to do it.
Besides what was mentioned earlier with respect to data science (numpy/scipy/pandas etc. can typically replace for loops more efficiently, though loops are still an option if you get stuck or whatever), it's good to use lambdas whenever writing a very small one-line function would do the same thing. For better or for worse, lambdas are limited to a single expression in Python, so it's pretty clear when to use them. A good example is supplying the key function to sort() or sorted(). List comprehensions are a bit similar in that a comprehension can be more concise than a for loop (possibly at the cost of readability if there is nesting and/or conditions), but there are further things to consider. One of them is the possibility of using generator expressions. Instead of allocating a list all at once, a generator expression gives you a generator that creates new values "just in time", saving memory, and you can consume as much or as little of the generator as you need. You should also be aware that dictionary and set comprehensions joined the party earlier than you might think (since Python 2.7, and in all versions of Python 3). One last little thing: when a generator expression is the sole argument to a function or method, you can leave off the parentheses.
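A short sketch of those last points — generator expressions, the sole-argument parentheses rule, and dict/set comprehensions (the numbers are arbitrary):

```python
nums = [3, 1, 4, 1, 5, 9, 2, 6]

# List comprehension: builds the whole list in memory at once.
squares = [n * n for n in nums]

# Generator expression: same syntax with (), but values are produced
# lazily, one at a time, as the consumer asks for them.
lazy_squares = (n * n for n in nums)

# When the genexp is the sole argument, the extra parentheses
# can be dropped:
print(sum(n * n for n in nums))  # 173

# Dict and set comprehensions (Python 2.7+ and all of Python 3):
index_of = {n: i for i, n in enumerate(nums)}
print(index_of[9])  # 5

unique = {n for n in nums}
print(sorted(unique))  # [1, 2, 3, 4, 5, 6, 9]
```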

Where does one go to start learning assembly anyway? x86 stuff, I mean. I learned 6502 asm on a whim, which is pretty easy, but x86 makes my head hurt.
I've only done a little assembly, but I know that x86/x64 has loads of extra shit added on, both for efficiency and for programmer convenience once you get used to it, and probably legacy stuff too if I had to guess. Others might have more detailed recommendations, but I saw that Modern X86 Assembly Language Programming, for example, has a very high average rating. (Only 11 ratings for the third edition, but the prior edition has a bunch more with a similar average.)
 
Back