Programming thread

I know it's not very relevant to the topic, but, what's up with a shit-ton of programmers being trannies?
Shut-in nerds whose only contact with women is via porn.

Programming isn't engineering; programming is mathematics.
This is the kind of thing people say when they don't know how to stop their leaky abstractions from pissing on the entire repo.
 
Does anyone have experience with gradle? I'm trying to build a project that requires it. After manually setting the JAVA_HOME environment variable to get it working, the build still fails because the gradle in the Debian (testing!) repositories is too old. "No problem," I thought, "I'll just build gradle from source and use that."

The problem I'm facing now is that the gradle git repository doesn't have any build instructions. The closest thing is instructions on building apps using gradle, not building gradle itself. The top level directory has a file called "build.gradle.kts" but I have no idea what to do with it.

This turned into more of a rant than a cry for help, but any advice is warmly welcomed.
 
Does anyone have experience with gradle? I'm trying to build a project that requires it. After manually setting the JAVA_HOME environment variable to get it working, the build still fails because the gradle in the Debian (testing!) repositories is too old. "No problem," I thought, "I'll just build gradle from source and use that."

The problem I'm facing now is that the gradle git repository doesn't have any build instructions. The closest thing is instructions on building apps using gradle, not building gradle itself. The top level directory has a file called "build.gradle.kts" but I have no idea what to do with it.

This turned into more of a rant than a cry for help, but any advice is warmly welcomed.
The usual method if I'm not sure is to pull down the Debian package sources and check their build stuff.

But it looks like the zip file in the releases on GitHub contains a shell script that runs gradle.
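
That script is the Gradle wrapper (`gradlew`), and the source checkout ships one too, so building shouldn't need a preinstalled gradle at all. A sketch, assuming the default build target works (check the repo's contributing docs for the real one):

```shell
# From the root of a Gradle source checkout: the wrapper script
# downloads a pinned, known-good Gradle and uses it to run the build.
./gradlew build
```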
 
Does anyone have experience with gradle? I'm trying to build a project that requires it. After manually setting the JAVA_HOME environment variable to get it working, the build still fails because the gradle in the Debian (testing!) repositories is too old. "No problem," I thought, "I'll just build gradle from source and use that."

The problem I'm facing now is that the gradle git repository doesn't have any build instructions. The closest thing is instructions on building apps using gradle, not building gradle itself. The top level directory has a file called "build.gradle.kts" but I have no idea what to do with it.

This turned into more of a rant than a cry for help, but any advice is warmly welcomed.
The easiest solution would be to use some kind of SDK manager with support for embedded gradle. The 'standard' tool for this is SDKMAN!, which can install a suitably modern Java version and Gradle for you.
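
For reference, the usual SDKMAN! flow looks like this (exact version identifiers vary; `sdk list gradle` shows what's available):

```shell
# Install SDKMAN! itself, then use it to pull in a JDK and Gradle.
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
sdk install java     # latest stable JDK by default
sdk install gradle   # latest Gradle release
```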

IntelliJ will also probably handle all of this out of the box for you if you open the gradle file as a new project. As much as I like plain text editors, big OOP languages like Smalltalk, Java, and C# really benefit from having an IDE, so if you're willing to use one then I'd get one set up if you're trying to actually do Java development.
 
Is there actually any use in learning a functional language if you've been coding professionally in stuff like Python, Java, and C# for years? I've wanted to learn a functional type programming language for the longest time, but the troon/excessively woke shit makes me convinced it should be OCaml or Haskell. Probably OCaml because the community seems to be some computer science PhDs and a single hedge fund, and I've read some pretty convincing arguments that Haskell's lazy evaluation is a bad thing. It seems to hit a sweet spot of being difficult and obscure enough that it won't end up as a weird virtue signalling thing.

Lisp bugs me because everything looks the same, I've messed around with Racket and Clojure a bit but I just don't see the advantages. At least the Lisp communities are mostly old Linux guys instead of socialist-identifying troons, though.

Also I saw APL mentioned here and I have to say it's the dumbest, most annoying language I've ever worked with. The language's manual literally jerks the language off because of "mathematical purity", because they use the little x instead of * for multiplication. Literally who gives a fuck, it's all fucking symbols. If history went a little differently we'd be using Roman numerals for everything still. Nobody even in machine learning, the field literally entirely about vector mathematics, uses APL. That's almost all Python bindings to C++, with some C, some Fortran, and a tiny tiny bit of Rust work iirc.
 
Is there actually any use in learning a functional language if you've been coding professionally in stuff like Python, Java, and C# for years? I've wanted to learn a functional type programming language for the longest time, but the troon/excessively woke shit makes me convinced it should be OCaml or Haskell. Probably OCaml because the community seems to be some computer science PhDs and a single hedge fund, and I've read some pretty convincing arguments that Haskell's lazy evaluation is a bad thing. It seems to hit a sweet spot of being difficult and obscure enough that it won't end up as a weird virtue signalling thing.

Lisp bugs me because everything looks the same, I've messed around with Racket and Clojure a bit but I just don't see the advantages. At least the Lisp communities are mostly old Linux guys instead of socialist-identifying troons, though.
Are you trying to learn a functional programming language or find internet friends on Reddit? Learn whatever interests you, broadens your understanding, and/or pays your bills. Stop worrying about "The Community".
 
I'm in the process of writing a cross-platform API in C and I need a design sanity check. I know that an API is used several different ways, even on the same project. At the start you just want to get up and running as quickly as possible, then over time you want more and more control.

So right now, at the simplest level, you just need to call Init() when you start using my API and Close() when you stop using it. But let's say you want to start handling all of the memory for Widgets yourself. Right now, you have the API doing that for you, but it allocates memory as needed and you'd rather allocate it all upfront and then use it as needed. The problem is, Close() will run as normal, expecting the Widget memory to be in need of freeing.

Are there any recommendations for how to best handle this? Best I can come up with are some flags that I set when managing memory through the API so that Close() knows not to touch it, but that seems rather inelegant.
 
Is there actually any use in learning a functional language if you've been coding professionally in stuff like Python, Java, and C# for years?
For me the use is in scripting.

Say you're writing a project in C/C++ and bring Python in for scripting. Unless you need direct control over your code's performance it's hard to draw a limit on where Python's responsibility ends in such a project. Semantically they're very similar languages, the distinction is in the modern comforts Python tooling offers in memory and dependency management and if you can get away with it you've gotta ask, "why not just do it all in Python?"

I use Scheme (Racket or Guile) for a majority of my scripting needs. Like Python, these have nice memory and dependency management and are quick to write. They're also semantically much different from my core program, and that dictates their place in my overall project. XML, JSON and any analog can be translated directly to a list of function calls in Scheme. Combine that with Scheme's ability to manipulate nested hierarchies and handle errors, and its role as a data pre-processor is clear. That leaves your core (imperative or OO) program free to focus on whatever you're doing with the data, not on conversion or sanity-checking your data. When you're done you can even throw your result back to Scheme for serving and storing the data, because it's pretty good at that too.

Very obviously this is down to personal preference. Scheme is just another tool, albeit a tool I happen to like.
 
I'm in the process of writing a cross-platform API in C and I need a design sanity check. I know that an API is used several different ways, even on the same project. At the start you just want to get up and running as quickly as possible, then over time you want more and more control.

So right now, at the simplest level, you just need to call Init() when you start using my API and Close() when you stop using it. But let's say you want to start handling all of the memory for Widgets yourself. Right now, you have the API doing that for you, but it allocates memory as needed and you'd rather allocate it all upfront and then use it as needed. The problem is, Close() will run as normal, expecting the Widget memory to be in need of freeing.

Are there any recommendations for how to best handle this? Best I can come up with are some flags that I set when managing memory through the API so that Close() knows not to touch it, but that seems rather inelegant.
Seems to me the standard ways are to make your memory functions either a define:

C:
/* api.h */
#ifndef DO_MALLOC
#define DO_MALLOC api_malloc
#endif

/* user code: define this *before* including api.h so it takes effect */
#define DO_MALLOC my_better_malloc
Then in your code you call DO_MALLOC, which the user can override at compile time. The catch is that the user has to compile your entire API themselves so the defines take effect.

Or, the "new" way, make the function in the API "weak" and the user can override it.

C:
/* api.c (gcc/clang) */
__attribute__((weak)) void *do_malloc(size_t n) { return api_malloc(n); }

/* user.c: this non-weak definition wins at link time */
void *do_malloc(size_t n) { return my_better_malloc(n); }

Then it's resolved at link time, and you can distribute your API as a library without it needing to be recompiled.

In your case, just move the memory handling in Init() and Close() into separate functions exposed one of the above ways so the user can override them, including with a blank function if they need a no-op.
 
Is there actually any use in learning a functional language if you've been coding professionally in stuff like Python, Java, and C# for years?
Since you phrased it that way, the answer is easy: yes. If nothing else, it's another tool for your programming arsenal should you ever need it, and it gives you a cool new lens with which to view your programs and how their components interact. Sometimes a functional approach makes more sense for some problem domains than OOP.

(And, importantly, that goes vice versa too: sometimes OOP really is the right way to tackle a problem. So keep that in mind too, particularly if you end up venturing into the Haskell and OCaml communities. If you come across someone that thinks functional programming is the "one true way" to do things and that it completely supersedes OOP in every way, then you've come across a cultist.)

Also I saw APL mentioned here
Don't mind him. I'm sure he was trying to do a bit and it fell flat because he's an unfunny faggot who somehow took a wrong turn into this place when he meant to go to r/programming instead.
 
In C++ you can write numbers in multiple different bases, like hexadecimal or octal, which all translate into the same numbers. There are like 4 ways to write 15 in C++. It's hard to know exactly which ones to use. 18 in base 2 is 10010; the way to make sense of binary is powers of two (2^n), each digit standing for the next power of 2. For me, it's a little hard to understand how to translate binary, especially since I'm so new to programming. I understand that different ways of writing numbers may be useful, but I'll just stick to writing numbers the normal way for now.
 
In C++ you can write numbers in multiple different bases, like hexadecimal or octal, which all translate into the same numbers. There are like 4 ways to write 15 in C++. It's hard to know exactly which ones to use. 18 in base 2 is 10010; the way to make sense of binary is powers of two (2^n), each digit standing for the next power of 2. For me, it's a little hard to understand how to translate binary, especially since I'm so new to programming. I understand that different ways of writing numbers may be useful, but I'll just stick to writing numbers the normal way for now.
It's actually pretty easy to know when to use stuff. Use the one that makes your job the easiest and keeps you from having to convert back and forth between different bases. If you never find yourself going "man, I wish this number was in binary", then don't use binary.
 
I understand that different ways of writing numbers may be useful, but I'll just stick to writing numbers the normal way for now.
That's the right idea, actually. It's useful to know that under the hood it's all just binary and that C++ gives you a way to directly work at the bit-level if you want (octal and hexadecimal are also mostly used when you're interested in the bits, but they're higher powers of two so you can break the numbers up into groups of 3 and 4 respectively rather than dealing with long strings of 1s and 0s all the time).

Most of the time you'll just be using 'normal' base-10 numbers.
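
To make the "same number, different notations" point concrete, here's a minimal C++ snippet (binary literals require C++14 or later):

```cpp
// Four spellings of the same int value, fifteen:
constexpr int dec = 15;      // decimal
constexpr int hex = 0xF;     // hexadecimal (base 16)
constexpr int oct = 017;     // octal (base 8, note the leading zero)
constexpr int bin = 0b1111;  // binary (base 2, C++14 and later)

// The base only changes the source text, not the value:
// dec, hex, oct and bin are all the same number.
static_assert(dec == hex && hex == oct && oct == bin, "all equal 15");
```

The compiler throws the notation away immediately; by the time the program runs, there is only the value.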
 
In C++ you can write numbers in multiple different bases, like hexadecimal or octal, which all translate into the same numbers. There are like 4 ways to write 15 in C++. It's hard to know exactly which ones to use. 18 in base 2 is 10010; the way to make sense of binary is powers of two (2^n), each digit standing for the next power of 2. For me, it's a little hard to understand how to translate binary, especially since I'm so new to programming. I understand that different ways of writing numbers may be useful, but I'll just stick to writing numbers the normal way for now.
Are you still a bot?
 
Lisp bugs me because everything looks the same, I've messed around with Racket and Clojure a bit but I just don't see the advantages. At least the Lisp communities are mostly old Linux guys instead of socialist-identifying troons, though.
Homoiconicity is one of the most overhyped aspects of Lisp, even if it's the reason Common Lisp can go 25 years without an update to the standard and still easily compete. The "Lisp family" as a concept is a persistent myth, by the way: Common Lisp, Scheme, Clojure, and Fennel, for instance, have basically nothing in common except parentheses. It's like pretending that C, Java, and JavaScript are the same language.
 
I'm in the process of writing a cross-platform API in C and I need a design sanity check. I know that an API is used several different ways, even on the same project. At the start you just want to get up and running as quickly as possible, then over time you want more and more control.

So right now, at the simplest level, you just need to call Init() when you start using my API and Close() when you stop using it. But let's say you want to start handling all of the memory for Widgets yourself. Right now, you have the API doing that for you, but it allocates memory as needed and you'd rather allocate it all upfront and then use it as needed. The problem is, Close() will run as normal, expecting the Widget memory to be in need of freeing.

Are there any recommendations for how to best handle this? Best I can come up with are some flags that I set when managing memory through the API so that Close() knows not to touch it, but that seems rather inelegant.
Maintain a data structure which holds pointers to memory the API is managing. Whenever you 'export' a widget for direct memory management, remove it from that data structure. Close() only cleans up what's in that data structure.
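
A minimal sketch of that idea in C (the names `api_alloc`, `api_export`, and `api_close` are made up for illustration, not from the post): every allocation goes into a registry, exporting removes the pointer, and close frees only what is left.

```c
#include <stddef.h>
#include <stdlib.h>

#define MAX_TRACKED 64

/* Registry of pointers the API still owns. */
static void *tracked[MAX_TRACKED];
static size_t tracked_count = 0;

/* Allocate a widget and record it as API-managed. */
static void *api_alloc(size_t size)
{
    if (tracked_count == MAX_TRACKED)
        return NULL;
    void *p = malloc(size);
    if (p)
        tracked[tracked_count++] = p;
    return p;
}

/* Hand ownership to the caller: drop p from the registry
   so api_close() will no longer touch it. */
static void api_export(void *p)
{
    for (size_t i = 0; i < tracked_count; i++) {
        if (tracked[i] == p) {
            tracked[i] = tracked[--tracked_count]; /* swap-remove */
            return;
        }
    }
}

/* Close frees only what the API still owns. */
static void api_close(void)
{
    for (size_t i = 0; i < tracked_count; i++)
        free(tracked[i]);
    tracked_count = 0;
}
```

A real implementation would grow the registry dynamically (or use a hash set) instead of a fixed array, but the ownership rule stays the same: whatever the caller has exported is the caller's to free.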
 
In C++ you can write numbers in multiple different bases, like hexadecimal or octal, which all translate into the same numbers. There are like 4 ways to write 15 in C++. It's hard to know exactly which ones to use. 18 in base 2 is 10010; the way to make sense of binary is powers of two (2^n), each digit standing for the next power of 2. For me, it's a little hard to understand how to translate binary, especially since I'm so new to programming. I understand that different ways of writing numbers may be useful, but I'll just stick to writing numbers the normal way for now.
You'll probably never use octal. Hexadecimal and binary notation are useful when doing bitwise operations.

For example, if you want to set the third bit of a byte called my_byte to zero, you'd use the bitwise AND (&) operator. Here's the expression with the operand written in decimal/base-10/the normal way:
my_byte = my_byte & 251;

Here it is using binary:
my_byte = my_byte & 0b11111011;

Notice how the third bit of the operand in the binary version is zero. And you're trying to set the third bit of my_byte to zero. Makes sense. If you wanted to set the fourth bit instead, you just move that zero to the left one place.

Use regular-ass numbers unless it's clear that it would make more sense to write it in a different notation. Also, you don't need to learn to convert between binary and decimal or do binary math in your head or anything. If you're on Windows, the calculator app has a programmer mode that'll handle everything.
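
One way to avoid working out magic constants like 251 by hand is to build the mask with a shift. This helper is a common idiom, not something from the post:

```c
#include <stdint.h>

/* Clear bit n of byte, counting from 0 at the least-significant end.
   1u << n has only bit n set; ~ inverts it, so the AND forces bit n
   to zero and leaves every other bit alone. */
static uint8_t clear_bit(uint8_t byte, unsigned n)
{
    return (uint8_t)(byte & ~(1u << n));
}
```

`clear_bit(my_byte, 2)` computes the same thing as `my_byte & 251`, and targeting the fourth bit instead is just `clear_bit(my_byte, 3)`.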
 
Maintain a data structure which holds pointers to memory the API is managing. Whenever you 'export' a widget for direct memory management, remove it from that data structure. Close() only cleans up what's in that data structure.
That sounds like another good option. I may do some combination of this and @davids877's solution. My ideal user-managed-memory solution is one where the user provides a pointer to some open memory and the API uses that pointer as it would any memory it allocated itself.
 
Is the demand for Java really as dead as it looks on LinkedIn? I’ve been looking around for backend and data engineering jobs in the US and Europe and nearly everything requires Python or Go, even when I specifically search for Java.

I’m perfectly fine with continuing to work in Python, but the idea of having to learn Go frightens and disgusts me.
 