Programming thread

I've spent too long working with TypeScript and shit. I think it might be legitimately rotting my brain. I'm considering knocking out a library or two in another language, for a project I'm consulting on, mostly for kicks (but also because it might be beneficial to have in the future, if/when we decide to migrate to something more robust). What I can't decide is whether to go with Go, Rust, or the dark horse candidate Swift. It'd help if I could get an idea of which one generates the most complaints.

Or hell, maybe I'll just go back to cpp, even though I've barely touched it since university. What could go wrong? :story:
If it's code you're delivering to someone, I'd say Go.

Rust sounds interesting, but I feel like it's still in its hobbyist language stage. I don't know if it'll ever really break out of that niche.

And personally, while I'd love to use all kinds of exotic languages for professional work, I know I'll get bitched at or it'll come back to bite me in the ass at some point.

Also Go has some decent ideas (some borrowed from the hobbyist languages that I'd like to use) that solve a lot of typical headaches you see in other mainstream languages.

I really like any language that prefers error-aware return types over try/catch.

Try/catch usage just encourages lazy programming. I can't tell you how many times I've seen people (and full disclosure, I'm guilty of this too) just say fuck it and wrap everything in a try/catch. Often there's no agreement between caller and callee about how errors are supposed to be handled. Like at work, our plugins are written by some guys and consumed by others. Sometimes the plugins promise to never throw anything: they promise to catch their errors, log them, and return null. But I see the code consuming the plugins, and they wrap them in a try/catch anyway.

In JavaScript, unless you read every line of code of every library you use, you really have to trust that the documentation is accurate. And programmers never maintain documentation. You might as well wrap any call to any library in a try/catch, just to be safe.

Some languages try to require that function signatures include the types of any exceptions they throw (Java's checked exceptions, for example), but that gets annoying in a different way. I think it's just an attempt to patch up a flimsy model.

Error-aware return types convert the human protocol of relying on documentation into a code-based protocol that forces you, through the syntax of the language, to think about and account for every possible interaction that might fail.
 
In JavaScript, unless you read every line of code of every library you use, you really have to trust that the documentation is accurate. And programmers never maintain documentation. You might as well wrap any call to any library in a try/catch, just to be safe.
This is true of every module in whatever language you use. You will run into problems with 3rd party libraries. Diagnosing and dealing with them is a skill.

All well-maintained libraries have up-to-date documentation. They won't allow any PRs that make changes without updating the relevant documentation.
 
I see a lot of promise in .NET's Blazor WebAssembly framework, but I'm hesitant to use it for new projects. It seems the UI components can also be reused in a MAUI app without being hosted on the web. It would be nice to have one codebase that could run as a web, desktop, and mobile app; however, given how many of Microsoft's previous attempts at this have flopped, I'm sure that's not how it will go down.
 
If you had to choose a new language to work in daily, what would you use?

A fantasy version of C++ that doesn't use C's name lookup rules, doesn't have SFINAE, had modules from the beginning and not patched in, had concepts early on and not implemented by monkeys, and...

I might be describing Swift, not sure.
 
...apart from all the others
C++ is like DSP. Everyone hates it, but it just keeps winning.

I have a question about CMake configuration. Say some system library foo provides a set of .cmake files that exposes the imported target foo::foo.

I've seen some projects do this:
Code:
3rdparty/CMakeLists.txt:

find_package(foo REQUIRED)
add_library(foo INTERFACE)
target_link_libraries(foo INTERFACE foo::foo)

src/bar/CMakeLists.txt:

add_library(bar STATIC
    source1.cpp
    source2.cpp
    ...
    etc)

target_link_libraries(bar PRIVATE foo)

I've seen other projects that do this:
Code:
3rdparty/CMakeLists.txt:

find_package(foo REQUIRED)

src/bar/CMakeLists.txt

add_library(bar STATIC
    source1.cpp
    source2.cpp
    ...
    etc)

target_link_libraries(bar PRIVATE foo::foo)
Basically the first example creates a target that bar uses as an interface to foo::foo, while the second example just links bar to it directly. My question is: is there a difference? Is one better than the other?
 
A fantasy version of C++ that doesn't use C's name lookup rules, doesn't have SFINAE, had modules from the beginning and not patched in, had concepts early on and not implemented by monkeys, and...

I might be describing Swift, not sure.
You can always use Haxe: it's a source-to-source compiler that targets tons of languages, including C++ for generating native executables. But it probably doesn't get close to a good fantasy version of C++.
 
I was thinking the other day (I'm attempting to write a graphical application), that yet another bit of programming dogma fundamentally doesn't make sense. An application is a tool for manipulating a global variable - the state of the document/painting/schematic/thing you are working on. Every single button/function/subsystem needs access to the state of the application, and needs to manipulate the state of the document. (In my case, I'm not working with a literal global variable, but passing pointers to an application struct owned by the entry-point is fundamentally the same thing.)

Supposedly this is bad and we should never do this. And yet, if you try not to do it, you're going to bend your logic into byzantine pretzel knots, and if you succeed at all, you'll end up doing it anyway by stealth. Because you fundamentally can't *not* do it and end up with a tool for the manipulation of a document.

Why is there so much ... *dogma* in programming? (And why is it always so wrong?) And fanaticism? You wonder where all the medieval witch-burners went, and then you start working on a software project and encountering architecture astronauts fresh out of undergrad preaching the ONE-TRUE-WAY.

Random hot takes:

snake_case_rules, iCanNeverRembemberHowTheHellCamelCaseIsCapitalizedEspeciallyifacronymsAreINVOLVED.

Inheritance sucks. I've never seen a program improved by object hierarchies and inheritance.

std::unique_ptr solves a problem that shouldn't exist: exceptions launching the flow of your program into hyperspace as if you're still working with goto soup from the 80s.

C++ has a lot of nice tools (which I make use of), but I'm essentially writing C with some bonus features. The moment someone else joins a C++ project, it's going to mutate into a horrible object-hierarchical, template abusing, objectFactory<Factory<Factory<... temple to OCD. You can't stop them from doing this because ONE-TRUE-WAY.

Gotos might be easy to abuse, but I've seen some code that uses them beautifully for exception handling within functions. Much nicer to read than rat's nests of conditionals. I go with conditional rat's nests out of habit, but I can see the advantages. At the end of the day, your machine code is a rat's nest of gotos in some opcode soup.

For that matter, a computer is a machine for the manipulation of state. You don't want to do it in insane ways, but trying to avoid it entirely leads to madness. The only purely functional program is something that compiles to an empty file.

Yes you should reinvent the wheel.
 
Why is there so much ... *dogma* in programming? (And why is it always so wrong?) And fanaticism? You wonder where all the medieval witch-burners went, and then you start working on a software project and encountering architecture astronauts fresh out of undergrad preaching the ONE-TRUE-WAY.
A lot of it seems to be remnants of the OOP fad of the 90s, where (without evidence) management were convinced that it would make programmers 10x more efficient. You still have idiots like "Uncle Bob" hanging around, promoting their anti-patterns.

Inheritance sucks. I've never seen a program improved by object hierarchies and inheritance.
I don't know, it's useful for COM... but probably not much else. C++ concepts make a lot of traditional inheritance redundant.

std::unique_ptr solves a problem that shouldn't exist: exceptions launching the flow of your program into hyperspace as if you're still working with goto soup from the 80s.
It's useful for what it does, i.e. letting you use normal C pointers while still making sure the destructor gets called when it goes out of scope. It probably shouldn't be particularly common, though.
 
Most of C++'s problems come down to:

1. Stubborn insistence on using C's name lookup rules (unqualified lookup exception to ADL making this even worse)
2. Refusal to implement modules for decades
3. SFINAE
4. Committee's refusal to fix the STL because vErsIonIng iS bAd
5. Hideous syntax for checking type traits, which has infected concepts and made them syntactically horrible as well
6. Template specialization

Why is there so much ... *dogma* in programming? (And why is it always so wrong?) And fanaticism? You wonder where all the medieval witch-burners went, and then you start working on a software project and encountering architecture astronauts fresh out of undergrad preaching the ONE-TRUE-WAY.

Some people aren't smart enough to think for themselves, so fanatically adhering to someone they perceive as smart gives them a feeling of unearned intelligence. For example, John Lakos is a very smart man, but he is completely, absolutely, infinitely wrong about putting everything in the global namespace... but good luck convincing one of his disciples of that. For years, I worked for a guy who forbade the use of namespaces for little reason other than that Lakos didn't like them. When your code base is 10M+ lines, that causes nothing but problems.
 
Gotos might be easy to abuse, but I've seen some code that uses them beautifully for exception handling within functions. Much nicer to read than rat's nests of conditionals. I go with conditional rat's nests out of habit, but I can see the advantages. At the end of the day, your machine code is a rat's nest of gotos in some opcode soup.
I'm of the opinion that everyone should deal with assembler at some point. The compiler is just going to look at your code, see all the conditionals and skip the entire block with a jmp/goto anyway if the condition does/does not match. And you don't have to remember to match the 37 closing curly braces or the levels of nesting in Python.

Then again I get unreasonably annoyed when a simple program is using 19 different libraries and 3 different build tools.
 
Every programming language is the worst programming language in the world.
ANSI C is a work of art and a gift to mankind. C++ can go fuck itself with a rusty pipe.

I'm of the opinion that everyone should deal with assembler at some point. The compiler is just going to look at your code, see all the conditionals and skip the entire block with a jmp/goto anyway if the condition does/does not match. And you don't have to remember to match the 37 closing curly braces or the levels of nesting in Python.

Then again I get unreasonably annoyed when a simple program is using 19 different libraries and 3 different build tools.
I couldn't agree more. Hell, a scarily large number of CS students over the last 10 years can't read the plain English in compiler error output.

We're really starting to see the effects of letting incompetent boobs who are scared of pointers maintain a majority of the software we use today. Canadian government websites have become increasingly unusable and prone to system-wide crashes as our Pajeet population has been booming. Coincidence?

As an aside, I love using gotos for tidying up any allocated memory at the bottom of functions on errors. I got in the habit of doing it while working on kernel stuff, and it has really stuck with me.
 