Programming thread

I'm not saying abstraction isn't useful or necessary. I'm saying I detest your "let's program at the highest level possible" attitude when so often that just means wasteful academic masturbation for no benefit whatsoever. I'm angry that people like you keep pushing this degenerate meme and diluting the sensible voices. You are pollution.

If you manage to get your game running at a decent speed using shitty programming techniques, cool. Now imagine how much more you could have done with that computing time if you hadn't wasted it by pretending your dogma made your life easier. Maybe you did everything you wanted to do, or maybe you made compromises because you have no idea how to write performant code. Your example doesn't particularly interest me because I'm more interested in the general phenomenon of FP weenies and OOPsies retards spreading dogma and fear to the younger generation and ensuring they won't be able to write performant software either.



It's not inconvenient to me at all because I think "the most popular languages of today" are all slow pieces of shit.
I hope you program exclusively in assembly, then
 
MasturbatoryFactoryOverabstractionFactorySingleton
Absolutely, but do people still do that stuff? A lot of the worst abuses of OOP are due to people trying to write other languages in C++. These days I think people are more willing to just use another language - plus the C++ standards people are more willing to add features into the language, for better or for worse.

The last Factory I came across in the wild must have been from 20-year-old code.
 
  • Like
Reactions: Knight of the Rope
Absolutely, but do people still do that stuff? A lot of the worst abuses of OOP are due to people trying to write other languages in C++. These days I think people are more willing to just use another language - plus the C++ standards people are more willing to add features into the language, for better or for worse.
I'm a C++ programmer and the OOP abuse I've typically seen came from people exposed to too much Java or C#. Rule of thumb regarding C++ code:
  • If you see too much abstraction, this is a Java or C# programmer forced to do C++.
  • If you see too little abstraction and a peculiar naming style (sz == "size", fn == "filename"), this is a C programmer forced to do C++.
  • If you see too many templates, this is a C++ programmer who either enjoys C++ too much or thinks it should be more functional.
 
Why is it that Python is seemingly always favoured over Ruby? Is it simply critical mass or is there some advantage to Python that I've been missing all this time? Except for Rails and web design (the niggercattle of coding) it seems like Ruby's pretty well ignored.
Ruby wants to be Perl so bad, but is too cool and aloof to admit it. That'll scare away the hipsters to Python and the maniacs back to Perl.
 
This may sound inconvenient, but I have a veritable army of very smart grad students
Based. "Grad Student" is the best programming language of them all.

Your example doesn't particularly interest me because I'm more interested in the general phenomenon of FP weenies and OOPsies retards spreading dogma and fear
I'm actually curious now: what kind of full-scale, large, multi-person software development is even possible without at least one of FP/OOP?
 
I hope you program exclusively in assembly, then

I already said I think all of the popular languages are slow pieces of shit. That includes C. What you're arguing right now is: "You're probably not doing the most optimal thing, so that's justification for me to do less optimal things!" That's retarded, and because you don't understand why I'd want to fuck with the implementation details you can't even stand on the blub argument because you're getting blubbed.

What I'm doing about this problem is I'm working on my own programming language. It's also a slow piece of shit right now, but it will get better, and in the meantime it solves a lot of complexity problems for me. For example: it has no undefined behavior, it has excellent primitives, and the grammar isn't a convoluted mess. This makes it easier to do the kind of programming that I want to do.

I wasn't aware there was any serious problem with OOP dogma these days. What sort of things did you mean?
MasturbatoryFactoryOverabstractionFactorySingleton
Absolutely, but do people still do that stuff? A lot of the worst abuses of OOP are due to people trying to write other languages in C++. These days I think people are more willing to just use another language - plus the C++ standards people are more willing to add features into the language, for better or for worse.

The last Factory I came across in the wild must have been from 20-year-old code.

So first up, shit like MasturbatoryFactoryOverAbstractionFactorySingleton exists because Java has some painful holes in its design that require people to jump through ridiculous hoops. It's bonkers, but it's mostly a Java thing. What I'm talking about is the less offensive SOLID stuff, which causes way more problems than it solves. Let's take a look:

Single Responsibility Principle: Everyone's already bitched about how vague this is, so I'll leave that point alone. What irks me about SRP is that it advocates for putting as many hoops between the programmer and his work as possible. You can't just chuck a thousand lines of straight-line code into a function and call it a day, you've gotta refactor it so the logic is spread out over as many functions, classes, files, and servers as possible! A single indirection might not make your code noticeably slower, but an endless slog of indirection absolutely will, and even worse it impairs a programmer's ability to reason about his code because to read it he needs to scroll and page through his code constantly instead of, y'know, just reading down straight-line code. There is nothing wrong with large single functions as long as their guts are well organized.

Open Closed, Liskov Substitution, and Dependency Inversion Principles: No, fuck off with your inheritance. It might make individual units easier to maintain, but it makes everything else more complicated. Not only are you adding to the endless slog of indirection with dynamic dispatch, you're also ruining your ability to reason about what your code actually does. An interface is not sufficient to tell you what your program is doing because its purpose is to pretend that the implementation details don't matter. This is a debugging and testing nightmare because there are too many moving parts and not enough concrete rules about how they're supposed to interact. If you have to use polymorphism, then odds are a tagged union will serve you better just because its polymorphism is bounded.
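To make the tagged union point concrete, here's a minimal C++ sketch using std::variant; the shape types are made up for illustration:

```cpp
#include <cstdio>
#include <type_traits>
#include <variant>

// Hypothetical shape types, just for illustration: a tagged union via
// std::variant instead of a virtual Shape base class.
struct Circle { double r; };
struct Rect   { double w, h; };

using Shape = std::variant<Circle, Rect>;

// Every possible case is visible right here, and the polymorphism is
// bounded: add a new alternative and unhandled visitors fail to compile.
double area(const Shape& s) {
    return std::visit([](const auto& sh) -> double {
        if constexpr (std::is_same_v<std::decay_t<decltype(sh)>, Circle>)
            return 3.14159265358979 * sh.r * sh.r;
        else
            return sh.w * sh.h;
    }, s);
}

int main() {
    Shape shapes[] = { Circle{1.0}, Rect{2.0, 3.0} };
    for (const auto& s : shapes)
        std::printf("area: %f\n", area(s));
}
```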

Interface Segregation Principle: As a general rule of thumb this one isn't actually bad, but because it's about OOP interfaces rather than interfaces in general it turns into another fucking complexity disaster. The problem with it is that in practice it just means generating more boilerplate abstraction. Especially when you don't even know what's needed, now or in the future, it generates a lot of friction and makes refactoring more difficult because that's just one more useless fucking detail you have to deal with.

Based. "Grad Student" is the best programming language of them all.


I'm actually curious now: what kind of full-scale, large, multi-person software development is even possible without at least one of FP/OOP?

I don't understand this question. What is it about FPOOP that you think helps you build large projects? To be clear, I don't object to the tools they champion. I object to the dogma about how you're supposed to use those tools. As far as I care you can do recursion and bundle behavior with data all you want as long as it's the right tool for the job. If you need a little bit of dogma to guide you, then I suppose a tasteful amount of dependency injection generally works pretty well, since it's a straightforward way to write organized code (sketched at the end of this post).

Keep in mind when I say this that I'm not vouching for the quality of their code, but there are tons of legacy C and COBOL projects out there that are still used today. Linux is the obvious example, but also SQLite is a thing, MISRA C is used in cars, and there's fuckloads of financial software written in COBOL. It's very possible to write a lot of software without FPOOP.
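Since I mentioned it, here's about as much dependency injection as I'd ever endorse, sketched in C++ with made-up names; the point is just that dependencies get created outside and passed in:

```cpp
#include <cstdio>
#include <string>

// Tasteful dependency injection: the dependency is created outside the
// class and handed in, so a test can hand in a fake. Names are made up.
struct Logger {
    virtual void log(const std::string& msg) = 0;
    virtual ~Logger() = default;
};

struct StdoutLogger : Logger {
    void log(const std::string& msg) override { std::printf("%s\n", msg.c_str()); }
};

class OrderProcessor {
    Logger& log_;  // injected, not constructed internally
public:
    explicit OrderProcessor(Logger& log) : log_(log) {}
    void process(int id) { log_.log("processed order " + std::to_string(id)); }
};

int main() {
    StdoutLogger logger;
    OrderProcessor proc(logger);  // all the wiring happens at the top level
    proc.process(42);
}
```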
 
Keep in mind when I say this that I'm not vouching for the quality of their code, but there are tons of legacy C and COBOL projects out there that are still used today. Linux is the obvious example, but also SQLite is a thing, MISRA C is used in cars, and there's fuckloads of financial software written in COBOL. It's very possible to write a lot of software without FPOOP.
Hate to be that guy, but um ackshualley, Linux kernel is in fact structured around OOP principles in many places. I think it's easiest to see in the VFS layer if anyone wants to take a look.
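Heavily simplified, the idiom looks something like this (illustrative names; the real thing is struct file_operations and friends, written in C):

```cpp
#include <cstdio>

// Simplified, illustrative version of the kernel's idiom: a vtable built
// by hand out of function pointers, i.e. OOP-style dynamic dispatch
// without a class.
struct FileOps {
    long (*read)(void* self, char* buf, long n);
    long (*write)(void* self, const char* buf, long n);
};

struct NullFile {
    const FileOps* ops;  // the "vtable" pointer, filled in at init time
};

static long null_read(void*, char*, long)          { return 0; }
static long null_write(void*, const char*, long n) { return n; }

static const FileOps null_ops = { null_read, null_write };

int main() {
    NullFile f{ &null_ops };
    // Callers dispatch through the ops table, exactly like a virtual call.
    long wrote = f.ops->write(&f, "hi", 2);
    std::printf("wrote %ld, read %ld\n", wrote, f.ops->read(&f, nullptr, 0));
}
```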
 
I already said I think all of the popular languages are slow pieces of shit. That includes C. What you're arguing right now is: "You're probably not doing the most optimal thing, so that's justification for me to do less optimal things!" That's retarded, and because you don't understand why I'd want to fuck with the implementation details you can't even stand on the blub argument because you're getting blubbed.
No, you're retarded. Implementation details should not mix with whatever you're programming, unless you're programming an implementation. That's the whole point of abstraction.
This separation is the entire reason why I didn't go into how much I fuck with implementation details and enjoy it, and how much I like dealing with performance. You know why? Because that shit should be in a library, which is what I do. The inability to keep things separate is a mark of a primitive mind.
 
<fucking nonsense>
Move along kid. The rest of us actually program out in the real world, which involves coding with other programmers. Those layers of indirection that you're bitching about exist exactly so other people can work on the same program independently of you, without having to keep every piece of your code fresh in their mind while they're working on their code. Is it a perfect system? Of course not. But it's leagues better than what you're proposing.

Read up about what the bus factor is sometime, and why your employers (lol 🌈) want to keep it as high as possible. Which means they will always and immediately put the kibosh on faggotry like this:
What I'm doing about this problem is I'm working on my own programming language.
And this:
There is nothing wrong with large single functions as long as their guts are well organized.
And this:
No, fuck off with your inheritance. It might make individual units easier to maintain, but it makes everything else more complicated.
And this.
It's very possible to write a lot of software without FPOOP.

For all of the grief you're giving the FP spergs (some of it rightfully deserved), your head is even higher up your ass.
 
  • Agree
Reactions: Nathan Higgers
and there's fuckloads of financial software written in COBOL
This is the first time I've ever heard that mentioned in support of an argument.
adding to the endless slog of indirection with dynamic dispatch
Are we seriously quibbling over vtable pointer lookups here? If you ask me that's the perfect OOP feature. Convenient when you need it, avoidable if you don't, mostly invisible either way.
 
Single Responsibility Principle: Everyone's already bitched about how vague this is, so I'll leave that point alone. What irks me about SRP is that it advocates for putting as many hoops between the programmer and his work as possible. You can't just chuck a thousand lines of straight-line code into a function and call it a day, you've gotta refactor it so the logic is spread out over as many functions, classes, files, and servers as possible! A single indirection might not make your code noticeably slower, but an endless slog of indirection absolutely will, and even worse it impairs a programmer's ability to reason about his code because to read it he needs to scroll and page through his code constantly instead of, y'know, just reading down straight-line code. There is nothing wrong with large single functions as long as their guts are well organized.

Open Closed, Liskov Substitution, and Dependency Inversion Principles: No, fuck off with your inheritance. It might make individual units easier to maintain, but it makes everything else more complicated. Not only are you adding to the endless slog of indirection with dynamic dispatch, you're also ruining your ability to reason about what your code actually does. An interface is not sufficient to tell you what your program is doing because its purpose is to pretend that the implementation details don't matter. This is a debugging and testing nightmare because there are too many moving parts and not enough concrete rules about how they're supposed to interact. If you have to use polymorphism, then odds are a tagged union will serve you better just because its polymorphism is bounded.

Interface Segregation Principle: As a general rule of thumb this one isn't actually bad, but because it's about OOP interfaces rather than interfaces in general it turns into another fucking complexity disaster. The problem with it is that in practice it just means generating more boilerplate abstraction. Especially when you don't even know what's needed, now or in the future, it generates a lot of friction and makes refactoring more difficult because that's just one more useless fucking detail you have to deal with.
You really should spend some time understanding these things before criticizing them.
SRP is very clear and not vague at all. A responsibility is a 'reason to change': it's about managing state. It doesn't say anything about classes, files, or whatever.
Open/Closed has nothing to do with inheritance. It's about being able to inject new runtime behavior without modifying a structure. The Strategy pattern is an example of this, but it can be as simple as a function that takes a lambda as a parameter (sketched below).
Liskov is very simple, and there are mounds of research showing that following it is a good idea. Essentially, if you override a function, don't put shit that is unexpected in the override.
Dependency Inversion, again, has nothing specifically to do with inheritance; in fact it's a direct rejection of inheritance! It's the "prefer composition over inheritance" idea that OOP has had for the past 15 years, and if any of you were professionals you'd have heard about it by now. All you're doing is pulling the dependencies created inside a class out, so they're created outside that class and passed in as parameters. This way you have better control over the creation and state of dependencies (out of them all, I've seen this one abused the most in dependency-injection architectures, but that's more a symptom of insane cult test-driven development practices).
Interface Segregation Principle simply says "hey, if you're a printer interface, make the consumer of your API implement print(), but that other function called calculateInterest()? Maybe you should put that somewhere else."

For all you people that criticize OOP and suck FP dick, it's hilarious when you cannot understand some of the best practices involved in OOP for managing state and complexity.
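To make the Open/Closed point concrete, here's a minimal sketch (C++, made-up names) of new behavior injected through a lambda without touching the function that uses it:

```cpp
#include <cstdio>
#include <functional>

// Open/Closed via a strategy parameter: apply_discount itself never
// changes; new pricing behavior is injected from outside.
double apply_discount(double price, const std::function<double(double)>& rule) {
    return rule(price);
}

int main() {
    auto holiday   = [](double p) { return p * 0.80; };  // new behavior,
    auto clearance = [](double p) { return p * 0.50; };  // zero modification
    std::printf("%.2f %.2f\n",
                apply_discount(100.0, holiday),
                apply_discount(100.0, clearance));
}
```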
I already said I think all of the popular languages are slow pieces of shit. That includes C. What you're arguing right now is: "You're probably not doing the most optimal thing, so that's justification for me to do less optimal things!" That's retarded, and because you don't understand why I'd want to fuck with the implementation details you can't even stand on the blub argument because you're getting blubbed.
Enjoy being beat to market on every one of your ideas by slow programming languages
 
You really should spend some time understanding these things before criticizing them.
SRP is very clear and not vague at all. A responsibility is a 'reason to change': it's about managing state. It doesn't say anything about classes, files, or whatever.
Open/Closed has nothing to do with inheritance. It's about being able to inject new runtime behavior without modifying a structure. The Strategy pattern is an example of this, but it can be as simple as a function that takes a lambda as a parameter.
Liskov is very simple, and there are mounds of research showing that following it is a good idea. Essentially, if you override a function, don't put shit that is unexpected in the override.
Dependency Inversion, again, has nothing specifically to do with inheritance; in fact it's a direct rejection of inheritance! It's the "prefer composition over inheritance" idea that OOP has had for the past 15 years, and if any of you were professionals you'd have heard about it by now. All you're doing is pulling the dependencies created inside a class out, so they're created outside that class and passed in as parameters. This way you have better control over the creation and state of dependencies (out of them all, I've seen this one abused the most in dependency-injection architectures, but that's more a symptom of insane cult test-driven development practices).
Interface Segregation Principle simply says "hey, if you're a printer interface, make the consumer of your API implement print(), but that other function called calculateInterest()? Maybe you should put that somewhere else."

For all you people that criticize OOP and suck FP dick, it's hilarious when you cannot understand some of the best practices involved in OOP for managing state and complexity.

Enjoy being beat to market on every one of your ideas by slow programming languages
What's funny is there's nothing new or unique in the world. Everything you've laid out exists in FP just as it does in OOP. Composition is one of the best examples: in FP your only objects are data and functions. Data composes arbitrarily, and functions compose by definition. They also compose if you implement them in OO; they have one interface method, "call".
DI and O/C are just implemented by parametrising on lambdas and closures (i.e. just local state capture).
Even objects themselves can be thought of as closures. The only difference from FP is whether they're mutable or not. Interfaces and polymorphism also don't clash with FP.
Finally, state and other spookiness is just handled via other interfaces. Monads, without bells and whistles, are interfaces for composing computational processes in context. That's all.
That shit converges, just embrace the best of all worlds.
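A tiny sketch of the "objects are closures" point, in C++ since that's this thread's lingua franca (names made up):

```cpp
#include <cstdio>

// "Objects are closures": a lambda capturing mutable state by value
// behaves exactly like a one-method object with a private field.
auto make_counter() {
    return [count = 0]() mutable { return ++count; };
}

int main() {
    auto c = make_counter();  // each counter owns its own state,
    auto d = make_counter();  // like two instances of a class
    int first = c(), second = c(), other = d();
    std::printf("%d %d %d\n", first, second, other);  // 1 2 1
}
```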
 
What's funny is there's nothing new or unique in the world. Everything you've laid out exists in FP just as it does in OOP. Composition is one of the best examples: in FP your only objects are data and functions. Data composes arbitrarily, and functions compose by definition. They also compose if you implement them in OO; they have one interface method, "call".
DI and O/C are just implemented by parametrising on lambdas and closures (i.e. just local state capture).
Even objects themselves can be thought of as closures. The only difference from FP is whether they're mutable or not. Interfaces and polymorphism also don't clash with FP.
Finally, state and other spookiness is just handled via other interfaces. Monads, without bells and whistles, are interfaces for composing computational processes in context. That's all.
That shit converges, just embrace the best of all worlds.
Yeah, these are just good sensible rules. Liskov is something a little more geared towards OOP, but I could see it still applied to FP, as you still must specify a contract for a lambda, a curried function, etc.
I also think the FP/OOP divide is nonsensical, at least in modern times, since the industry has pretty much agreed that both are good approaches for different problems, and a combination of the two is usually best (within reason; obviously a switchboard is better coded in Erlang and accounting software in OOP, etc.).
Example: O/C using a strategy pattern in modern .NET, Java, or Scala, since you can just pass in a lambda as a Func<T>.
Though it took OOP developers a while, I gradually saw fewer loops and more elegant uses of list patterns. And most modern Python code is nothing but a giant fucking pile of dictionary manipulations.
 
  • Like
Reactions: Considered HARMful
Of all the things I thought could derail a programming thread, "undo button" was not one of them
Such is my power!

What I'm doing about this problem is I'm working on my own programming language. It's also a slow piece of shit right now, but it will get better, and in the meantime it solves a lot of complexity problems for me. For example: it has no undefined behavior, it has excellent primitives, and the grammar isn't a convoluted mess. This makes it easier to do the kind of programming that I want to do.
I don't care what anybody else says, reinventing the wheel is BASED! And I wish you well. I'm interested, though: how are you handling memory management? Based on what you've said so far, I presume it's going to be simple C-style memory handling.

Though it took OOP developers a while, I gradually saw fewer loops and more elegant uses of list patterns. And most modern Python code is nothing but a giant fucking pile of dictionary manipulations.
Python is really just a big dictionary of more dictionaries anyway
 
No, you're retarded. Implementation details should not mix with whatever you're programming, unless you're programming an implementation. That's the whole point of abstraction.
This separation is the entire reason why I didn't go into how much I fuck with implementation details and enjoy it, and how much I like dealing with performance. You know why? because that shit should be in a library, which is what I do. The inability to keep things separate is a mark of a primitive mind.

Why would you ever program something that, by definition, doesn't do anything? You and your ilk are fucking loons.

Move along kid. The rest of us actually program out in the real world, which involves coding with other programmers. Those layers of indirection that you're bitching about exist exactly so other people can work on the same program independently of you, without having to keep every piece of your code fresh in their mind while they're working on their code. Is it a perfect system? Of course not. But it's leagues better than what you're proposing.

Read up about what the bus factor is sometime, and why your employers (lol 🌈) want to keep it as high as possible. Which means they will always and immediately put the kibosh on faggotry like this:

And this:

And this:

And this.


For all of the grief you're giving the FP spergs (some of it rightfully deserved), your head is even higher up your ass.

First up, the second and third things you quoted are examples of practices that raise the bus factor, not lower it. Are you so inexperienced that you have not felt the pain of reading overly-factored code, or the pain of malformed implementations violating your code's assumptions? Did you know that Java's actually a somewhat pleasant language to use if you go against the grain and do things as concretely as you can? It's true! You'll write way less useless boilerplate, and the code that's actually important for understanding what your program does won't be spread across the universe. If nothing else, all other things being equal, each line of code is a liability, so a shorter solution is a better solution.

Second, you're arguing from the point of view of a big-ass enterprise shop that's got 40+ code monkeys working on their shit. Not everyone works in those conditions. I've spent most of my career working in small teams where anyone getting hit by a bus would instantly kill the project, so there's no reason at all to worry about writing code to mitigate the bus factor. Additionally, communication is easy in small teams: If you need to know something, you just tap your neighbor on the shoulder. If you need everyone to know something, you just go out to lunch, and there's no need to worry about the telephone game. This is one of the big design considerations of my language: It's meant to be a force multiplier for small teams of competent programmers. That's why it has "dangerous" features like manual memory management, operator overloading, and macros. It's not for wagies in cagies.

This is the first time I've ever heard that mentioned in support of an argument.

Are we seriously quibbling over vtable pointer lookups here? If you ask me that's the perfect OOP feature. Convenient when you need it, avoidable if you don't, mostly invisible either way.

The problem is that it's invisible, which means it piles up really fast. Moreover, this isn't just about the cost of dynamic dispatch vs. the cost of static dispatch; you also have to consider that you do not have the option to inline a function when you use dynamic dispatch. You're not introducing one level of indirection, you're introducing two levels of indirection. This is a problem because you're increasing pressure on the cache, and compilers generally can't reason about optimizations that cross function boundaries. The more abstract you make your code, the worse this gets. Of course inlining every instance of static dispatch can also put pressure on the cache if you do it carelessly, but you get my point: Dynamic dispatch is more expensive than it seems at first glance.
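To illustrate the indirection, a minimal C++ sketch; note that a sufficiently clever compiler can devirtualize trivial cases like this one, but the general point about unknown call targets stands:

```cpp
#include <cstdio>

struct Base {
    virtual int f(int x) const { return x + 1; }
    virtual ~Base() = default;
};
struct Derived : Base {
    int f(int x) const override { return x + 2; }
};

// Through a Base&, the call goes object -> vtable -> function: two loads
// before the call, and the compiler usually can't inline it because the
// target isn't known until runtime.
int dynamic_call(const Base& b, int x) { return b.f(x); }

// With the concrete type the target is known at compile time, so this
// typically compiles down to the inlined body (just x + 2).
int static_call(const Derived& d, int x) { return d.f(x); }

int main() {
    Derived d;
    std::printf("%d %d\n", dynamic_call(d, 1), static_call(d, 1));
}
```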

You really should spend some time understanding these things before criticizing them.
SRP is very clear and not vague at all. A responsibility is a 'reason to change': it's about managing state. It doesn't say anything about classes, files, or whatever.
Open/Closed has nothing to do with inheritance. It's about being able to inject new runtime behavior without modifying a structure. The Strategy pattern is an example of this, but it can be as simple as a function that takes a lambda as a parameter.
Liskov is very simple, and there are mounds of research showing that following it is a good idea. Essentially, if you override a function, don't put shit that is unexpected in the override.
Dependency Inversion, again, has nothing specifically to do with inheritance; in fact it's a direct rejection of inheritance! It's the "prefer composition over inheritance" idea that OOP has had for the past 15 years, and if any of you were professionals you'd have heard about it by now. All you're doing is pulling the dependencies created inside a class out, so they're created outside that class and passed in as parameters. This way you have better control over the creation and state of dependencies (out of them all, I've seen this one abused the most in dependency-injection architectures, but that's more a symptom of insane cult test-driven development practices).
Interface Segregation Principle simply says "hey, if you're a printer interface, make the consumer of your API implement print(), but that other function called calculateInterest()? Maybe you should put that somewhere else."

For all you people that criticize OOP and suck FP dick, it's hilarious when you cannot understand some of the best practices involved in OOP for managing state and complexity.

Enjoy being beat to market on every one of your ideas by slow programming languages

Are you stupid? SRP is entirely about how you've organized your code. If a function does a bunch of stuff, i.e. has more than one responsibility/reason to change, then SRP demands that you refactor it. Your code's structure defines how blobs of state are interpreted, so SRP can't be talking about anything else.

For OCP, so what if you use a lambda instead of writing a new implementation for an interface? That barely changes my argument.
LSP might as well be "don't write bugs when writing subtypes". Amazing!
DIP is ultimately about coding to an interface, not to concrete types, which is exactly what I'm railing against here. As you say, this can be taken to an even more absurd degree than what I'm actually arguing against.

I don't like FP or OOP, you retard. Enjoy writing shit software.

Such is my power!


I don't care what anybody else says, reinventing the wheel is BASED! And I wish you well. I'm interested, though: how are you handling memory management? Based on what you've said so far, I presume it's going to be simple C-style memory handling.


Python is really just a big dictionary of more dictionaries anyway

Our language does literally nothing to help you manage memory. There's no GC, there's no BC, there's no ARC, and there's no RAII. What our language does do is stay the fuck out of your way so you can write whatever custom allocators you want.
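The language itself isn't public, so for illustration here's the kind of allocator I mean, sketched in C++: a dumb bump/arena allocator, nothing more:

```cpp
#include <cstddef>
#include <cstdio>
#include <new>

// A bump/arena allocator: grab one block up front, hand out slices by
// bumping an offset, free everything at once by resetting. Illustrative
// only: no alignment control beyond max_align_t, no growth.
class Arena {
    std::byte* buf_;
    std::size_t cap_, used_ = 0;
public:
    explicit Arena(std::size_t cap)
        : buf_(static_cast<std::byte*>(::operator new(cap))), cap_(cap) {}
    ~Arena() { ::operator delete(buf_); }

    void* alloc(std::size_t n) {
        std::size_t a = alignof(std::max_align_t);
        std::size_t start = (used_ + a - 1) / a * a;  // round up to alignment
        if (start + n > cap_) return nullptr;         // out of space
        used_ = start + n;
        return buf_ + start;
    }
    void reset() { used_ = 0; }  // "free" everything in O(1)
};

int main() {
    Arena arena(1024);
    int* xs = static_cast<int*>(arena.alloc(16 * sizeof(int)));
    for (int i = 0; i < 16; ++i) xs[i] = i;
    std::printf("%d\n", xs[15]);
    arena.reset();  // all allocations gone at once
}
```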
 
Why would you ever program something that, by definition, doesn't do anything? You and your ilk are fucking loons.
Let me explain it slow so your primitive brain can understand: division of labor and specialization are the basis of civilization. In programming it expresses itself in abstraction, as it does with language in general.
Your question is a walking contradiction.
To program something that by definition does anything, you would only ever write one function, which is a universal Turing machine, because it is the only thing that does everything.
Since you obviously don't do that, your question implies you haven't given the subject serious thought, your technical expertise notwithstanding. Separation, choice and distinction - organization, these are the cornerstones of any construct. What is architecture without them? In software or buildings, these same principles hold true.
I do not want to live in a world of flattened undifferentiated globohomo blob.
Our language does literally nothing to help you manage memory. There's no GC, there's no BC, there's no ARC, and there's no RAII. What our language does do is stay the fuck out of your way so you can write whatever custom allocators you want.
Implying you can manage memory directly.
It's impossible because x86 does not expose memory management. You have no control over cache levels, register allocation, or renaming; it's all already abstracted. Don't you want to do everything?
 
Let me explain it slow so your primitive brain can understand: division of labor and specialization are the basis of civilization. In programming it expresses itself in abstraction, as it does with language in general.

I'm not against abstraction, you mouth breathing simpleton, and I've said that multiple times. I'm against excessive abstraction, the kind of shit that causes people like you to say things like this:

Implementation details should not mix with whatever you're programming, unless you're programming an implementation.

Imagine ever writing something that's not an implementation! What a monumental waste of time that would be.

Your question is a walking contradiction.
To program something that by definition does anything, you would only ever write one function, which is a universal Turing machine, because it is the only thing that does everything.
Since you obviously don't do that, your question implies you haven't given the subject serious thought, your technical expertise notwithstanding. Separation, choice and distinction - organization, these are the cornerstones of any construct. What is architecture without them? In software or buildings, these same principles hold true.
I do not want to live in a world of flattened undifferentiated globohomo blob.

To program something that does something specific I might need a Turing machine. Less capable automata can still be used to perform useful computations. You're playing stupid word games now, and they're not going to work.

Implying you can manage memory directly.
It's impossible because x86 does not expose memory management. You have no control over cache levels, register allocation, or renaming; it's all already abstracted. Don't you want to do everything?

Yeah, I do want to do everything. It's not my fault that Intel and AMD are assholes who don't expose the cache to the programmer. In the future I will correct this.
 
I have to say, for all the insults flying back and forth on this, I'm learning quite a bit from this kerfuffle. Some of it is above my head but it's fun to read.

I have a question for the big brains here about exception usage. I've been programming for a few years, I use exceptions plenty and I know how they work. But I have never found a good overview on the best way to approach error handling in a project. Oh, a tonne of articles and instructions on how to throw an exception or make a new type of exception, etc. But whilst I can find a bajillion articles and books on how to use SOLID, how to use OO and on and on... if there's anything that gives a really good set of best practices for exception handling it's lost in the chaff.

I feel I don't even know it well enough to form good questions on it. Off the top of my head:

Should you use exceptions for control flow? I mean, should I be returning false from a method on failure and checking for that, or should I be throwing an exception and surrounding calls to the method with try {...} catch {...}? I see a lot of the latter rather than the former.

How far up do you go before you catch and handle an exception? My IDE complains if it finds an Unhandled Exception possibility in a method and... I agree with it. To me, exceptions should be handled then and there, but other people seem to think it's fine for a method to throw an exception that passes up to a calling method, which passes up to a calling method, which then has a try/catch to pick up that exception (or particular types of exception) and then and only then have some handling.

How and when do you start introducing custom types of exception? Do you have some kind of general exception / error handling class across a whole project? And if so, how do you approach that?

I could go on - it's more of a general lack of good practices than a lack of understanding of how to actually do something. Like I say, if I want to find "this is how you throw an exception" crap, there's no end to it. If I want "composition vs. inheritance" articles, there's a legion (most of them copy-pasting each other). I have no real idea what the best practices and standard approaches are for error handling across a large project nor ever seen a really good book on the subject.
 
  • Like
Reactions: Kosher Salt
@Overly Serious
These questions are too broad to be answered in a sensible time, without going into a ton of caveats at every step. Some of the, let's call it "exception handling policy", depends on the language used.

For example, exceptions should be uncommon in Java, rare in C++ (or outright banned due to Reasons™), but ubiquitous in Python, which basically does control flow via exceptions.

Error handling is also very contextual, and overlaps heavily with the topic of logging. So if an error happens, assuming you don't want to bail out, how do you want the error to be stored or displayed? Logged to a file, transmitted over the network, displayed as a pop-up modal dialog? What if an error happens during error handling? There is no one general answer.
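That said, here is one common policy, sketched in C++ with made-up names: encode expected failures in the return type, and reserve exceptions for situations where the caller can't continue.

```cpp
#include <cstdio>
#include <optional>
#include <stdexcept>
#include <string>

// Expected failure: bad input is routine, so encode it in the type.
std::optional<int> parse_port(const std::string& s) {
    try { return std::stoi(s); }           // std::stoi throws on bad input
    catch (const std::exception&) { return std::nullopt; }
}

// Exceptional failure: the caller can't meaningfully proceed, so throw
// and let it propagate to whatever layer can actually report or recover.
int must_parse_port(const std::string& s) {
    if (auto p = parse_port(s)) return *p;
    throw std::runtime_error("invalid port: " + s);
}

int main() {
    if (auto p = parse_port("8080")) std::printf("port %d\n", *p);
    if (!parse_port("oops"))         std::printf("no port, moving on\n");
    try { must_parse_port("oops"); }
    catch (const std::exception& e) { std::printf("fatal: %s\n", e.what()); }
}
```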
 