Programming thread

I have to say, for all the insults flying back and forth on this, I'm learning quite a bit from this kerfuffle. Some of it is above my head but it's fun to read.

I have a question for the big brains here about exception usage. I've been programming for a few years, I use exceptions plenty and I know how they work. But I have never found a good overview on the best way to approach error handling in a project. Oh, a tonne of articles and instructions on how to throw an exception or make a new type of exception, etc. But whilst I can find a bajillion articles and books on how to use SOLID, how to use OO and on and on... if there's anything that gives a really good set of best practices for exception handling it's lost in the chaff.

I feel I don't even know it well enough to form good questions on it. Off the top of my head:

Should you use exceptions for process control? I mean, should I be returning false from a method on failure and checking for that, or should I be throwing an exception and surrounding calls to the method with try {...} catch {...}? I see a lot of the latter rather than the former.

How far up do you go before you catch and handle an exception? My IDE complains if it finds an Unhandled Exception possibility in a method and... I agree with it. To me, exceptions should be handled then and there, but other people seem to think it's fine for a method to throw an exception that passes up to a calling method, which passes it up to yet another calling method, which then has a try/catch to pick up that exception (or particular types of it), and only then is there any handling.

How and when do you start introducing custom types of exception? Do you have some kind of general exception / error handling class across a whole project? And if so, how do you approach that?

I could go on - it's more of a general lack of good practices than a lack of understanding of how to actually do something. Like I say, if I want to find "this is how you throw an exception" crap, there's no end to it. If I want "composition vs. inheritance" articles, there's a legion (most of them copy-pasting each other). I have no real idea what the best practices and standard approaches are for error handling across a large project, nor have I ever seen a really good book on the subject.
Like everything in life, I think the answer is "it depends".
For example, it depends on the language and runtime at your disposal.
Generally, control flow introduces complexity, both implicit (as with exceptions) and explicit (plain control structures). Exceptions are like goto but you don't even know where you're going, just trusting someone will catch you when you fall.
The exact opposite of control flow is logic programming. In it, control flow is abstracted entirely into the runtime and things are just resolved correctly. The price is usually performance, so unless your code is turning into unmaintainable spaghetti, I'd avoid it.
A good middle ground is pattern matching. Languages like Rust, Scala, ML and Haskell allow you to match on both type and value. They're usually compiled to optimal control flow structures.
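For instance, newer Java can now do this too (sealed types plus switch patterns, Java 21+); a minimal sketch with made-up types:
Java:
// Made-up types. The sealed hierarchy gives the compiler a closed set of cases.
sealed interface Shape permits Circle, Rect {}
record Circle(double r) implements Shape {}
record Rect(double w, double h) implements Shape {}

class Classify {
    static String describe(Shape s) {
        // One switch matches on both type and value, and the compiler
        // rejects it if a case is missing.
        return switch (s) {
            case Circle(double r) when r == 0 -> "degenerate circle";
            case Circle(double r) -> "circle of radius " + r;
            case Rect(double w, double h) when w == h -> "square with side " + w;
            case Rect(double w, double h) -> "rectangle " + w + " x " + h;
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Circle(2.0)));    // circle of radius 2.0
        System.out.println(describe(new Rect(3.0, 3.0))); // square with side 3.0
    }
}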
If you do use exceptions, think about the domain you're working in. It may warrant its own exception type(s). Apache Kafka's Java client library throws exceptions with the base class of KafkaException, or just regular Java exceptions. A RuntimeException is just what it says on the tin, no need to invent a new one. If it fits the use case, use it instead of a new type.
I agree with you that exceptions should ideally be handled locally, even if handling is just rethrowing. You should ask yourself if you're programming defensively, though. I came across a file once where every method started with try and ended with catching a general exception. To me it means the programmer didn't really know what they were doing and had no control over their inputs. Try to draw a line between what you know will succeed (no exceptions needed), what may fail and what should fail. Exceptions are for exceptional circumstances. Sometimes you want the runtime to explode in your face and throw an exception. Sometimes you want to return a "Result" type. Take a hash map lookup, for example. Should a missing key throw, or return a value which indicates a failure?
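Java's map API is a good illustration because it offers both answers side by side; a quick sketch:
Java:
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class Lookup {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 34);

        // Style 1: sentinel value. Cheap, but the caller must remember to check.
        Integer bob = ages.get("bob"); // null, no exception

        // Style 2: make the absence part of the type.
        Optional<Integer> maybeBob = Optional.ofNullable(ages.get("bob"));
        System.out.println(maybeBob.isPresent()); // false

        // Style 3: explode. Reasonable if a missing key really is a bug.
        int mustExist = Optional.ofNullable(ages.get("alice"))
                .orElseThrow(() -> new IllegalStateException("no alice"));
        System.out.println(mustExist); // 34
    }
}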
Exceptions can be nice when they let you escape deep call stacks or structures, but like I said previously, you don't know where you're going. If it's up to the programmer, I'd rather you returned an Option<T> than threw an exception.
There are also performance considerations to exceptions; on the JVM, for instance, try blocks can't get fully JITed.
 

Exceptions blow. In toy examples they're fine, but in practice they create situations where functions can exit early by surprise, which is a fantastic way to get unpredictable invalid state. Checked exceptions at least make this more explicit, but they're still not a great solution. Go, I think, is actually one of the few languages that gets this right: because Go has multiple return values, it's trivial to return an error code out of a function, and in this way you can just handle errors with normal and predictable control flow. You could also return a tagged union of the output value and an error code, and if your language has exhaustive pattern matching you won't have a choice but to handle errors if you want to do anything with the function's return value.
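Even in Java (the language the other examples in this thread use) the tagged-union version is easy to sketch; the sealed type plus an exhaustive switch (Java 21+) is what leaves you no choice but to handle the error. All names here are invented:
Java:
// Invented names. A parse function that returns a tagged union instead of throwing.
sealed interface Result permits Success, Failure {}
record Success(int value) implements Result {}
record Failure(String error) implements Result {}

class Parse {
    static Result parsePort(String s) {
        try {
            int n = Integer.parseInt(s);
            if (n < 1 || n > 65535) return new Failure("port out of range: " + n);
            return new Success(n);
        } catch (NumberFormatException e) {
            return new Failure("not a number: " + s);
        }
    }

    public static void main(String[] args) {
        // The switch must cover every case before the value is usable.
        switch (parsePort("8080")) {
            case Success(int port) -> System.out.println("listening on " + port);
            case Failure(String msg) -> System.err.println("bad config: " + msg);
        }
    }
}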

Lots of people complain about Go error handling being ugly, but at least for now I think it's a decent option.

EDIT: Defer also makes Go error handling nicer because if you return early from a function all of your defers will automatically fire. Just keep in mind that using defer for much else besides cleaning up after yourself can be error-prone.
 
Exceptions can be nice when they let you escape deep call stacks or structures, but like I said previously, you don't know where you're going. If it's up to the programmer, I'd rather you returned an Option<T> than threw an exception.
This is my general sentiment also. In C++ land I prefer to use plain old returns and error codes rather than exceptions, and the introduction of std::optional and move semantics made that approach a lot more convenient than it was before, when it was more akin to writing C.

I think the only time I'm remotely tempted to throw an exception is deep within parsing code. And even then I don't believe I'd bother with defining some special exception type - I can just throw a std::string and be done with it. The temptation usually goes away after a bit of refactoring though.
 
Three separate replies, all saying different things yet all useful. I just wish I could find a good semi-authoritative article or book on this subject. I agree, my question was too vague to be clearly answered but I'm struggling to ask more specific ones because my problem is a dearth of good general context on this subject.

C++ was mentioned. There I seldom used exceptions. I caught them, I handled them, but they weren't used per se. They were what the name suggests - an exceptional case I had to prepare for (and did). I wasn't confused about their intent back then. Now let's take a large PHP project. Everywhere I look I see them being used routinely like they're some sort of if else. It's like the entire concept of having some reference error parameter or returning false or null from a method has been replaced by just throwing exceptions.

EDIT: I love the GoTo analogy. Perfect - I think that's partly why I'm uneasy with this.
 

Another approach to take is to put assertions in your code for situations that should be impossible, and then you use something like AFL to fuzz your code. It doesn't completely replace error checking, but that's alright. The point is to use errors as an opportunity to find bugs by making your program fail fast.
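In Java terms (to match the other examples in this thread) the assertion half looks something like this; daysInMonth is a made-up example, and assertions only run with java -ea:
Java:
class Checked {
    // Made-up example: callers are supposed to validate the month first,
    // so a bad value here is a bug in this program, not a user error.
    static int daysInMonth(int month) {
        assert month >= 1 && month <= 12 : "impossible month: " + month;
        return switch (month) {
            case 4, 6, 9, 11 -> 30;
            case 2 -> 28; // ignoring leap years for brevity
            default -> 31;
        };
    }

    public static void main(String[] args) {
        // A fuzzer feeding random ints here would trip the assert immediately,
        // which is exactly the point: fail fast, find the bug.
        System.out.println(daysInMonth(2));
    }
}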
 
If you're looking for sources, look at IEEE proceedings and papers from the 70s
 
I hope you program exclusively in assembly, then
C Is Not a Low-level Language (excerpt) said:
The second optimization, loop unswitching, transforms a loop containing a conditional into a conditional with a loop in both paths. This changes flow control, contradicting the idea that a programmer knows what code will execute when low-level language code runs. It can also cause significant problems with C's notions of unspecified values and undefined behavior.
I do this manually when writing Python. Instead of one loop containing a condition, I'll write two loops with the condition hoisted outside. It is surprising to read something not written by the shit-in-the-streets types who don't know anything about computers. Usually the same people who ask on Stack Exchange why allocating a large list at once is faster than appending items to it throughout the execution of the program are also tech writers.
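For reference, the same transform sketched in Java (the language used for other examples in this thread); the names are made up:
Java:
class Unswitch {
    // Before: the loop-invariant condition is re-tested on every iteration.
    static long sumBefore(int[] xs, boolean squared) {
        long total = 0;
        for (int x : xs) {
            if (squared) total += (long) x * x;
            else total += x;
        }
        return total;
    }

    // After: the condition is hoisted out, leaving two branch-free loops.
    static long sumAfter(int[] xs, boolean squared) {
        long total = 0;
        if (squared) { for (int x : xs) total += (long) x * x; }
        else { for (int x : xs) total += x; }
        return total;
    }

    public static void main(String[] args) {
        int[] xs = {1, 2, 3};
        System.out.println(sumBefore(xs, true) == sumAfter(xs, true)); // true
    }
}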
 
I do this manually when writing Python. Instead of one loop containing a condition, I'll write two loops with the condition hoisted outside. It is surprising to read something not written by the shit-in-the-streets types who don't know anything about computers. Usually the same people who ask on Stack Exchange why allocating a large list at once is faster than appending items to it throughout the execution of the program are also tech writers.
Term rewriting is a fascinating subject
 
Are you stupid? SRP is entirely about how you've organized your code. If a function does a bunch of stuff, i.e. has more than one responsibility/reason to change, then SRP demands that you refactor it. Your code's structure defines how blobs of state are interpreted, so SRP can't be talking about anything else.

For OCP, so what if you use a lambda instead of writing a new implementation for an interface? That barely changes my argument.
LSP might as well be "don't write bugs when writing subtypes". Amazing!
DIP is ultimately about coding to an interface, not to concrete types, which is exactly what I'm railing against here. As you say, this can be taken to an even more absurd degree than what I'm actually arguing against.

I don't like FP or OOP, you retard. Enjoy writing shit software.
This really shows your lack of understanding of these principles. 1) They're principles, not commandments. Everything else being equal, they're guidelines to follow. Nothing demands you refactor anything. Just like the tard that refactors every conditional to a factory, you're doing the reverse, taking the autistic 'all or nothing' approach, without understanding the key part of programming: trade-offs.
My question is then, exactly what is your problem with any of these and what would you replace them with? My criticism of you was you didn't seem to understand any of the principles, and thinking they're all about inheritance really highlighted it.
For OCP: Would you rather have to modify a library every time you need new functionality? I'm assuming you're still in college, self-taught, or never worked on a large team with many different components. Real-world programming is very complicated, and modifying a base library for new functionality is extremely costly and error prone, not to mention all the other ancillary things like deployment.
DIP: coding to an interface is a means, not an end to this principle. I can also inject concrete types into my class. The key concept is about object creation, control, and responsibility of the object.

Lastly, the SOLID principles are a basic novice start to writing better code. A book like the Pragmatic Programmer goes into depth on more principles like tell don't ask, etc which are much more useful, IMO.
Enjoy writing shit software
All software that is useful and complex is shit. It's the nature of the beast and a fact of irreducible complexity.
 
I have a question for the big brains here about exception usage
It depends on what is idiomatic for your programming environment:

Let's say a user entered their email address incorrectly:
  1. In Java, you throw a checked exception
  2. In C#, you return an object which contains validation info and you have outer code to deal with this
  3. In Objective-C, your method assigns an Error object as an out parameter
  4. In PHP you do what your framework does
I like the C# philosophy which is to throw exceptions that are truly exceptional, and by that I mean unpredictable for normal operation. Your database connection disappearing is exceptional. A user writing their email address wrong is just users being users.
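A rough Java sketch of the "return validation info" style from option 2 (all names made up):
Java:
import java.util.List;

// "User typed a bad email" is data for the UI, not an exception.
record ValidationResult(boolean ok, List<String> problems) {
    static ValidationResult valid() { return new ValidationResult(true, List.of()); }
    static ValidationResult invalid(String why) { return new ValidationResult(false, List.of(why)); }
}

class Signup {
    static ValidationResult checkEmail(String email) {
        if (email == null || !email.contains("@")) {
            return ValidationResult.invalid("email must contain '@'");
        }
        return ValidationResult.valid();
    }

    public static void main(String[] args) {
        ValidationResult r = checkEmail("not-an-email");
        if (!r.ok()) {
            r.problems().forEach(System.out::println); // show the user, no try/catch
        }
    }
}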

When I bounce stuff back in code review I ask - would you want an email each time this happens? If not, then probably it's not exceptional.

Edit: It's also annoying as shit to debug when you use exceptions for program control because you'll keep breaking on events that aren't even interesting.
 
I'd say exceptions are for when the function can't complete its task and the correctness of the caller is dependent upon the completion of the task. Special return values should be used for cases that are common enough to be expected.
For example, out of memory exceptions: running out of memory during any particular allocation is generally a very uncommon case, one which the direct caller probably can't even handle, and furthermore the caller can't be expected to manually check every single allocation, so it makes sense to simply throw on the off chance that it occurs.
For an example of return values, concurrent dictionaries vs regular dictionaries; in a regular dictionary it is usually sensible to simply look up a value with an indexer, and throw if the key wasn't found, but in a concurrent dictionary the entries could arbitrarily change at any time due to other threads, thus handling of key-not-found cases is essentially a necessity and a regularity. For this reason concurrent types usually favor a TryGetValue pattern that returns an option, as handling those cases becomes an expectation rather than an exception.
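In Java (keeping with the other examples in this thread) the same idea falls out of Optional; tryGet here is a made-up helper:
Java:
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

class TryGet {
    // Made-up helper: absence is part of the return type, so the caller
    // is forced to deal with key-not-found, which concurrency makes routine.
    static <K, V> Optional<V> tryGet(ConcurrentHashMap<K, V> map, K key) {
        return Optional.ofNullable(map.get(key));
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        counts.put("requests", 7);

        // Another thread may remove the key at any moment, so "missing"
        // is a normal outcome here, not an exceptional one.
        tryGet(counts, "requests").ifPresentOrElse(
                n -> System.out.println("count = " + n),
                () -> System.out.println("no such counter"));
    }
}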

Our language does literally nothing to help you manage memory. There's no GC, there's no BC, there's no ARC, and there's no RAII. What our language does do is it stays the fuck out of your way so you can write whatever custom allocators you want.
Are you going to have some kind of compile time interface system? If I might pester you with suggestions, I think you should. I'm talking something to enable statically specialized generic algorithms, which could enable future users of your language to create their own RAII or GC if they wished.

Yeah, I do want to do everything. It's not my fault that Intel and AMD are assholes who don't expose the cache to the programmer. In the future I will correct this.
Bold!
 
This really shows your lack of understanding of these principles. 1) They're principles, not commandments. Everything else being equal, they're guidelines to follow. Nothing demands you refactor anything. Just like the tard that refactors every conditional to a factory, you're doing the reverse, taking the autistic 'all or nothing' approach, without understanding the key part of programming: trade-offs.

No, I'm not taking an "all or nothing" approach. I'm saying that these principles don't have any special value that makes them worthy of being principles or guidelines. They come with far too many downsides to be the default ways to write software, and yet people champion SOLID and its accompanying habits as the word of god. When people ruminate about their failures to write decent software, they don't stop and question if SOLID is the reason why they failed: They reaffirm their commitment to a dogma that, as far as I'm concerned, has never proved itself useful. If something you're doing happens to look like a SOLID shit, and that's the right design for your problem, then that's fine. My issue is with dogma, and if you bothered to understand the context of this conversation before mouthing off at me you'd know that.

My question is then, exactly what is your problem with any of these and what would you replace them with? My criticism of you was you didn't seem to understand any of the principles, and thinking they're all about inheritance really highlighted it.
...
DIP: coding to an interface is a means, not an end to this principle. I can also inject concrete types into my class. The key concept is about object creation, control, and responsibility of the object.

Inheritance is the classic way to do this shit. Trying to separate inheritance from OOP is like trying to render the fat from your mom. In recent years OOP languages have tried to spackle over their messes by yoinking concepts from Haskell and other FP languages, and I suppose that's nice, but it doesn't change much.

I wouldn't replace SOLID with anything. I don't like dogma in engineering disciplines, so why the fuck would I replace one dogma with another? I prefer to accept the reality that you cannot understand a problem until you solve it at least once. Your first prototype is going to be garbage, and that's OK. You can either refactor it or chuck it and start again knowing better how to solve your problem. You can do a lot of work very fast this way because the point is to learn about your problem, not to write quality software. You can write code as sloppy as I left your sister last night, and it's fine. Once you understand your problem well enough you'll know how best to write your software, and you'll be able to do it quickly. Other engineering disciplines understand this. Writers understand this. What the fuck is wrong with programmers that all of you retards think you can get away with publishing rough drafts if you just follow 5 easy steps? Reality doesn't work that way.

For OCP: Would you rather have to modify a library every time you need new functionality? I'm assuming you're still in college, self-taught, or never worked on a large team with many different components. Real-world programming is very complicated, and modifying a base library for new functionality is extremely costly and error prone, not to mention all the other ancillary things like deployment.

And I'm assuming as a junior you got browbeaten by an ideologically motivated senior until your spirit broke, and now you're just like him. What a shame.

Libraries are a significantly more difficult engineering challenge than making software that just does what you need. This is because libraries are usually meant to cover a wide range of use cases, and programmers need to be able to effectively use libraries without having the domain knowledge that was gained by implementing them. Those are difficult design criteria to meet, and odds are your software doesn't need to do any of that. Modularity, as nice as it can be, generally causes complexity explosions. If your software doesn't need to be modular, then you'll save yourself time and trouble by not preemptively making it modular.

The point of OOP and SOLID, actually, is to solve the expression problem, which is about modular software. I expect even you can see why I'm so down on this whole thing after everything I've said.

And yes, I would modify a library if it were the best way to do what I need.

Lastly, the SOLID principles are a basic novice start to writing better code. A book like the Pragmatic Programmer goes into depth on more principles like tell don't ask, etc which are much more useful, IMO.

Look at everything I've explained about why I don't like OOP and SOLID. Look at what I've said about how I hate that you fuckers are spreading dogma and fear to the younger generations and crippling their ability to write performant software. I don't just disagree with you, here, man. I find you revolting.

All software that is useful and complex is shit. It's the nature of the beast and a fact of irreducible complexity.

"Writing software is haaard! Therefore it's OK for me to write bad software!"

ok buddy

Are you going to have some kind of compile time interface system? If I might pester you with suggestions, I think you should. I'm talking something to enable statically specialized generic algorithms, which could enable future users of your language to create their own RAII or GC if they wished.

We already have comptime code execution and some basic metaprogramming functionality, and defer is on the roadmap for our bootstrap compiler. Long term we want to allow arbitrary AST manipulation, so eventually you should be able to define whatever you need. One of the things that I'm most excited about is playing around with my own instrumentation. I really want to make a fuzzer, for example.

In other news about our language I spent some time playing with ray marching the other day. I wrote a shitty CPU renderer and made this:

[image: glow_map.png]


When I say shitty, I really do mean shitty. That's a 500x500 pixel image, and it takes 80 ms to render. Right now I'm looking into how to use SIMD instructions to make this not painfully slow.
 
When people ruminate about their failures to write decent software, they don't stop and question if SOLID is the reason why they failed
Yet you seem to do the same, blaming it for why they failed.

A good rule that program managers and anyone running a software firm has to learn is "good developers will write good code in spite of, not because of, programming methodology". During the TDD craze, I saw early adopters jump to it and do good work, because they were good programmers that sought out new ideas; then as it progressed, I saw the standard shit programmer fuck it up. The deciding factor between good code and bad is not simply a set of fundamentals, but the skill of the developer. You seem to miss it every time I say "all things being equal".

One of my favorite articles on this is:

I wouldn't replace SOLID with anything. I don't like dogma in engineering disciplines, so why the fuck would I replace one dogma with another? I prefer to accept the reality that you cannot understand a problem until you solve it at least once. Your first prototype is going to be garbage, and that's OK. You can either refactor it or chuck it and start again knowing better how to solve your problem. You can do a lot of work very fast this way because the point is to learn about your problem, not to write quality software. You can write code as sloppy as I left your sister last night, and it's fine. Once you understand your problem well enough you'll know how best to write your software, and you'll be able to do it quickly. Other engineering disciplines understand this. Writers understand this. What the fuck is wrong with programmers that all of you retards think you can get away with publishing rough drafts if you just follow 5 easy steps? Reality doesn't work that way.
They're not dogma: it sounds like you're the one that was browbeaten by your tech lead in some juvenile show of force, and you're inflicting your PTSD on all of us. Anyone that understands these things knows that applying every principle all the time is how you end up overengineering your code into the standard shit enterprise Java megafuck.
Your comments are classic midwit thinking: smart enough to do things, but not smart enough to realize that people before you have paved a path and left lessons to synthesize. Your prototype idea is shit and shows you've not really worked in the industry for long, or at least not on hypercompetitive products where you don't have the resources or time to build one to throw away. Given the constraints on money and the world, basic principles are a good general rule of thumb. Reminds me of the guitarists that don't want to learn music theory because "rules man, fuck them, I don't need rules in my music", so they go and basically reinvent things other people have done and think they're geniuses. I've built 5 startups and sold them, which is why I have time to talk to chucklefucks like you on the boards. I was like you, but I ended up working with people that were smarter than me and knew when to listen to an idea, digest it, and learn from it.

This is because libraries are usually meant to cover a wide range of use cases, and programmers need to be able to effectively use libraries without having the domain knowledge that was gained by implementing them. Those are difficult design criteria to meet, and odds are your software doesn't need to do any of that.
This is why there are things like the Open-Closed Principle.

Look at everything I've explained about why I don't like OOP and SOLID. Look at what I've said about how I hate that you fuckers are spreading dogma and fear to the younger generations and crippling their ability to write performant software. I don't just disagree with you, here, man. I find you revolting.
That's me, I'm the motherfucking terror of younger coders. You can paste a picture of your dad on my profile and yell at me. Get it all out.

Also, lol...u mad?
 
It all depends on where in your program you want to redirect control flow after an error. If the function that encounters an error should handle it, then do it there in the catch. Otherwise you can have the function throw an exception and let the calling scope direct the control flow.

In terms of what an error should be, a class, a string, an enum, an int... it all really depends on what you want, what's done in the language, and your taste. C uses ints, but some code bases use -1 for an error, others use 0, others use 1, which is probably for the best. Before I got into C I thought it was backward that int main returned 0, which is C's boolean for false. In C you want to catch errors by asking if the function failed, so you can have something like this.
C:
#include <stdio.h>

/* sometype and its functions are hypothetical; by the usual C convention,
   sometype_init returns nonzero on failure. */
int main(void)
{
    sometype t;
    if (sometype_init(&t)) {
        fprintf(stderr, "That went wrong\n");
        return 1;
    }
    sometype_add(&t, 5);

    return 0;
}
If 0 is false and 1 is true, what a function returns is whether it had an error: true, there was an error; false, there was no error. Java's catch looks much the same.
Java:
class Test {
    public static void main(String[] args)
    {
        sometype t = new sometype(); // hypothetical type, as above
        try {
            t.init(); // this throws on failure
            t.add(5);
        } catch (Exception e) {
            System.err.println("That went wrong");
            // Alternatively you could have
            // e.printStackTrace();
            // or catch a custom Exception subclass
            // and use whatever info it carries.
        }
    }
}

The question you want to ask yourself when picking a style (class, enum, or string) is how you want to organise your errors. You could have a class that inherits Exception and then child classes customised for different sections of your program; however you want it structured.
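A bare-bones sketch of such a hierarchy, with all names invented:
Java:
// Invented names: one project-wide base class, children per subsystem.
class AppException extends Exception {
    AppException(String message) { super(message); }
    AppException(String message, Throwable cause) { super(message, cause); }
}

class ConfigException extends AppException {
    ConfigException(String message) { super(message); }
}

class StorageException extends AppException {
    StorageException(String message, Throwable cause) { super(message, cause); }
}

class Demo {
    public static void main(String[] args) {
        try {
            throw new ConfigException("missing setting: listen.port");
        } catch (AppException e) {
            // One catch handles the whole family; separate catch clauses
            // can still special-case the children when it matters.
            System.err.println(e.getMessage());
        }
    }
}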

In the tranny dumpster fire formerly known as the Rust Programming Language, the standard library has enums with Error implemented for them, which mostly just prints a debug message to fulfill the needs of the trait. There are different enums for different sections of std. So when you create your own library, it follows that you would have an enum whose variants represent the different kinds of errors that can happen.
 
Should you use exceptions for process control? I mean, should I be returning false from a method on failure and checking for that, or should I be throwing an exception and surrounding calls to the method with try {...} catch {...}? I see a lot of the latter rather than the former.
That depends on whether the condition that caused it to return false should be of concern to you. If not, then it's safe to return false and forget about it. You generally shouldn't use a catch at all (the exception being when you want to add extra information inside the catch block) unless the program has to recover from the error in some way.
How far up do you go before you catch and handle an exception? My IDE complains if it finds an Unhandled Exception possibility in a method and... I agree with it. To me, exceptions should be handled then and there, but other people seem to think it's fine for a method to throw an exception that passes up to a calling method, which passes it up to yet another calling method, which then has a try/catch to pick up that exception (or particular types of it), and only then is there any handling.
It really depends on where you want the exception to be handled. If you were creating a REST API then you would have a single top-level exception handler at the route handler that catches, logs the error (if it's something unexpected) and then responds to the request with an error message based on the thrown exception. If you're running a one-and-done type of application then you usually throw an error, log that error (or a descriptive message of the error) and exit immediately. If it's an application that has to recover then you take that into consideration.
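In Java, with made-up types standing in for whatever framework you're on, that top-level handler is roughly:
Java:
// Made-up types; the point is the single catch at the route boundary.
record HttpResponse(int status, String body) {}

class RouteHandler {
    static HttpResponse handle(String requestBody) {
        try {
            return new HttpResponse(200, process(requestBody));
        } catch (IllegalArgumentException e) {
            // Expected: bad client input. No logging needed.
            return new HttpResponse(400, e.getMessage());
        } catch (Exception e) {
            // Unexpected: log it, hide the details from the client.
            e.printStackTrace(); // stand-in for a real logger
            return new HttpResponse(500, "internal error");
        }
    }

    static String process(String body) {
        if (body.isBlank()) throw new IllegalArgumentException("empty request body");
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(handle("").status()); // 400
    }
}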
How and when do you start introducing custom types of exception? Do you have some kind of general exception / error handling class across a whole project? And if so, how do you approach that?
I usually don't bother unless I want to expand the exception to hold more information. For instance, when I wrote a REST API I would return an error code and a message if the request failed, so I made a new CodedException class which took an enum parameter holding those values. This could've been achieved by using a class per error type, but there were about 30 different types. Usually you'll use a general type like IllegalArgumentException or IllegalStateException in Java and those are good.
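Reconstructed from that description (the names are guesses):
Java:
// Guessed names: one exception class, the enum carries the ~30 error kinds.
enum ApiError {
    USER_NOT_FOUND(1001, "User not found"),
    INVALID_TOKEN(1002, "Invalid or expired token");

    final int code;
    final String message;

    ApiError(int code, String message) {
        this.code = code;
        this.message = message;
    }
}

class CodedException extends RuntimeException {
    final ApiError error;

    CodedException(ApiError error) {
        super(error.message);
        this.error = error;
    }
}

class Api {
    public static void main(String[] args) {
        try {
            throw new CodedException(ApiError.INVALID_TOKEN);
        } catch (CodedException e) {
            System.err.println(e.error.code + ": " + e.getMessage());
        }
    }
}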
 
Started reading some papers on Haskell's Core language (GHC's intermediate representation). Pretty interesting and makes me itch to try to apply it in other places
 