In an ideal world, having big chunks of the stack fail is preferable, because it signals the failure. It's only because failures were a fact of life in the past, and you frequently had to just endure a lot of them, that so much software is designed to smother them.

Web developer here. Maybe there's a case where it makes sense for there to be a dependency tree for init stuff, but this really isn't a good example. Take a simple web server with Apache, PHP, and MySQL, where all three are ordered to start up. All three communicate with each other via sockets (or via a local network connection, but sockets are smarter), and if the socket connection doesn't work, either because the other part hasn't started yet or because it crashed, the parts that are up can still work and perhaps even be useful to site visitors, especially if there are redundant caching layers in place. I would personally find it very weird if I tried to fire up a web stack and the whole thing failed just because one part of it failed.
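The degradation pattern described above can be sketched in a few lines of Python. The socket path, helper names, and cache shape here are all illustrative assumptions, not real MySQL driver code; the point is that a dead database socket downgrades the response rather than killing the whole stack.

```python
import socket

# Typical MySQL Unix-socket path on many Linux distros; adjust for your setup.
MYSQL_SOCKET = "/var/run/mysqld/mysqld.sock"

def db_available(path=MYSQL_SOCKET, timeout=1.0):
    """Return True if something is listening on the database's Unix socket."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        # Socket missing (DB not started yet) or connection refused (DB crashed).
        return False
    finally:
        s.close()

def fetch_article(article_id, cache, query_db):
    """Serve from the DB when it's up; otherwise fall back to a cached copy."""
    if db_available():
        return query_db(article_id)
    if article_id in cache:
        # Possibly stale, but far better for visitors than a hard 500.
        return cache[article_id]
    raise RuntimeError("database down and no cached copy")
```

With the database down, `fetch_article("a1", {"a1": "cached copy"}, some_query)` still returns the cached copy; only requests with no cached fallback actually fail.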
You can always add an additional nginx layer to obscure it from the end user.
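A minimal sketch of that nginx layer, assuming the app listens on 127.0.0.1:8080 and a static maintenance page lives under /var/www/static; upstream failures become a friendly page instead of a raw 502:

```nginx
upstream app {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        # Treat upstream error responses as our own so we control what the user sees.
        proxy_intercept_errors on;
        error_page 502 503 504 /maintenance.html;
    }

    location = /maintenance.html {
        root /var/www/static;
        internal;  # only reachable via error_page, not directly
    }
}
```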
Like, I do API development and we run the full test suite in CI. It's counterproductive to mask infrastructure errors in production.
The interface is comfier, and users end up relying on it without documenting it.

"POSIX-compliant" OSes don't mean that they comply with POSIX and nothing more. macOS is POSIX-compliant (IIRC) and it has GNU stuff in it too; it shipped bash as the default shell for years (zsh is the default now). Generally speaking, the BSDs don't have GNU stuff in their base distributions due to licensing concerns, but installing GNU tools, or the GNU versions of base tools, is just a matter of grabbing them with the package manager.
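For example (exact package names vary by BSD and release, so treat these as a sketch):

```sh
# FreeBSD: GNU versions are packaged separately, often with a "g" prefix
pkg install coreutils gsed gmake bash

# OpenBSD
pkg_add coreutils gsed gmake bash
```

The GNU binaries are typically installed under prefixed names (gls, gsed, gmake, ...), so they sit alongside the base tools rather than shadowing them.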