I'd argue the Red Hat buyout was an important move for IBM. They've been trying for a while now to ditch the image they built up over decades (overpriced proprietary black boxes and crappy servers; everyone I've known in the IT field hates IBM with a passion), and lately they've been doubling down on open products: from the OpenPOWER group (which has led to a fully FOSS workstation based on the IBM POWER9 CPU) to better Linux support for PowerPC with the new ppc64el port, which makes porting x86-oriented little-endian code easier.
It'll be interesting to see how it pans out: either it'll be a success for IBM, or it'll result in more control of Linux shifting away from Red Hat to Canonical and Google.
I hope it dies now.
They have a habit of pushing broken shit and letting the community fix it for them.
And after "contributions" like HAL, dbus, PolicyKit, ConsoleKit, systemd, PulseAudio, Gnome, and developers like Drepper and Poettering they deserve it.
As much as I dislike Poetteringware and even their attempts to push crap into the kernel, PulseAudio got "kind of" better after he dropped it and others picked up the pieces. There's still no reason to use it. You can do pretty much everything you want with ALSA (including streaming over the network); the only thing I miss with ALSA is good documentation. Honestly, in my observation, as soon as a user gets pipes and the fact that *everything* in Linux is a file, the need for a lot of software like this just vanishes.
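For instance, streaming audio to another box needs nothing more than a pipe; a rough sketch (the host and user are placeholders, and the raw format flags have to match on both ends):

    # capture CD-quality raw PCM locally, play it on a remote machine
    arecord -f cd -t raw | ssh user@jukebox aplay -f cd -t raw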
I think a lot of users come from the Windows black-box GUI way of doing things, and they kind of seek out such software when they switch to Linux (and let's face it, nobody starts with Linux) because it feels familiar. Then they just stick with it. The bullshit about this software (and Windows) is that you have to learn the very specific way it does things, which isn't applicable to anything else. Then you learn all the quirks and tricks and become a "systemD guru", which is a thing that shouldn't exist and is basically just proof that the software failed at being good. The "Unix philosophy" way of doing things is very powerful: use simple tools that do one thing and chain them together to accomplish the task. These overly complicated and monolithic solutions will never be able to live up to it. Of course you have to know what you are doing, which many people just don't; they then try to hide their insecurity by saying nonsensical things, like claiming that software with a long history (a lot of the classic, simple tools have one) is "obsolete" for some non-specific reason, and shit like that. You can weed them out pretty quickly.
Because init is a bunch of shell scripts thrown together, with decades of historical cruft and unnecessary compatibility differences between distros, and it starts services serially
Unix pipelines are usable in small situations. Admittedly, many, many small situations.
But they're rickety and shitty for serious user interfaces.
I mean, if you want to take the Unix philosophy to its ultimate conclusion, you'd go for Plan 9. It's a neat gimmick, but it doesn't scale.
Plan 9 was arguably destined for failure for being too scalable, at a time when no one was looking for that.
Here we are running what began as a timesharing mainframe OS on single-user computers, whereas Plan 9 was designed from the start to be a fully distributed system, where users sit down at limited graphical terminals that offload all file storage and expensive computation to dedicated servers on the network. Everything is a file to a far greater degree than in Unix, and all files, real and virtual, are delivered via a completely network-transparent protocol. New services are trivial to write and add to the network, and you can be sure they'll share the same fundamental user interface as everything else.
Plan 9 is what you get when experienced developers sit down to design a modern, scalable system from the ground up. Saying it was ahead of its time isn't really appropriate, because we're not there yet.
I think the difference between Plan 9 and traditional Unix/Linux is more of a user interface issue than a performance issue.
That is, the things you're describing would run just fine on a desktop, and I think they'd still provide benefits to the user/programmer, even without taking advantage of the networking aspects.
It's been a while since I've fucked with Plan 9, but from what I remember, it's a good example of what you get when you file down all the edges and really bring Unix (even single-machine Unix) to its logical conclusion. Everything is available through plain-text, filesystem interfaces. The namespace was given lots of attention, and it made extensive use of union mounts long before Docker made them popular with the unwashed masses.
I think the holy grail design for a Plan 9 program would be for the user to construct (through scripts, by hand, or whatever) its own Platonic ideal of how it should see the root filesystem, with the necessary APIs mounted in (like /net or /audio or whatever), and the program would live in its little sandboxed world.
That's taking the Unix philosophy to its logical conclusion, regardless of its implications for networking.
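A rough sketch of what that looks like in Plan 9's rc shell (bind and per-process namespaces are real; the specific paths here are made up):

    # union-mount extra binaries into /bin, visible to this process and its children only
    bind -a /n/otherfs/bin /bin
    # swap in an alternate network stack; any program that opens /net never knows
    bind /net.alt /net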
Linux still has a lot of system call interfaces to things, things that aren't directly accessible through read/write.
But the thing is, I don't really find that to be a negative, because the pure files-and-pipelines approach is really missing a lot of essential things. I think people get caught up in the fun of solving little pipeline problems (i.e., how do I parse the output of ls to get this field, then convert it to that, blah blah blah; there's a concrete sketch after this list) and they lose sight of some important considerations:
how maintainable is it? it's brittle as shit, right? how many little tools like sed, awk, tr did you use for it?
how's it going to break when you move to a different version of one of those tools?
if you had to write a bash script for this, what happens if the first process dies? does the whole script die? (I don't like having to quiz myself on goofy corner cases for shitty shell languages)
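To make the first point concrete, here's the kind of throwaway pipeline I mean; it falls over as soon as a filename contains a space:

    # fragile: field positions in ls -l output are not a stable interface
    ls -l | awk '{print $9, $5}'
    # sturdier (GNU stat): ask for the fields directly instead of parsing ls
    stat -c '%n %s' -- *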
I think the problem is that people try to encode logic in shell scripts and pipelines that are too complex for those tools. grep, awk, tr, perl, and the different shell languages were all invented to just barely stretch the utility of past Unix tools to cover a new use. All with different regexps, for example.
I think one big problem with getting too creative with Unix pipelines is that their native datatype is just a char. It forces each tool author to haggle between human-readable formats and machine-parseable formats (and then which kind of parseable format?).
It'd be neat if higher-level datatypes, maybe something simple like JSON (binary-wise, not text), were used. If it's going to a tty, it just gets dumped as readable JSON, but if it's going to another process, it gets marshaled in a binary format. Then there could be small pipeline utilities like array_ref, obj_ref, etc. You're still doing pipeline stuff, but at a higher level.
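jq already gets you part of the way there today; this is just a sketch of the flavor, not a claim that it solves the typing problem:

    # turn plain ls output into one JSON object; jq colorizes on a tty,
    # and adding -c would emit compact single-line JSON for the next stage
    ls | jq -R -s 'split("\n") | map(select(length > 0)) | {files: ., count: length}'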
> Because init is a bunch of shell scripts thrown together, with decades of historical cruft and unnecessary compatibility differences between distros, and it starts services serially
Bit of a misconception. The actual init process (i.e. PID 1) in SysV/BSD is just a tiny binary written in C.
The init scripts are for service management and are much less critical; you could replace those without replacing the actual init (e.g. daemontools, runit). They have a reputation for being a mess largely because of the shitty SysV-style scripts distros like Debian and Red Hat used.
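For the unfamiliar: a runit "service" is typically just a tiny script like this (the path and daemon are examples), which is a far cry from the SysV mess:

    #!/bin/sh
    # /etc/sv/nginx/run -- runit supervises whatever this execs
    exec 2>&1
    exec nginx -g 'daemon off;'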
It's not necessarily serial either. You can run services in the background; that was especially easy in Arch.
Plus you had stuff like xinetd that runs services when requested.
Still, they're far from perfect and systemd wasn't the first one that sought to replace them with a fancier solution.
If anything, I find bash awkward for larger things, and I wouldn't agree that pipelines are fidgety; you just have to use them in the right places. For complicated scripting I don't use bash, simply because bash scripts tend to get unreadable for me at a certain length and a week later I have no fucking idea what I was trying to do. There's nothing stopping you from using any other language; I use Lua for complicated scripts, for example. I use pipelines and the Unix way for everything from my desktop to stuff on my smartphone (for example, showing the battery charging status of my smartphone in the taskbar of my desktop when it is in range). Inter-process (and inter-machine) communication happens with tools like socat and ssh plus named pipes, residing for most purposes in /tmp, which the processes then pretty much use as message queues. You can even go fancy and do things like threading with them, although it's not all that performant. I even went the extra mile with ownership permissions and different users for some added security. It's fairly robust and incredibly flexible for me. Then again, I do this for fun; if I had to administer hundreds of machines, I'd probably take some tool that doesn't make my life too difficult and just run with it, mostly because I'd probably not be paid well enough to come up with a custom solution. Thankfully I don't have to; I think it'd kill what fun I have fairly quickly.
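To make the named-pipe part less abstract, a stripped-down sketch of the battery example; the phone-side path is a guess and varies by device:

    # create a private FIFO to act as a tiny message queue
    mkfifo -m 600 /tmp/battery.pipe
    # writer: pull the charge level off the phone over ssh
    ssh phone 'cat /sys/class/power_supply/battery/capacity' > /tmp/battery.pipe &
    # reader: blocks until the writer delivers, then hands it to the taskbar
    read level < /tmp/battery.pipe && echo "battery: ${level}%"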
I never understood the complicated solutions systemD offers; they are all for problems I never had. On a simple desktop machine, the only processes that depend on something having started before them are usually network-dependent processes, and I haven't seen a single daemon in at least a decade that doesn't handle network failure at least somewhat gracefully, so it usually doesn't even really matter if the network is up when it starts. Hell, even the humongous monster that is OpenVPN can handle even a shitty network somewhat well. And even if you don't handle process dependency at all, you can easily at least set the network up first.
Also, if your daemons keep dying in various ways all over the place all the time, you probably have bigger problems than having to restart them automatically in the right order. I find all the solutions around completely overwrought for most use cases, and in the few use cases where simpler solutions are actually too simple, it's probably better to build something custom than to deal with all the specific shit you need to know for an init system like systemd. Overengineered is the keyword. I do get that there are many different machines and usage scenarios out there, and also users that can't handle all this, and they need fancier solutions that do things with minimal interference across different use cases. You'll never get elegance with such a solution, though.
In production, things are kind of different. systemD is useful for handling the reality of production servers, because you're almost always dealing with a dependency graph of services. systemD is (one) way of doing that properly; I don't think init scripts are, though. (Like, a very typical webserver setup is nginx -> python/ruby/php -> mysql, along with python/ruby/php -> redis.)
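For what it's worth, the dependency edges look like this in a unit file (a sketch only; the service names and path are illustrative):

    # myapp.service (fragment)
    [Unit]
    After=mysql.service redis.service
    Wants=mysql.service redis.service

    [Service]
    ExecStart=/usr/bin/php-fpm --nodaemonize
    Restart=on-failure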
You'll also want to easily scale up the number of language-runtime processes, and there need to be sane ways to handle failure. These are off-the-shelf components developed by a handful of brilliant people alongside thousands of mediocre people; failure is inevitable. You'll need to easily diagnose failures and figure out whether they're anomalous or recurring, so you'll probably want syslog.
An alternative to systemD is docker-compose / docker swarm, which I'm a big fan of because it doesn't litter all over the place, so it's really easy to secure filesystem permissions. You pretty much just don't bother, because everything is sandboxed.
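The same dependency graph in a compose file reads something like this (image tags and names are placeholders):

    version: "3"
    services:
      web:
        image: nginx
        depends_on: [app]
      app:
        image: php:fpm
        depends_on: [db, cache]
      db:
        image: mysql
      cache:
        image: redis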
Writing a custom solution is fun while it's fresh in your mind, but not when it shits the bed and you just want to go home for the weekend.
Heh, also, I'm moving the cwcki server and I'm watching the upload progress with this:
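The exact one-liner is lost here, but it was something in this spirit, with pv drawing the progress bar (the paths, host, and pv itself are my reconstruction, not the original):

    # hypothetical reconstruction: stream a tarball over ssh, with pv showing
    # throughput and ETA against the precomputed total size
    tar cf - cwcki/ | pv -s "$(du -sb cwcki/ | awk '{print $1}')" | ssh newbox 'tar xf - -C /srv'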
I'm not sure why you're surprised. When you get down to it, the number of operations you can actually use while staying POSIX-compliant is relatively minuscule. It's always annoyed me that (seemingly) the only reason shell script syntaxes (bash vs dash vs zsh) aren't interchangeable is that they all decided on different ways to do the stuff POSIX compliance doesn't cover.
> In production, things are kind of different. systemD is useful for handling the reality of production servers, because you're almost always dealing with a dependency graph of services. systemD is (one) way of doing that properly; I don't think init scripts are, though. (Like, a very typical webserver setup is nginx -> python/ruby/php -> mysql, along with python/ruby/php -> redis.)
Web developer here. Maybe there's a case where it makes sense for there to be a dependency tree for init stuff, but this really isn't a good example. Take the case of a simple web server with Apache, PHP, and MySQL, where all three are ordered to start up. All three communicate with each other via sockets (or via a local network connection, but sockets are smarter), and if the socket connection doesn't work, either because the other part hasn't started yet or because it crashed, the parts that are up can still work and perhaps even be useful to site visitors, especially if there are redundant caching frameworks in place. I would personally find it very weird if I tried to fire up a web stack and the whole thing failed just because one part of it failed.
"POSIX-compliant" OSes don't mean that they comply to POSIX and nothing more. macOS is POSIX-compliant (IIRC) and it has GNU stuff in it too; bash is its default shell now. Generally speaking, the BSDs don't have GNU stuff in their base distributions due to licensing concerns, but installing GNU tools, or the GNU versions of base tools, is just a matter of grabbing them with the package manager.