The Linux Thread - The Autist's OS of Choice

When you say "temporary file", absolutely zero people would think "oh yeah, that includes all my user data and config files", but hey, guess what...
That's the sort of crud you accumulate when programs never go feature complete and the goalposts keep getting moved. As systemd expands (and it will certainly keep expanding), it's only going to get worse.

I tried it once a while ago (when it was still relatively limited in scope), but it just felt overloaded with stuff I neither needed nor wanted, and it made a lot of things needlessly complicated, like cronjobs. (Yes, I actually think systemd timers are needlessly complicated. Cronjobs are really simple; I just feel noobs throw their hands up at the syntax and then never try again.) Nor did I ever feel like I had a good picture of everything it was doing in the distribution I tried it in.

This was a theme with systemd: a lot of over-engineering, paradigm shifts and small gotchas that only applied to systemd and nothing else, seemingly for the sake of paradigm shifting and an express wish to NOT do it like other tools do. What actually belonged to the core also seemed very subject to change (although I did not stick with it long enough to test that theory out) and, frankly, not solid. And the dev team seemed actively hostile to being questioned about anything, ever, which I have learned over the years is usually an indicator of poor quality software. (Not something I can objectively prove, just a feel I developed with time.)

Contrary to the argument that is often brought against people like me, I actually have no interest in tinkering with my Linux installation all the time. I want to be able to set up a shell script or crontab file and then forget about it for the next five years. That alone is not something systemd would ever let me do, with its constant paradigm shifting and reinventing of the wheel. So it would either lead me to, indeed, constant tinkering, or to relying on other people to do it for me, let's face it, half-assed. Not really acceptable.
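To make the cronjob point concrete (paths and unit names below are made up for illustration), here's roughly what the same nightly job looks like both ways:

Code:
# cron: one line in the crontab, runs the script every night at 03:00
0 3 * * * /usr/local/bin/backup.sh

# systemd: two unit files plus a reload/enable step
# /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Run backup.service every night at 03:00

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target

# then: systemctl daemon-reload && systemctl enable --now backup.timer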

Now it frankly just looks ridiculous from afar. Hate is a strong word, and I would not seriously say I hate systemd, because that'd be equally ridiculous. It just seems like something that would cause me more problems than it would solve, for no good reason other than the hubris of some people thinking that the exact way they carved out system administration is the only way it should ever be done. Well, I just disagree.
 
What even is the purpose of systemd-tmpfiles?
I think Debian, until recently, didn't use tmpfs for /tmp by default, so that'd be one use case, I guess. What's funnier, though, is that there is (was?) a systemd-homed that does... something? So now we're in a situation where systemd is so bloated, it's stepping over itself to "manage" your system(d).
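For what it's worth, the whole thing boils down to the tmpfiles.d config format. A line like the following (the path is made up for illustration) tells it to create a directory at boot and purge anything in it older than ten days:

Code:
# /etc/tmpfiles.d/example.conf (hypothetical)
# Type  Path              Mode  User  Group  Age  Argument
d       /var/tmp/example  0755  root  root   10d  -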
 
Yeah, cronjobs are fine, but in the systemd era how do you handle auto-starting & restarting daemons other than with systemd? Is there something else I could be using? Genuine question.
All I can really think of is just containerizing stuff.
You can still use cron if you want to.
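If you go the cron route for that too, a dumb watchdog script is enough for most things. A minimal sketch, assuming a daemon called mydaemon (name and paths are placeholders):

Code:
#!/bin/sh
# restart-if-dead.sh - run it from cron every few minutes, e.g.:
#   */5 * * * * /usr/local/bin/restart-if-dead.sh
if ! pgrep -x mydaemon >/dev/null 2>&1; then
    /usr/local/bin/mydaemon &
    logger "restart-if-dead: mydaemon was down, restarted it"
fi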
 
quick question:

Right now I have one server which has all my hard drives attached to it, which downloads movies and shows and plays them on Jellyfin. It's a bit of a potato and at the limits of its decoding abilities. I'm having difficulty finding a replacement setup, as I want to keep using my existing SAS drives, so I was considering using it solely as NAS storage and having a secondary server just for running Jellyfin, along with a Windows VM guest with its own GPU.
Would it be better to get a PCIe ethernet card for the new server so I have an ethernet line directly connecting the two? Or is it fine just shunting traffic through the router they're both connected to?
 
quick question:

Right now I have one server which has all my hard drives attached to it, which downloads movies and shows and plays them on Jellyfin. It's a bit of a potato and at the limits of its decoding abilities. I'm having difficulty finding a replacement setup, as I want to keep using my existing SAS drives, so I was considering using it solely as NAS storage and having a secondary server just for running Jellyfin, along with a Windows VM guest with its own GPU.
Would it be better to get a PCIe ethernet card for the new server so I have an ethernet line directly connecting the two? Or is it fine just shunting traffic through the router they're both connected to?
I like handling my encode/transcode/etc. stuff on the same board as my drives. Helps avoid bottlenecks.

A GPU isn't the way, Intel Quick Sync is. Get a cheap current-gen Intel CPU and you can do more Jellyfin transcoding than with a GPU at double the price; I tested this myself when I built my current Plex server.

I personally am waiting to get 10 Gbps NICs & switches before switching to full NAS storage.
That being said, I do use the NAS as a NAS for Tube Archivist, and those videos work perfectly fine, so you could be okay without 10 Gbps; just test to find out.
I would get at least a switch between the two to ensure connectivity if the router ever dies, and that way you don't need extra NICs to connect them together.
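If you want to actually measure it before buying anything, iperf3 between the two boxes will tell you whether the (presumably gigabit) hop through the router is anywhere near being the bottleneck (the hostname below is a placeholder):

Code:
# on the NAS box:
iperf3 -s

# on the jellyfin box:
iperf3 -c nas.local -t 30
# ~940 Mbit/s means the gigabit link is saturated; even then, a couple of
# full blu-ray remux streams still fit comfortably inside that.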
 
I like handling my encode/transcode/etc. stuff on the same board as my drives. Helps avoid bottlenecks.

A GPU isn't the way, Intel Quick Sync is. Get a cheap current-gen Intel CPU and you can do more Jellyfin transcoding than with a GPU at double the price; I tested this myself when I built my current Plex server.

I personally am waiting to get 10 Gbps NICs & switches before switching to full NAS storage.
That being said, I do use the NAS as a NAS for Tube Archivist, and those videos work perfectly fine, so you could be okay without 10 Gbps; just test to find out.
I would get at least a switch between the two to ensure connectivity if the router ever dies, and that way you don't need extra NICs to connect them together.
Either way I'd be using Intel Quick Sync for Jellyfin, but the original server doesn't even have a GPU. I can put a generic ATX motherboard in my server and use the existing drive bays and SAS controller, but I'd need a custom adapter for attaching an ATX PSU, and the server would be offline for a while during the switch. Cost for either configuration is about the same.
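If you do end up on an Intel board, it's worth sanity-checking that Quick Sync is actually usable before pointing Jellyfin at it; a rough way to do that (the input file name is a placeholder):

Code:
# should list H.264/HEVC decode and encode entrypoints for the iGPU
vainfo

# quick hardware transcode test, output thrown away
ffmpeg -hwaccel qsv -i test.mkv -c:v h264_qsv -t 30 -f null -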
 
Poettering might be retarded, but imagine if an even bigger retard got control of systemd.
That would be preferable. Poettering is cunning enough to get his shit jammed into everything, and competent enough to create shit that barely works. A complete retard would force distros to fork systemd or replace it.
 
It looks to me like systemd-tmpfiles, at some point, took over management of every volatile file in the system and they never renamed it.

When you say "temporary file", absolutely zero people would think "oh yeah, that includes all my user data and config files", but hey, guess what...
Cry more, chud.

Want to know what files aren't temporary?

Just run
systemd-listfilesctl --files-only /usr/bin/systemd*

Those are the only files that really belong on your SystemD + Linux system.
It isn't a Linux thread without some useless debate about systemd.
Great point. The only real debate about systemd is whether Poettering should pay for it by:
  1. Stoning
  2. Drawing and quartering
  3. The Brazen Bull
  4. Scaphism
Personally, I would argue that either of the first two methods is overly merciful for what he has done. And frankly, is being eaten alive by worms and insects really bad enough, given Poettering's crimes?
Yeah, cronjobs are fine, but in the systemd era how do you handle auto-starting & restarting daemons other than with systemd? Is there something else I could be using? Genuine question.
All I can really think of is just containerizing stuff.
The way this used to work is that long-running services were written properly, so they didn't need to be restarted after they were started at initial boot.

If they had problems, like crashing or (far more likely, and not addressed by poetteringware) gradual memory leaks, sysadmins would either a) fix them, b) annoy whoever maintained them to fix them, or c) set up cron scripts or one of a variety of lightweight monitoring daemons to start or restart them (in the case of memory leaks).

Also, some services were not really 'started' or 'stopped' by themselves at all. Instead, whenever a port handled by the inetd superdaemon received a connection, inetd would just spin off a process running the handler for that port to deal with it. It wasn't always efficient once the 'web', with 'web pages' requiring five hundred different bloody files just to load a single page, became the main internet service, but inetd itself never crashed and never got out of control, unlike systemd. A lot of systemd's more useful functionality is just ripped off from inetd implementations.
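For anyone who never saw it, a whole service under inetd was one line in /etc/inetd.conf, something like this (the daemon path varies by system):

Code:
# service  socket  proto  wait    user  server             args
ftp        stream  tcp    nowait  root  /usr/sbin/in.ftpd  in.ftpd -l
# inetd listens on port 21 itself and forks in.ftpd per connection;
# there is nothing to "enable", "start" or babysit.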
 
Pretty much any init system with process supervision that I ever played around with restarted processes automatically by default when they exited for whatever reason, or at the very least had an option to do so. Having processes (re)start at specific times was usually just a matter of a symbolic link created by a script in a cronjob, or a kill of a PID away, e.g. with runit. Even with the most primitive busybox init you can just comment/uncomment inittab entries via sed and send SIGHUP to PID 1. Make a script of it: two lines if it's for one specific thing (see the sketch at the end of this post), a few more if you make a universal "toggle process" script.

In the beginning, the systemd crowd touted being able to handle complex process dependencies as THE killer feature of systemd and the sole reason all other inits needed to be abolished. (Do they still? It always sounded completely and utterly dishonest, tbh.) I've never used daemons that couldn't gracefully handle some other part of the system not being ready, and I also wouldn't; neither should you. If your network-dependent daemon can't recognize that there is, e.g., no network connection or a mount missing, and behave gracefully instead of DoSing your system with crashing/respawning, chances are it's shit and needs to be fixed. Who cares if ntpd or some custom, ssh-based "ping" to another system (to launch the nukes) runs into a wall because your wlan connection wasn't ready yet and just needs to try again later? Does that *truly* matter? That said, even barebones init systems like the aforementioned busybox init support running specific tasks before all the other ones.

If I really had a situation where I needed a service to start only under very specific conditions and never otherwise, the solution would still only be a short shell script with an if or two away. Having the init system handle intricate dependency graphs is inherently unnecessary, IMO. The tools to do all of this already exist in *nix systems and have existed for decades. They're simple to use, versatile and resource-friendly. Maybe these "primitive" solutions are harder to set up, but because they are so simple, they are usually also a lot more robust.
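To be concrete about the "two lines" mentioned above (service and daemon names are placeholders), this is the whole trick for runit and for busybox init:

Code:
# runit (e.g. on Void): enabling a service is a symlink into the directory
# runsvdir watches; removing the link stops and "disables" it again.
ln -s /etc/sv/mydaemon /var/service/
sv down mydaemon && rm /var/service/mydaemon

# busybox init: uncomment the respawn entry in /etc/inittab and tell
# PID 1 to re-read it.
sed -i 's|^#\(::respawn:/usr/sbin/mydaemon\)|\1|' /etc/inittab
kill -HUP 1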
 
If I really had a situation where I needed a service to start only under very specific conditions and never otherwise, the solution would still only be a short shell script with an if or two away. Having the init system handle intricate dependency graphs is inherently unnecessary, IMO. The tools to do all of this already exist in *nix systems and have existed for decades. They're simple to use, versatile and resource-friendly. Maybe these "primitive" solutions are harder to set up, but because they are so simple, they are usually also a lot more robust.
Even aside from the simple scripts, there is (was?) a whole ecosystem of process and system management/monitoring tools, from personal right up to professional grade, that automate all of this without ever being part of the init. They manage it by the revolutionary method of monitoring the process tree and issuing commands to the init, or signals directly to processes, and then observing the outcomes and acting accordingly. They can even send alerts when things aren't behaving. Everything systemd does is a duplication of effort, intended to assert control over another aspect of the system.
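monit is one well-known example; a supervision rule there is a handful of lines like this (daemon name, pidfile and commands are placeholders):

Code:
check process mydaemon with pidfile /var/run/mydaemon.pid
    start program = "/etc/init.d/mydaemon start"
    stop program  = "/etc/init.d/mydaemon stop"
    if 5 restarts within 5 cycles then alert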
 
Here's an interesting discussion about how to actually get some kind of notification, as one would have set up with normal methods of monitoring and restarting services, when using poetteringware.
While there initially was a good and reasonable way to do this in poetteringware, the poetteringer then decided to break it by redefining 'failure' in 2018 from 'a service stopping' to only 'a service stopping and then not being able to be restarted at all'. Fortunately, the servants of Poettering were able to work around his failure to understand 'failure' by using... 40-year-old unix shell scripting techniques: a shell script that now runs every time a service stops, even if it was part of an update or a restart, and does the recognition of failure itself, because Poettering had to make his bloated, clinging, necrotic octopus even worse. It will work until Poettering redefines a service 'stopping' (next year) or running a command (2026).
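The workaround boils down to something like the following sketch (unit name, script path and mail address are placeholders, not taken from that discussion): a drop-in that runs a script whenever the service stops, with the script deciding what counts as a failure from the variables systemd exports to ExecStopPost=.

Code:
# /etc/systemd/system/mydaemon.service.d/notify.conf
[Service]
ExecStopPost=/usr/local/bin/notify-stop.sh

# /usr/local/bin/notify-stop.sh
#!/bin/sh
# systemd sets $SERVICE_RESULT and $EXIT_STATUS for ExecStopPost= commands
case "$SERVICE_RESULT" in
    success) exit 0 ;;  # clean stop or restart, stay quiet
    *) echo "mydaemon stopped: $SERVICE_RESULT (exit status $EXIT_STATUS)" \
         | mail -s "mydaemon down" admin@example.com ;;
esac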
 
Anyone tried Void with KDE? Work well? Pitfalls with Plasma 6?
 
Anyone tried Void with KDE? Work well? Pitfalls with Plasma 6?
Currently running Void and KDE. Works well for the most part; I've been running NVIDIA and Wayland with it due to running two monitors with different DPI. I get visual bugs if I let my PC sleep, but a quick logoff/logon fixes it.

What was refreshing about how Void handles Plasma is that they strip the plasma-desktop packaging down to just the very base Plasma system. No Konsole/Dolphin/Konqueror etc.; it's the first distro I've seen that really minimizes the Kshit by default and lets you add onto it rather than vice versa.

The biggest pitfall has been Void lagging behind on Plasma 6 updates; I just got 6.0.5 last week, so it'll probably be another week or two till 6.1 is rolled out, though there already seems to be a pull request for it. NVIDIA drivers have stayed up to date with stable upstream, so once 555 hits stable upstream I expect it within a day or two, which should improve Wayland.
 
Happy 40th Birthday to the X Window System.
June 19, 1984: X Window System

Code:
From: rws@mit-bold (Robert W. Scheifler)
To: window@athena
Subject: window system X
Date: 19 Jun 1984 0907-EDT (Tuesday)

I've spent the last couple weeks writing a window
system for the VS100. I stole a fair amount of code
from W, surrounded it with an asynchronous rather
than a synchronous interface, and called it X. Overall
performance appears to be about twice that of W. The
code seems fairly solid at this point, although there are
still some deficiencies to be fixed up.

We at LCS have stopped using W, and are now
actively building applications on X. Anyone else using
W should seriously consider switching. This is not the
ultimate window system, but I believe it is a good
starting point for experimentation. Right at the moment
there is a CLU (and an Argus) interface to X; a C
interface is in the works. The three existing
applications are a text editor (TED), an Argus I/O
interface, and a primitive window manager. There is
no documentation yet; anyone crazy enough to
volunteer? I may get around to it eventually.

Anyone interested in seeing a demo can drop by
NE43-531, although you may want to call 3-1945
first. Anyone who wants the code can come by with a
tape. Anyone interested in hacking deficiencies, feel
free to get in touch.

 