The Linux Thread - The Autist's OS of Choice

How does it handle pop-up dialog boxes?
There is a floating mode that can be toggled for any window, and it's enabled by default for modals. They center in their parent app, and can be resized with Super+R and the arrow keys by default.
The first answer is mostly true; note the "can be". It doesn't handle popups and child windows automatically, you have to filter what you think should have floating behavior by window name or class in i3's config. I couldn't be bothered to do this for everything and using GIMP still felt fucking terrible, so I switched. However, the short time I spent with i3, all its faults aside, was comfy.
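For reference, the filtering looks something like this in i3's config (the class name is just an example; xprop will tell you the real ones):

Code:
# float anything that declares itself a dialog or popup
for_window [window_type="dialog"] floating enable
for_window [window_role="pop-up"] floating enable
# per-app exceptions by class
for_window [class="Gimp"] floating enable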
 
I don't think it's redundant in any way. If you like the general experience of a particular DE but want the features of a different WM than the one it ships with, then it makes sense. It can be a little finicky depending on the DE and WM, but that is the nature of things.
 
I use stock Plasma on X11 because it just werks. I also like XFCE, but you have to install a compositor separately to deal with the screen tearing.
 
Work related server, using systemd because of fucking course it's using systemd (client refused to sign off on anything that wasn't "standard", whatever that means. At least it's not redhat.)

Server died. Out of drive space. Turns out, systemd has been spamming logs eternally without ever cleaning them up, because that's the default setting. Gigabytes of logs. I have to incant a special configuration to journald to "vacuum" the logs before it will clean up its mess. What happened to simple text logs that you could rotate and purge? Fucking... fuck this stupid cancerous piece of shit software. Fuck Poettering and his absolute turd of a mind that came up with this literal waste of space. Every day I grow to hate him more.
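For anyone else in the same spot, the incantation is roughly this (the size and age limits are arbitrary examples, pick your own):

Code:
# one-off purge: keep at most 500M, or drop entries older than a week
journalctl --vacuum-size=500M
journalctl --vacuum-time=7d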
 
Work related server, using systemd because of fucking course it's using systemd […] Every day I grow to hate him more.
I have a feeling we'll be cleaning up Lennart's messes for years.

Still never seen anybody give a coherent answer why binary log files are a good idea. It narrowly beats the DNS dickery to be the very worst part of systemd, and that's saying something.
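The practical difference, for anyone who hasn't had the pleasure: with text logs any tool from the last 40 years works, while everything in the journal has to go through journalctl (the unit and file names here are just examples; they vary by distro):

Code:
# plain text
grep sshd /var/log/auth.log | tail -n 20
# binary journal
journalctl -u sshd --since "1 hour ago"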
 
journald.conf said:
SystemMaxUse= and RuntimeMaxUse= control how much disk space the journal may use up at most. SystemKeepFree= and RuntimeKeepFree= control how much disk space systemd-journald shall leave free for other uses. systemd-journald will respect both limits and use the smaller of the two values.

The first pair defaults to 10% and the second to 15% of the size of the respective file system, but each value is capped to 4G. If the file system is nearly full and either SystemKeepFree= or RuntimeKeepFree= are violated when systemd-journald is started, the limit will be raised to the percentage that is actually free. This means that if there was enough free space before and journal files were created, and subsequently something else causes the file system to fill up, journald will stop using more space, but it will not be removing existing files to reduce the footprint again, either. Also note that only archived files are deleted to reduce the space occupied by journal files. This means that, in effect, there might still be more space used than SystemMaxUse= or RuntimeMaxUse= limit after a vacuuming operation is complete.

Broken by design. What awesome software.
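At least the quote above points at the fix: cap it in /etc/systemd/journald.conf so it never gets this far again (500M is an arbitrary example):

Code:
[Journal]
SystemMaxUse=500M

Then restart the daemon with systemctl restart systemd-journald.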
 
Work related server, using systemd because of fucking course it's using systemd […] Every day I grow to hate him more.
That is what you get when you use systemd distros. Come home white man, reject modernity, embrace tradition.
 
Backups were actually better back then. Shit like DATs and mini-DATs were common. Now, outside of an industrial setting, you barely have any choices other than using other hard drives, trash like optical media, etc. It used to be a snap to set up a tape backup with full and incremental backups. You can still get tape drives, but not at any reasonable price.

Now one of your best bets is operating out of a data center and having RAID, but as we just saw recently with the Farms, literally the entire RAID can simultaneously fail and you're fucked.
In my workplace we've had at one point maybe 4-5 machines that had a simultaneous failure of all SSDs and all data was lost. In over a decade of experience I've only seen it happen once, it's incredibly rare, but it can happen. You should always assume your existing disks will fail at one point and have another location to backup anything important.

A rather unpopular opinion I have is that there really isn't any reason to run a form of redundancy (e.g. RAID) for most home servers, considering that the vast majority of home servers are mainly used to store media in a generally write-once, read-many fashion, so basically just torrenting media or saving their own files.

For those specific circumstances you rarely have an actual need for redundancy, and it can bring quite a few negatives:
* if a failure does happen, you just rely on the redundancy and never end up testing your backups or restoring from them
* it's extra money that could've been put into backup drives, which also raises the entry cost for someone looking to get into this
* the extra money invested in RAID often has people skip proper backups, especially if they're goaded into anything more than RAID1
* the constant writes to every disk raise your chance of disk death across the board; if the redundancy disk were used for backups only, its chance of dying would be greatly reduced

If you have important work data or things you genuinely need redundancy for, you can run RAID for just those, and use something like mergerfs for your media.
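Something like this in /etc/fstab pools the disks without striping anything (paths and options are just an example, check the mergerfs docs before copying):

Code:
# pool two data disks into one mount; new files land on the disk with the most free space
/mnt/disk1:/mnt/disk2  /mnt/media  fuse.mergerfs  allow_other,category.create=mfs,moveonenospc=true  0 0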

I'm betting the Linux wizards here obviously have both, test their backups and restoration processes regularly, and are super happy investing in 2x (or 3x/4x) the drives they need plus another set for backups, but realistically people who go for RAID will just skip proper backups.

I'm not saying redundancy can't be useful, but I have seen very few people with a genuine reason to use it, and I don't think it should be blanket-recommended for absolutely everything. Having a solid backup strategy is a far better priority than setting up redundancy. Hard disks die rarely, but humans are stupid every day. I bet all of us have accidentally rm -rf'ed something important at one point or another.
 
Hard disks die rarely, but humans are stupid every day. I bet all of us have accidentally rm -rf'ed something important at one point or another.
Been there, done that. I also accidentally overwrote a large external drive one morning because I was tired and careless, and dd doesn't even give you a confirmation prompt before doing something potentially stupid.
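The closest thing to a seatbelt is checking the target first and making dd show its work (the device name is a placeholder, obviously):

Code:
# double-check which disk is which before writing
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT
# status=progress at least shows what dd is doing while it does it
dd if=backup.img of=/dev/sdX bs=4M status=progress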
 
Linux wizard here, I run my entire operating system and store all of my critical files on 16 used thumb drives in a striped configuration, with a 17th for /boot. I sometimes manually back my shit up to #18 using the KDE file manager, usually every three months.
Dood! You are that Wizard from Uruguay that plays his bass backwards right? You are so funny and cool. Can you tell me how to tivo my ebay??? fr no cap 💀
 
In my workplace we've had at one point maybe 4-5 machines that had a simultaneous failure of all SSDs and all data was lost […] I bet all of us have accidentally rm -rf'ed something important at one point or another.
If you're not going to have any redundancy, at least use JBOD instead of RAID0. A single drive loss in RAID0 kills all the data; with JBOD you'll only lose the files that happened to live on the dead drive.

Storage is outrageously cheap, at least HDDs. If you’re serious enough to set up a homelab, imo there’s no reason not to double up on drives plus a few extra for a RAIDZ main server and a (slow, super compressed) RAIDZ2 backup server. Writing scripts for something like syncoid is five minutes of work.
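Five minutes is about right. A minimal sketch, assuming sanoid/syncoid is installed, with made-up pool and host names:

Code:
#!/bin/sh
# push the media dataset and all its children to the backup box over ssh
syncoid --recursive tank/media root@backup-nas:tank/media

Drop that in cron (e.g. 0 3 * * * /usr/local/bin/zfs-sync.sh) and it'll keep the second server in step.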

Then again I’m kind of a data hoarder. I have dozens of TB of old torrented films I’m probably never going to watch again, and I do back my whole NAS up to a second NAS at my other home, so just ignore my views on these things.
 