The Linux Thread - The Autist's OS of Choice

I never had an issue with the formatting of my HDs or SSDs. Ever. But now I am sp00ked hearing about possible issues. How would I detect that my SSD is about to be killed like that?

Might want to swap it to btrfs in the future if it is an issue.

Manjaro or Arch. Never used Arch; I was a big Ubuntu user for a long time, but I don't wanna go back to Ubuntu because I know how shitty it is with GNOME.

Is there any big time investment to get into vanilla Arch? Does Manjaro really help, or is it just a meme?

Manjaro was good a year ago, and two years ago it was the go-to for starting out on Arch-based distros. Nowadays it is a meme. The devs have gotten really fucking sloppy: packages take too long to hit the official repositories, they keep fucking up SSL certificates, and there are communication issues.

If you wanna go for an Arch distro, go for Arch itself. Or you can try EndeavourOS if you want a slightly less finicky and easier version. Endeavour does a lot of what Manjaro was good at two years ago, but it comes with a lot less pre-packaged stuff and the devs aren't fucking up.
 
I never had an issue with the formatting of my HDs or SSDs. Ever. But now I am sp00ked hearing about possible issues. How would I detect that my SSD is about to be killed like that?
smartctl -a /dev/whatever
The interesting entries are “Available spare” and “Percentage used”. SSDs don't write to fixed sectors; the controller dumps your data wherever it likes, because overwriting the same flash cells over and over wears them out. That's why the drive keeps some spare blocks for wear levelling. When those run out, the drive dies.
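
If you want a quick sketch of what to look for (device names here are just examples, swap in whatever lsblk shows you):

sudo smartctl -A /dev/nvme0 | grep -Ei 'available spare|percentage used'
sudo smartctl -a /dev/sda | grep -Ei 'wear|spare|percent'

The first form is for NVMe drives, which report wear in a standard health log; SATA SSDs name their wear attributes differently per vendor, so grepping the full -a dump is the lazy but reliable option.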
 
Well that's terrifying. Gonna check it out later.
Personally, I don't store critical info on SSDs for too long and what I do store on them is backed up. I use SSDs for speed, everything else goes on platters.
That said, if you do some basic backup maintenance, disk failures in general shouldn't be a grave concern. Buy a couple of externals at least and rotate them out periodically or something, so you'd need all 3 disks to die for catastrophic loss. That should be the bare minimum.
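
For the bare-minimum rotation, something as dumb as this per swap is enough (the paths and mount points are made up, adjust to your setup):

rsync -a --delete /data/ /mnt/external-a/
sync && umount /mnt/external-a

Mirror the live data onto whichever external is plugged in that week, flush, unplug, and grab the other drive next time.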
 
Personally, I don't store critical info on SSDs for too long and what I do store on them is backed up. I use SSDs for speed, everything else goes on platters.
That said, if you do some basic backup maintenance, disk failures in general shouldn't be a grave concern. Buy a couple of externals at least and rotate them out periodically or something, so you'd need all 3 disks to die for catastrophic loss. That should be the bare minimum.
That sounds incredibly paranoid. We're not in the era of IBM Deathstars anymore.
 
Personally, I don't store critical info on SSDs for too long and what I do store on them is backed up. I use SSDs for speed, everything else goes on platters.
SSDs are better for longevity than spinning rust now, and have been for years. Enterprise gear, where reliability is everything, is mostly flash now. The niche HDDs still have is for archiving vast quantities of data.

If you want to secure your data, just keeping it on external drives isn't enough; you'll also need checksums and parity. With at least two drives you can use ZFS in a mirror configuration, which will keep your data safe in case bits rot or either piece of hardware fails. It's also really easy to send ZFS snapshots onto external drives (which should also be in a RAID so the data won't get corrupted). Set up a NAS with WireGuard in your office or at a relative's place; that way your physical backup is in a separate location, protecting you from theft or fire. You can also easily encrypt ZFS pools, making it “safe” to leave even personal data on your offsite backup.
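
Roughly what that looks like in practice; the pool and dataset names below are placeholders, not gospel:

zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/data
zfs snapshot tank/data@2024-01-01
zfs send tank/data@2024-01-01 | zfs recv backup/data

The mirror gives ZFS a second copy to heal from when a checksum fails, the snapshot is the point-in-time state, and the send/recv pipe replicates it onto the external or offsite pool.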
 
I am paranoid about losing data.
Same. Had it happen once too often, so now I've got a NAS, local offline backups, and I'm installing an "off-site" backup first chance I get (which is just going to be another NAS in a shed at the other end of the garden for now. Would have done that last one sooner, but I've been low-key expecting to move home for the last couple of years, so I kept putting it off).
 
SSDs are better for longevity than spinning rust now, and have been for years. Enterprise gear, where reliability is everything, is mostly flash now. The niche HDDs still have is for archiving vast quantities of data.

If you want to secure your data, just keeping it on external drives isn't enough; you'll also need checksums and parity. With at least two drives you can use ZFS in a mirror configuration, which will keep your data safe in case bits rot or either piece of hardware fails. It's also really easy to send ZFS snapshots onto external drives (which should also be in a RAID so the data won't get corrupted). Set up a NAS with WireGuard in your office or at a relative's place; that way your physical backup is in a separate location, protecting you from theft or fire. You can also easily encrypt ZFS pools, making it “safe” to leave even personal data on your offsite backup.
I use ZFS, yeah, but I'm not trying to sperg too much.
 
I use ZFS, yeah, but I'm not trying to sperg too much.
I think zfs send / recv might be the greatest command pair ever written in ass-saving backup history. ZFS is just bulletproof. It frustrates me when people recommend a snapraid + mergerFS solution. You're trying to reinvent ZFS in a shitty fucking way.

As I see it, ZFS abides by the policy of do one thing (filesystem) and do it well.

butter fs simps are malding
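
The part that makes it ass-saving is incremental sends; a rough sketch, with the snapshot and dataset names invented for the example:

zfs snapshot tank/data@monday
zfs send -i tank/data@sunday tank/data@monday | zfs recv backup/data

Only the blocks that changed since the previous snapshot go over the pipe, so a nightly or weekly replication stays fast even on a huge dataset.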
 
I think zfs send / recv might be the greatest command pair ever written in ass-saving backup history. ZFS is just bulletproof. It frustrates me when people recommend a snapraid + mergerFS solution. You're trying to reinvent ZFS in a shitty fucking way.

As I see it, ZFS abides by the policy of do one thing (filesystem) and do it well.

butter fs simps are malding
Over the years I've learned one thing: never back up something with the same something. So if the primary data is on ZFS, the backup shouldn't be ZFS.
Replication, off-site copies, sure, but not "backups".

This is why I use snapraid and mergerFS. Check checksums, copy a bunch of data, update checksums, and the array goes to sleep until the next time. I don't need real-time checksums on a backup, as it's not changing outside the backup window.
 
Right, thus the "Scrub" part before sending new data to the box.
There's actually no need for that. The checksum is verified as the data is read; if it comes back wrong, ZFS will proactively correct the error without you needing to do anything. The scrub is more so you can discover a failing drive before it turns into a failed drive, so you can do an online replacement instead of a risky resilver from parity (during which you'd have no failure resiliency).

Linus Tech Tips springs to mind. Those geniuses had a ZFS pool they never ever scrubbed. The first they knew there was a problem was when ZFS had encountered so many checksum errors from faulty drives that it failsafed the pool into read-only, at which point the first thing Linus did was force it back into read-write and run a long-overdue scrub through the also-faulty backplane, corrupting the disks that were still okay. If they had instead scrubbed the pool regularly, like every beginner's guide tells you to, ZFS would have alerted them to the growing problem probably literally years before it became an issue.
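
For reference, a regular scrub is a one-liner plus a cron entry; the pool name and schedule here are just examples:

zpool scrub tank
zpool status -x

zpool status -x prints "all pools are healthy" unless something actually needs attention, which makes it easy to script an alert. Dropping a line like "0 3 * * 0  root  /usr/sbin/zpool scrub tank" into /etc/crontab gets you the weekly scrub every beginner's guide asks for.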
 
There's actually no need for that. The checksum is verified as the data is read; if it comes back wrong, ZFS will proactively correct the error without you needing to do anything. The scrub is more so you can discover a failing drive before it turns into a failed drive, so you can do an online replacement instead of a risky resilver from parity (during which you'd have no failure resiliency).

Linus Tech Tips springs to mind. Those geniuses had a ZFS pool they never ever scrubbed. The first they knew there was a problem was when ZFS had encountered so many checksum errors from faulty drives that it failsafed the pool into read-only, at which point the first thing Linus did was force it back into read-write and run a long-overdue scrub through the also-faulty backplane, corrupting the disks that were still okay. If they had instead scrubbed the pool regularly, like every beginner's guide tells you to, ZFS would have alerted them to the growing problem probably literally years before it became an issue.
Right, that would be great if I were talking about ZFS and not snapraid.
 
Then you're clearly doing it wrong.
As I said, I don't do backups to the same flavor of storage the primary stuff is on. Therefore snapraid + mergerFS is as far as I can get from how my primary storage is set up while still having checksum and parity rebuild capabilities. A 25% scrub once a week, sync new files to it, update parity, and put it to sleep for another week. OK, I guess I could actually make the backup fileserver Windows; that would remove even more possible kernel-level bugs that could hit both systems at once. Or I could find a tape drive.
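
That weekly routine maps almost one-to-one onto snapraid's own commands (the 25% figure is the poster's own; the data and parity paths come from whatever is in your snapraid.conf):

snapraid scrub -p 25
snapraid sync
snapraid status

Scrub a quarter of the existing data against its stored checksums, hash the newly copied files and update parity, then do a quick status check before spinning the array back down.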
 
I use Arch as my daily. I run headless Raspbian and CentOS for a few servers. Despite the horror stories about Arch, I've never really had that much trouble with it breaking things. I think once an update screwed up my desktop environment, but it was a pretty quick fix that required a minimal amount of work to get going again. Other than that, most issues wind up being my own fault by way of something I did.

CentOS is the one that gives me the most headaches though, especially on ARM-based hardware. Lots of compatibility issues, and it's constantly years behind on fixes and updates other distros have long moved past. It's supposed to be the most 'stable', and yet I struggle to find instances where I'd prefer to run it over some other kind of distro, even for server and special-purpose machines that are just supposed to do a handful of things and nothing else.

CentOS was the favorite for people standing up large HPC clusters, because once you put one of those together, you don't want to ever update the OS if you can avoid it. It's far more important for nothing to be broken on a multi-million-dollar machine than for nothing to be old. However, that makes it not a good desktop OS.

It's been superseded by Rocky Linux.
 