Yep. From version 3.8 on, Gnome required systemd-logind for user session management. They eventually dropped the hard requirement in 3.30, in 2018, but that's still 5 years of a hard dependency on a single init system for absolutely no reason. There are still some gnome-extras packages that pull in logind as a hard dependency.
Backups were actually better. Shit like DATs and mini-DATs were common. Now, outside of an industrial setting, you barely have any choices other than using other hard drives, trash like optical media, etc. It used to be a snap to set up a tape backup with full and incremental backups. You can still get those, but not at any reasonable price.
Now one of your best bets is operating out of a data center and having RAID, but as we just saw recently with the Farms, literally the entire RAID can simultaneously fail and you're fucked.
We are still slinging LTO tapes at work for offsite backup. The tapes themselves aren't that expensive. Retail on a 15tb tape is only around 50 bux. It's the drives that are big money.
Ya can have the best RAID in the world, but if your OS fucks up or you get a virus/rooted then you're still screwed without a real backup.
I'm no ZFS guy, but I've been using LVM with RAID6 for over a decade now. Same deal. I've watched drives fail, messed with them as they failed, then replaced them once I got too irritated. Effortless, without data loss.
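If anyone's curious, the whole lifecycle is only a few commands; this is just a sketch, and the VG/LV names are placeholders rather than my actual setup.
Code:
lvcreate --type raid6 -i 4 -L 20T -n data vg0           # create the raid6 LV (4 data stripes + 2 parity)
lvs -a -o name,sync_percent,raid_mismatch_count vg0     # watch resync progress and health
lvconvert --repair vg0/data                             # rebuild onto a spare PV after swapping the dead drive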
While the simulfail that Josh experienced is still possible on any RAID configuration, and backup must take this into account, the more heterogeneous your drive config, the less likely the failures will be simultaneous.
It really seems more like hardware RAID is the problem. If you need the write speed, so be it, go with hardware RAID. (Though yeah, gotta do your homework and be careful.)
But otherwise I think software RAID / ZFS will probably do for basically all home situations, and probably many server situations.
Used to run FreeNAS but wanted Docker as well on the same server. Ended up just migrating to Debian, installing ZFS on it, and running zpool import. It worked like a charm, the import was smooth as could be. You don't get a fancy webgui, but I'd rather do 99% of things via cli anyways.
My attitude was that I'd start with FreeNAS and move to a real linux distro if I ever wanted to run anything more complicated. It's been a few years, haven't had much of a reason for that.
I'm pretty much just using it as big network drive.
The webgui is pretty decent, but sometimes it fucks some things up. I did need to ssh in to swap out a drive once.
Had to set up a scrub with cron. Running RAID6 with 12 used 4TB enterprise HGST/Hitachi drives for almost a decade at this point. It just werks. Had two drives fail, both were smooth to replace.
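For anyone wanting to do the same, the cron bit is just something like this (pool name is a placeholder):
Code:
# /etc/cron.d/zfs-scrub -- monthly scrub on the 1st at 3am
0 3 1 * * root /usr/sbin/zpool scrub tank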
That's actually one of the benefits I liked from FreeNAS. All of the maintenance tasks are set up for you. I was reading "resilver? oh shit, do I need to do that", and thankfully that's already set up by FreeNAS.
Though I'm sure I could figure it out (especially if there's a good arch wiki article on it), a turnkey solution is nice if you don't need anything special.
I was wondering about this. I hear getting bigger drives is easy enough, but adding more drives is a pain in the ass. Did you end up going for a second vdev?
I think I'll probably just double the size of my drives at some point.
Without a cache, it maxes my 1gbe LAN, so I can't complain. 10gbe sounds nice, but until I can get a used enterprise-grade 10gbe with PoE for a decent price I'll hold off.
Yeah, I've got 10gbe around my house. Can Linux's networking stack handle 10gbe well? I'm guessing you might need to fiddle with jumbo frames and other configuration?
RAID6 is almost a must with 8tb+ drives. There is a non-zero chance that another drive fails while resilvering, and resilvering is going to put some wear on your drives. Given that you can get 20tb+ drives now, it would be stressful resilvering a full pool without an LTO backup or another local backup.
Yeah, I had that fear. I bought most of my drives around the same time and from the same provider, so they've been on basically the same load for the past few years. If one dies, it'd be just my luck that another would shit out immediately after.
Things really are less simple nowadays with these large drives. I think there's probably a lot of people who realize what a pain in the ass it is to do proper backups, so they think "well, what are the chances...?"
I'm trying to figure out a good way to back up my home directory to my NAS.
So my home directory is way too beefy at 400+gb. I don't keep movies or music there, that's why I have the NAS. So it's supposed to be mostly code and maybe some lolcow content (which I could move to the NAS too, although I guess then anyone getting on my wifi will be wondering why there's a network drive with videos of screeching trannies and feminists).
But I probably do have some AI training set data somewhere that I don't want to delete. And maybe some movies that I was reencoding or clipping or something. Anyway, I know I need to clean it up. I'll go through it with baobab at some point.
A few weeks ago, I had a major ssd failure (the one containing this home directory). I started getting shittons of write error messages, it was tricky to boot, and after 3-4 boots that afternoon it wouldn't boot at all. In live distros I couldn't mount anything. I was worried about total drive failure.
My last home backup tarball on my NAS was from 2021, so I'd be pretty buttblasted if this was it.
Thankfully, I managed to rescue an image from the drive with ddrescue. (Btw, ddrescue is a very well designed, effective tool; I'm glad I got the chance to use it.) The partition table was actually usable, so I could mount it (lol, I mounted the FreeNAS smb share where the image was, then mounted the image off of that with a loop device; the performance wasn't actually half bad) and copy everything over to a fresh arch install on a replacement drive I ordered with same-day delivery.
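Roughly what the rescue looked like, from memory, with device names and paths approximated:
Code:
mount -t cifs //freenas/backups /mnt/nas -o username=me    # the smb share where the image ended up
ddrescue -d /dev/sdX /mnt/nas/ssd.img /mnt/nas/ssd.map     # image the dying drive; the map file lets ddrescue resume/retry
losetup -fP /mnt/nas/ssd.img                               # exposes the image's partitions as /dev/loop0p1, /dev/loop0p2, ...
mount /dev/loop0p2 /mnt/rescue                             # mount the old home partition off the loop device
rsync -aAX /mnt/rescue/home/me/ /home/me/                  # copy onto the fresh install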
So that crisis was averted.
I need:
to be able to regularly back up 400gb (I know I need to clean it up)
would like to preserve permissions (just a convenience thing, but I'm sure a lot of the dotfile directories that software development likes to lean on nowadays want proper permissions)
would like it to be incremental, daily, but without using up a shitton of space (ie 600gb is fine, but I don't want 400gb * 30 days worth)
I don't really need a history of backups. I don't care how my projects folder looked last week if I've got a backup from yesterday. If I care that fine grained about something, I just use git.
I know gnu tar supports incremental dumps. I can probably go with a full backup every month, and then daily incremental backups, and reset every month.
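Something like this is what I have in mind for the tar route; paths are made up, and the monthly reset is just deleting the snapshot file so the next run is a full dump:
Code:
SNAR=/mnt/nas/backups/home.snar
[ "$(date +%d)" = "01" ] && rm -f "$SNAR"                  # reset on the 1st so that day's run is a full backup
tar --listed-incremental="$SNAR" -czf \
    "/mnt/nas/backups/home-$(date +%F).tar.gz" /home/me    # daily runs only grab what changed; extract with -xpf to keep permissions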
But is there anything better? My ideal solution would be just updating the same tarball every day, but I can't find good approaches to that.
This project, ratarmount, almost sounds too good to be true, except it's written in python. I might give it a try anyway.
My attitude was that I'd start with FreeNAS and move to a real linux distro if I ever wanted to run anything more complicated. It's been a few years, haven't had much of a reason for that.
I wanted to consolidate hardware, so moved all of my NAS and Docker to my old dual E5-2670 server with 96gb of RAM. Didn't make sense to have two servers running, when one did the job just fine. Now run almost 30 docker containers (2/3 are dependencies/databases/etc) on it without it breaking a sweat. Can't imagine messing around with 10+ jails and BSD on top of that. Jails are a more specialized skill vs docker in most of the IT world.
To migrate the pool from FreeNAS, I think all I did was run zpool import and zfs mount, then I was done.
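For anyone doing the same migration, it's roughly this (pool name is a placeholder):
Code:
zpool import           # lists any pools found on the attached disks
zpool import -f tank   # -f because the pool was last imported on the FreeNAS box
zfs mount -a           # mount all datasets if they didn't mount automatically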
Though I'm sure I could figure it out (especially if there's a good arch wiki article on it), a turnkey solution is nice if you don't need anything special.
It's straightforward, from what I'm seeing and what I recall:
Code:
zpool offline pool_name device_name
# disconnect the old drive and connect the new one
zpool replace pool_name device_name
I was wondering about this. I hear getting bigger drives is easy enough, but adding more drives is a pain in the ass. Did you end up going for a second vdev?
More drives is kind of a PITA. I planned on expanding when I bought my DAS, so I used 6 of the 12 bays initially. Then when I wanted to upgrade, I added the other 6 drives as a second vdev and attached it to the pool. ZFS is one of those things that needs to be planned out well before executing. I think my biggest problem when upgrading was remembering all the terms like pool, vdev, etc., since I never touch them until I need to do something, which can be years apart.
Larger drives should be as simple as replacing a failed drive. Just swap them one at a time until the whole vdev is done. The next drive of mine that fails will probably be replaced by an 8tb drive, then I'll replace the rest slowly, or all at once if I'm desperate for space.
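The one-at-a-time upgrade is roughly this, repeated per disk (pool/device names are placeholders):
Code:
zpool set autoexpand=on tank            # let the pool grow once every disk in the vdev is bigger
zpool replace tank old_disk new_disk    # swap one drive, wait for the resilver to finish, then do the next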
Yeah, I've got 10gbe around my house. Can Linux's networking stack handle 10gbe well? I'm guessing you might need to fiddle with jumbo frames and other configuration?
Works fine for me without jumbo frames or any tuning.
The dd is basically the limit of the array on the file server.
The iperf could get a bit faster, but it's really not worth the tuning for me, so I can just run one big happy non-jumbo VLAN.
Code:
$ dd if=big_file_on_nfs.mp4 of=/dev/null bs=8192k count=2048 status=progress
16609443840 bytes (17 GB, 15 GiB) copied, 19 s, 874 MB/s
2048+0 records in
2048+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 19.6331 s, 875 MB/s
$ iperf3 -c server
Connecting to host server, port 5201
[ 5] local 1.2.3.4 port 56024 connected to 1.2.3.5 port 5201
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.9 GBytes 9.38 Gbits/sec 2418 sender
[ 5] 0.00-10.04 sec 10.9 GBytes 9.34 Gbits/sec receiver
My backup scheme is the big-ass file server to the bigger-ass file server where all my old drives go. Main file server is RAID6, backup is JBOD+MergerFS+SnapRaid and periodic rsync backups.
Critical stuff is rsynced to a pair of veracrypt-encrypted 4TB drives in a pelican case that gets left at my storage unit a few miles away. There are two pairs, so when I update one pair I take it off-site and bring the other back.
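The periodic pass is nothing fancy, roughly this (mount points are made up):
Code:
snapraid sync                                                  # refresh parity on the JBOD box
rsync -aHAX --delete /mnt/pool/critical/ /mnt/veracrypt1/      # mirror the critical stuff onto the encrypted drive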
Generally a root distro like Debian or Archlinux will have the most eyes on it and so would pass scrutiny. The further down the chain you go the more opportunities for tampering.
Yeah, I had that fear. I bought most of my drives around the same time and from the same provider, so they've been on basically the same load for the past few years. If one dies, it'd be just my luck that another would shit out immediately after.
I was taught long ago by old school shizo admins that when building an array you should never buy drives from the same batch/brand/model/store, etc. Sounds nuts but "drive to the other side of town if you have to".
For a corporate gig on enterprise drives, who cares, but for home use, yeah... I did buy a pair of WDs once for a home server, had one fail, and then the second one failed during the rebuild. Both were fully tested and had a few hundred hours of 0's and 1's.
4tb and 8tb drives are quite reasonable for arrays, but with drive sizes like 20tb and 22tb+ you're looking at rebuild times spilling into days, not hours. That's ample time for a hypothetical defect in the same batch to present itself while you're rebuilding, especially if all the drives have the same hours and wear on top of being the same brand/model/batch. Nothing's more comfy than a new RAID array with 100% matching drives, right down to the cylinder geometry, but it can also be theoretically catastrophic, and it's avoidable.
Was never a fan of hardware RAID tbh, but my PTSD is mostly from people using the onboard controller in a small setup: when the board needs to be changed (voluntarily or not), you'd better pray the new controller works with the array, or worse, doesn't mess with it. Same reason I'd always advise using an external card, since you can at least migrate that (unless it's the card that fails for some reason).
The drive trays for my server come in next week. I'm planning on just using JBOD and mergerfs; that way if a drive fails I only lose whole files while the rest stay intact and accessible, and then I can just redownload the lost files from the internet.
But then, for my needs my server is just an organized local cache for data readily accessible on the internet
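If it helps anyone, the mergerfs side is basically one fstab line; the paths and options here are just an example, not my final setup.
Code:
# /etc/fstab -- pool two JBOD disks into one mount
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0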