Why can't a normal RAID or ZFS array be used? When one drive has a problem, just swap it for another drive.
I think "bit rot" has already been mentioned, but RAID-5 (and derivatives) aren't reliable anymore beyond a certain volume size, because individual disks have gotten so large that reconstructing an array onto a fresh disk takes a long time. I'd need to go digging for it, but there was some substantial research a few years ago that concluded there are a couple of "points of no return" (in both disk count and total capacity): if you lose a disk in an array with more than X disks or more than Y TB of capacity, there's something like a 90%+ chance of a second disk failing during reconstruction, because the rebuild puts extra stress on the remaining disks, which have to rescan their entire data set. At that point the volume's data is lost unless you've got dual redundancy, and even that doesn't improve the numbers much.
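The intuition behind that "point of no return" is just compounding probabilities. Here's a back-of-envelope sketch: if each surviving disk independently has some probability p of failing during the rebuild window, the chance the rebuild loses at least one more disk is 1 - (1 - p)^(n - 1). The p value below is an illustrative assumption, not a measured figure from the research mentioned above.

```python
# Rough model: probability that a RAID-5 rebuild loses a second disk,
# assuming each of the (n - 1) surviving disks independently fails
# during the rebuild window with probability p_per_disk (made-up value).
def rebuild_failure_prob(n_disks: int, p_per_disk: float) -> float:
    return 1 - (1 - p_per_disk) ** (n_disks - 1)

# With an assumed 5% per-disk failure chance during the rebuild,
# watch how fast the risk grows with array size:
for n in (4, 8, 16, 32):
    print(f"{n:2d} disks -> {rebuild_failure_prob(n, 0.05):.1%}")
```

Real per-disk failure probability during a rebuild depends on rebuild duration, unrecoverable-read-error rates, and disk age, but the compounding shape is the point.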
It sucks, because it's a hard problem to solve. Even the big boys haven't found a better solution than "just spam at least three copies of every piece of data onto different hosts in different data centers, and clone it again if a volume fails somewhere." Fun fact: if you load up a YouTube video on good bandwidth and notice it's sluggish to start playing (and reloading doesn't help), it's probably because you've lucked upon a video stored in their object store that's just (within minutes, or even seconds) lost a volume, and it's scrambling to find another copy of some (or all) of its chunks elsewhere in the store.
So far, the only approach showing real promise for online data reliability without simply storing three copies of everything (a massive waste of storage capacity) is something like
Ceph and its no-longer-experimental erasure coding mode. It uses the same basic parity technique as RAID-5, but because a typical Ceph cluster spans hundreds (or thousands) of disks across multiple nodes, and naturally distributes slices of everything (including parity slices) evenly among all available nodes, a single disk failure makes the entire cluster immediately self-heal to recover the missing data and parity slices. Each disk only has to read the bits the cluster knows it needs, so recovery doesn't hit the cluster with major stress.
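To make the "same basic parity technique" concrete, here's a minimal sketch of the XOR parity idea RAID-5 (and the simplest k+1 erasure code) is built on: the parity chunk is the XOR of all data chunks, so any single lost chunk can be rebuilt from the survivors. The chunk contents are made up; real systems like Ceph use Reed-Solomon codes that tolerate more than one loss, but the principle is the same.

```python
# XOR parity: lose any one chunk (data or parity), rebuild it by
# XOR-ing everything that survived. Example data is arbitrary.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_chunks = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three disks
parity = reduce(xor_bytes, data_chunks)     # stored on a fourth disk

# Simulate losing chunk 1, then rebuild it from the survivors + parity.
survivors = [c for i, c in enumerate(data_chunks) if i != 1]
rebuilt = reduce(xor_bytes, survivors + [parity])
assert rebuilt == data_chunks[1]
```

The difference in a Ceph-style cluster is placement: those four slices land on four different disks on different nodes, so a rebuild pulls a little data from many disks instead of rescanning a few disks end to end.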
Of course, setting up a Ceph cluster is more complicated (speaking from experience) than buying a Synology NAS and stuffing disks in it.