Diseased Open Source Software Community - it's about ethics in Code of Conducts

The reason ZFS is not as popular as it could be on Linux is mostly because of the legal situation surrounding it that prevents it from being mainlined into the kernel. As a result, packaging it on its own is extremely problematic. The ZFS On Linux Team only builds new releases for LTS kernels, and if you're not on LTS there's no guarantee it won't just crash (or worse).

On Arch (which always uses the latest stable kernel), they have a separate package repo that contains the ZFS packages, and a group of people who do burn-in tests with the latest kernel releases to empirically check whether it's busted. When it breaks, you can end up blocked from upgrading or even installing new packages on your system for weeks to months due to how pacman works: partial upgrades are unsupported, so holding back the kernel for ZFS effectively means holding back everything.
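For reference, the out-of-tree repo gets wired in through pacman.conf; a minimal sketch of how such a setup usually looks (the repo name, URL, and kernel-pinning lines mirror the archzfs setup, but check the current archzfs docs before copying anything):

```
# /etc/pacman.conf (excerpt)

# Hold the kernel back so it can't outrun what the ZFS modules support
IgnorePkg = linux linux-headers

[archzfs]
# Third-party repo carrying the prebuilt ZFS packages
Server = https://archzfs.com/$repo/$arch
```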

But, if you're on an LTS kernel / distro that has good support, it works really well.
 
The reason ZFS is not as popular as it could be on Linux is mostly because of the legal situation surrounding it that prevents it from being mainlined into the kernel. As a result, packaging it on its own is extremely problematic.
I believe Red Hat does/did package it as a shipped precompiled binary ("It's not a CDDL violation that way!"). But Red Hat also has the money to fight off a proper lawsuit from Oracle (who bought Sun, and the ZFS copyrights with it), whereas most regular devs don't.

More worrying is (Open) ZFS native encryption being busted forever.
 
More worrying is (Open) ZFS native encryption being busted forever.
I've been using ZFS native encryption for years on both storage and root drives (root via ZFSBootMenu). I've never run into the zfs send/recv corruption issue, but I do manually make my snapshots and send them to backup drives. From what I understand the issue was related to multiple zfs sends running simultaneously?
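For context, that manual workflow is just recursive snapshots piped through zfs send/recv; a rough sketch (pool and dataset names here are made up):

```
# Snapshot the encrypted dataset and everything under it
zfs snapshot -r tank/data@backup-2024-01-01

# Raw send (-w) keeps the data encrypted in transit, so the backup pool
# never needs the encryption key loaded
zfs send -w -R tank/data@backup-2024-01-01 | zfs recv -F backup/data

# Later backups only send the delta between two snapshots
zfs send -w -i @backup-2024-01-01 tank/data@backup-2024-02-01 | zfs recv backup/data
```

Running sends one at a time like this also avoids the simultaneous-send scenario mentioned above.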

Looks like the main github issue has been closed. Is this still an issue?
 
I've been using ZFS native encryption for years on both storage and root drives (root via ZFSBootMenu). I've never run into the zfs send/recv corruption issue, but I do manually make my snapshots and send them to backup drives. From what I understand the issue was related to multiple zfs sends running simultaneously?

Looks like the main github issue has been closed. Is this still an issue?
I think they fixed some of the major ones, but there are still some serious bugs. Of course it's a Gentoo (my distro of choice) compile, so there's always the chance the user just misconfigured something (SAD! Many such cases!).
Immutable data corruption(?) after hitting #13709 #14166
This appears to be a scrub-related (possibly hardware-related too) issue, which worries me because I remember when Btrfs scrub would yeet your installs.
Just snapshotted my machine over to a virtual machine to see what a scrub would do, & by pure luck I can definitely note that scrubs are prone to causing this bug, at least if it's already hit the pool. Random files that were readable right before the scrub are made immutable & unreadable after the scrub.
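That kind of check boils down to hashing a file before and after the scrub; roughly (pool and file names are hypothetical):

```
# Record the file's hash while it's still readable
sha256sum /tank/data/somefile

# Run the scrub and block until it completes
zpool scrub tank
zpool wait -t scrub tank

# On an affected pool, the re-read now fails or errors out
sha256sum /tank/data/somefile
zpool status -v tank   # lists any files with permanent errors
```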

There's the one you mentioned that they've been trying to track down for a while. Might be related to the above, but they suspect it isn't fully fixed yet. I think someone opened a ticket about a regression recently, but it's no longer show-stopping with a panic and unrecoverable data, so that's good (?).
panic in zfs arc_release during zfs send of encrypted dataset
There's a few more hanging around:
Better than it was before, so I'm not complaining; at least they're working on it. Supposedly the OpenZFS foundation acknowledged these are major issues and is trying to get them all fixed.
 
If you're not aware, Broadcom purchased VMware (which Dell had previously spun off) and immediately tripled the prices for license renewals. They've since stopped allowing VARs to sell the licenses, and I've been told that even buying direct isn't an option anymore unless they deem you worthy.
Don't forget they removed the free ESXi version, then changed their minds just recently, probably because of users leaving for Proxmox.

I'd love to leave VMware and go to Proxmox at work; it would save us a ton of money with how expensive it's gotten. I think they even dropped their perpetual licenses, so you're stuck on subscriptions.

I'm still on Proxmox 8 at home. Anyone tried out the new version yet?
 
Networking is leagues better in BSD, particularly when it comes to state handling and routing. That's why it's a much better alternative to Linux when it comes to firewall and router appliances. Not to say that Linux can't do the job (see: OpenWrt), but BSD tends to be the preferable option there. Also, BSD's design philosophy results in a lower attack surface.
Not wanting to start a flame-war but can you expand on this?
Direct DMs are fine if you think it would derail the thread, I am just curious.

If anything, the problem with routing on Linux is that it changes too fast unless you work with it daily.
ipchains: lol grandpa, we switched to iptables after your third kid was born
iptables: lol we use nftables now, old man.

Every time I need to dive deep into packet-filtering, it seems like the command-line tools and the API have changed.
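To make the churn concrete, here's the same trivial "allow inbound SSH" rule in the two most recent generations (the nftables table/chain names are just conventional examples):

```
# iptables era
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# nftables era: new table/chain model, new syntax
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; }'
nft add rule inet filter input tcp dport 22 accept
```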
 
Not wanting to start a flame-war but can you expand on this?
Direct DMs are fine if you think it would derail the thread, I am just curious.

If anything, the problem with routing on Linux is that it changes too fast unless you work with it daily.
ipchains: lol grandpa, we switched to iptables after your third kid was born
iptables: lol we use nftables now, old man.

Every time I need to dive deep into packet-filtering, it seems like the command-line tools and the API have changed.
All good. Here are some deep-dive answers that can go way further into the technical weeds than I can.

Most of the benchmarks I can find are old, but one thing to consider is that the BSD network stack is very stable and very old, meaning it's had a lot of time to mature. The BSD network stack has been around for as long as Linux has existed in its entirety, while the Linux network stack has undergone a number of revisions and major changes over the years.

Again, this isn't to say that the Linux networking stack is bad. Just when we're talking about big network and security devices there's a better choice, which is usually BSD.
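For a concrete taste of the state handling being praised, a minimal pf.conf sketch (the interface name is illustrative):

```
# /etc/pf.conf (minimal sketch)
ext_if = "em0"

# pf is stateful by default: the outbound pass rule creates state,
# so reply traffic is admitted without an explicit inbound rule
block in on $ext_if
pass out on $ext_if

# Allow inbound SSH, creating state only on the initial SYN
pass in on $ext_if proto tcp to ($ext_if) port 22 flags S/SA
```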
 
This sounds like the Starsector mod shit all over again. Some 4channer made a rape mod to spite the modders, and the modders went overboard and made malware that would fuck with your computer if you had the mod.
That's what TurboDriver did to ColonelNutty, then tried to get a PA here too. What is it with people making addons that destroy your computer for having someone else's software installed?
 
Networking is leagues better in BSD, particularly when it comes to state handling and routing.
My only complaint with BSD and networking is that it's so antiquated that in TrueNAS Core they default to NewReno for TCP congestion control. It creates a horrible sawtooth effect when streaming a large amount of data over a high-latency link, so at first my ZFS syncing was a non-starter. After I realized what was going on, I changed it to CUBIC, which doubled the maximum throughput and made the sawtooth effect far less pronounced.

Anyone running a BSD server should look at what net.inet.tcp.cc.algorithm is set to, and if it's newreno, you should consider adopting another algorithm. CUBIC is a decent all-rounder and the default in Linux and recent versions of Windows.
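On FreeBSD-based systems (TrueNAS Core included), checking and switching looks roughly like this; cc_cubic is the module name FreeBSD uses:

```
# See what's active and what's available
sysctl net.inet.tcp.cc.algorithm
sysctl net.inet.tcp.cc.available

# Load CUBIC and switch to it
kldload cc_cubic
sysctl net.inet.tcp.cc.algorithm=cubic

# Make it stick across reboots
echo 'cc_cubic_load="YES"' >> /boot/loader.conf
echo 'net.inet.tcp.cc.algorithm=cubic' >> /etc/sysctl.conf
```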
 