Open Source Software Community - it's about ethics in Code of Conducts

I never understood why people used docker so much until I read about guix and nix itt. As much as docker is bloat I’m allergic to, this solution sounds worse.
Containerization is useful for some cases but it gets way overused. Jails (BSD) and cgroups/namespaces (stuff like Firejail on Linux) are a far lighter way to securely compartmentalize processes, without the bloat and overhead of duplicating an entire operating system installation that current containerization solutions require. You have to have a separate rootfs for every container, either from an image that gets unpacked or from a permanent on-disk rootfs installation, similar to a regular VM, like in LXC.

The problem is that this wastes space (hundreds of megs for a minimal OS rootfs just to run a single binary) and introduces massive security problems because of the way these rootfs images are distributed: nearly always precompiled rootfs tarballs you have no control over, just downloaded from some cloud server with no integrity checking beyond an https connection and some checksumming. A massive hole for supply chain attacks. Sure, you can build your own images from the ground up, but almost nobody does that; you can host your own registry server to keep images in, and you can enable signing and signature checking (I'm talking primarily about Docker), but that's a ton of work for a solution that didn't need to exist in the first place.
If you have the source to a program, and the program is well written, you could compile it to run against the libraries already present on your root OS installation, and just use jails or cgroups to isolate it, eliminating the entire supply chain attack surface and the bloat that rootfs images bring. But a lot of programs are written like shit, have overly strict dependency version requirements, are written in shit languages that require their own package managers and package distribution systems (like Rust, NodeJS, Golang, ...), or have other problems that all come down to the program not being written in a portable manner, or not being entirely open source in the first place.
The solution to that is to only use FLOSS programs that can run on the OS that you're running.
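Rough sketch of what I mean, with plain namespaces or firejail instead of a rootfs image (the binary path here is just a placeholder):

# run a host-compiled binary in its own mount/PID/network namespaces;
# no separate rootfs, it just uses the libraries already on the system
sudo unshare --mount --pid --fork --net --mount-proc /usr/local/bin/mydaemon
# or let firejail do the same with a throwaway home dir and no network
firejail --private --net=none /usr/local/bin/mydaemon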
 
With docker you know where all the important bits are and have control over what files it has access to. It does have its benefits
Other containerization or jail systems allow that too. Docker just became the most popular because they put out the most advertising. Docker just uses cgroups underneath anyway; it's just a frontend. Cgroups are part of the kernel.
Edit: cgroups, namespaces, and mounts (overlayfs for the image layers, bind mounts for volumes). It also has things like swarm that are aimed at large clusters of servers, not useful for single-machine work.
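You can poke at the same primitives by hand; roughly like this (all the paths here are made up, just to show the idea):

# what docker's storage driver does, more or less: stack read-only image
# layers under a writable layer with overlayfs
sudo mount -t overlay overlay \
  -o lowerdir=/img/app:/img/base,upperdir=/ctr/upper,workdir=/ctr/work /ctr/merged
# and resource limits are plain cgroups (cgroup v2 shown here)
sudo mkdir /sys/fs/cgroup/mycontainer
echo 500M | sudo tee /sys/fs/cgroup/mycontainer/memory.max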
 
With docker you know where all the important bits are and have control over what files it has access to. It does have its benefits
Look, I just have a homelab. I don't run anything at scale. The only time I would ever use containers is if it's some program that is AIDS to install/use otherwise. My homelab runs on a Gentoo box that I haven't touched in years. It just werx. My use case is not complicated enough to not just install most things on the main system.

But if it were more complicated: having seen what it takes to deal with a more complicated dependency tree in a system like guix, it's no wonder more people use it. And yes, it is a laziness enabler, but until recently, when RAM became the new bitcoin, it was probably a time/cost saving to just do it that way, so I get it. But like I said, the notion that you have to give a single app an entire OS install just offends my Atari 8-bit owning inner child. 48 fucking kilobytes. And it entertained me for hours and hours.

Seems to me the real problem is undercooked jeetware that is too lazy to implement its own stuff and so it sews together 8 gorillion packages of other jeetware. It's jeetware all the way down, isn't it?
 
Look, I just have a homelab. I don't run anything at scale. The only time I would ever use containers is if it's some program that is AIDS to install/use otherwise. My homelab runs on a Gentoo box that I haven't touched in years. It just werx. My use case is not complicated enough to not just install most things on the main system.

But if it were more complicated: having seen what it takes to deal with a more complicated dependency tree in a system like guix, it's no wonder more people use it. And yes, it is a laziness enabler, but until recently, when RAM became the new bitcoin, it was probably a time/cost saving to just do it that way, so I get it. But like I said, the notion that you have to give a single app an entire OS install just offends my Atari 8-bit owning inner child. 48 fucking kilobytes. And it entertained me for hours and hours.

Seems to me the real problem is undercooked jeetware that is too lazy to implement its own stuff and so it sews together 8 gorillion packages of other jeetware. It's jeetware all the way down, isn't it?
It's not that much RAM usage: if a container is competently made, it shouldn't be like an entire VM with all the system daemons, it should be one or two processes at most (the actual application and maybe a minimal init process). Containers can be used to run entire VM-like OSes in them, but that's considered bad practice, at least in the Docker world.

LXC takes the exact opposite approach: it runs containers like VMs. So there is an entire init system that boots everything, usually some gettys, cron, a syslog daemon, an ssh daemon, and so on. There it's understandably more bloated in the RAM department. But LXC treats each container like a permanent installation: it gives each one its own root folder on the hard drive, and when you spin up a new container, even if it's from a template, it just unpacks the entire rootfs into that folder and boots it. It doesn't do any layering like Docker does, so again it's more like a traditional VM.
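Roughly what spinning one up looks like (template args from memory):

# lxc downloads and unpacks a full distro rootfs into /var/lib/lxc/<name>/rootfs
sudo lxc-create -n web1 -t download -- -d debian -r bookworm -a amd64
sudo lxc-start -n web1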

The problem with Docker, and the way most people use it, is that they use premade images from Docker Hub. They just trust that this precompiled rootfs image they pull from the Hub, made by god knows who, is actually free from malware and hasn't been tampered with anywhere on the path from the developer or build server that compiled it to you. The only thing guaranteeing that is the image's layer checksums, which are themselves downloaded over https. So no authenticity or end-to-end tamper-proofing at all. You're basically back in Windows, downloading a program from the internet and running it: you didn't compile it, you don't know what it's doing.

And even then, you now have to download and store all the rootfs layers of all the images on your filesystem. One image is based on Ubuntu 24.04, another on Debian 12, a third on Alpine: it has to download them all, then unpack and mount all the layers to get the final container rootfs. This is where all the bloat is.
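You can see the pile-up for yourself (the image name is just whatever you happen to have pulled):

# disk taken up by images, layers, containers, and volumes
docker system df
# the individual layers a single image drags in
docker history ubuntu:24.04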

There is a way to do Docker containers right: make your own images from scratch, you can bootstrap them from your own system. But nearly nobody does it, as it's extra work that almost no company wants to pay for. If you're doing it for yourself you can spend the time, but a company that just prioritizes saving money and doing things as fast as possible is going to go the quickest and dirtiest way and not care about the potential security implications.
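For anyone curious, the from-scratch route is roughly this (myapp is a placeholder and assumed to be statically linked):

# no base image pulled from anywhere; the image is just your own binary
cat > Dockerfile <<'EOF'
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
EOF
docker build -t myapp .
# or bootstrap a rootfs with your own distro tooling and import that
sudo debootstrap stable ./rootfs && sudo tar -C rootfs -c . | docker import - my-base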

Personally I play around with firejail on Gentoo; I've managed to firejail a browser with clipboard sharing, not much more yet. At work we just use Docker with images based on stuff pulled from the Hub and I hate it. I wish I were paid to make my own, but alas.
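For reference, that's roughly this (the exact profile and options differ per setup):

# give the browser a throwaway private home dir; the X11 socket stays shared
# by default, which is what keeps the clipboard working
firejail --private firefox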
 
There was/is a company that specialized in application containers, just the app and dependent libraries and not all the cruft from starting from an OS.

Ahh, here it is, Chainguard's distroless. And others I'm sure.
 
There was/is a company that specialized in application containers, just the app and dependent libraries and not all the cruft from starting from an OS.

Ahh, here it is, Chainguard's distroless. And others I'm sure.
There are appimages and snaps. But they're all bloated and not liked by the purists.
 
Loving the salt from the Linus Tech Tips new Linux challenge video.

I can grab screenshots if there's interest, but it's mostly random Linux fans mad about it and going off in various comment sections. Their general arguments are the same:

  • LTT is scum and shouldn't ever touch Linux!
    • also he should use his platform to promote Linux! (This reminds me a lot of a similar outrage to the PewDiePie video)
  • He used the wrong distro!
  • He should just wait a few months/years until PopOS is in better shape.
  • He should use his industry contacts to fix all these issues for him (7:05 in the original video answers this question.)
  • Luke is the GOAT! He chose the right distro, didn't have any major problems.
    • Right until he chooses GRUB during install. Again, this is called out in the video when people gave him shit for choosing Mint in the previous challenge.
  • And the classic "he shouldn't want to play those games or use that software/hardware". Some even go so far as to say that wanting to watch video, work, and/or game is an outrageous ask. (What do Linux people do on their machines then?)
Linux people living up to the stereotype as always.

There's some react videos out there already. Some aren't bad tbh, but some are horrible. A favourite is a guy whose answer to PUBG not having Linux support is to use GeForce Now. (GeForce Now is widely considered shit, and PUBG isn't even on GeForce Now.) One I didn't watch has issues running a YouTube video in full screen.
Linux users and not knowing that telling people to RTFM whenever they ask a question is toxic and why nobody likes their community. Name a more iconic duo.
 
There is a way to do Docker containers right: make your own images from scratch, you can bootstrap them from your own system. But nearly nobody does it, as it's extra work that almost no company wants to pay for.
Speaking of things that Nix/NixOS is brilliant at…
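For example, a sketch from memory (the exact attribute names may be slightly off):

# build an image containing only the hello package and its closure,
# no distro rootfs, everything pinned by nix
nix-build -E 'with import <nixpkgs> {};
  dockerTools.buildImage { name = "hello"; config.Cmd = [ "${hello}/bin/hello" ]; }'
docker load < ./result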
 
Another thing I've encountered is that some ISOs aren't bootable from USB, only from a CD. Yes, that's retarded, but I've had some firmware update discs for network cards and servers be like that. I found that you can sometimes make those ISOs bootable from USB with a tool I can't remember the name of.
You're thinking of geteltorito.
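Something like this, if memory serves (the ISO name is just an example):

# pull the El Torito boot image out of the ISO, then write that to the stick
geteltorito -o boot.img firmware-update.iso
sudo dd if=boot.img of=/dev/disk/by-id/usb-crappy-flash-drive bs=1M oflag=direct status=progress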
 
Linux users and not knowing that telling people to RTFM whenever they ask a question is toxic and why nobody likes their community. Name a more iconic duo.
I've never understood why people are so hostile to reading a manual, especially if it's a good manual. If your car is fucked, you pull out the manual and read it to understand the problem. If an appliance doesn't work like you expect it to, you pull out the manual and read it. It's not an absurd thing to do to ask people to understand the devices they use.

Of course, the actual reason why this is such an offensive ask to people is that most people are genuinely just goy and want nothing more than to be butt-fucked in the ass by something they can't understand in the name of "convenience", but those people are honestly a lost cause and every time they whine about their masters doing something they don't like, I can't help but roll my eyes. They actively build their lives around publicly traded companies whose whole goal is to butt-fuck them for an extra 0.3% growth in shareholder value, and then they're shocked when they get BJC (big jeet code) rammed right into their daily-driver operating system. It's honestly quite Sad!, but expected out of your average person.

The only time RTFM is just bad advice is when you're using dog shit software that doesn't have good documentation, which is not a problem unique to anything Linux related (ever tried to debug a niche Windows problem? It's agony. Woe upon those who have to drudge through Microsoft documentation and the official Windows support forum).

As an aside, I'd argue that good documentation is far more important than good code (insofar as you take the principle reasonably). I'd rather use a more complex system with a damn good manual than some valid trans girl's "rust-^.^-boymoder-estrogen" cargo library that runs as fast and safely as possible but has like 3 examples of its usage. Software is for people to use ultimately and if its bad for a person to use, it's a bad piece of software.

The fact a "documentation first" mentality isn't the norm is a testament more towards the terminal flaw of all FOSS advocacy: Every single one of its members has high-octane Aspergers syndrome, usually with another dollop of a lack of self-awareness on top of it.
 
As an aside, I'd argue that good documentation is far more important than good code (insofar as you take the principle reasonably). I'd rather use a more complex system with a damn good manual than some valid trans girl's "rust-^.^-boymoder-estrogen" cargo library that runs as fast and safely as possible but has like 3 examples of its usage. Software is for people to use ultimately and if its bad for a person to use, it's a bad piece of software.
When you git gud, you don't need any documentation, you just go read the source and figure out what the thing does yourself. Eventually.
Joking aside, yeah, documentation is important. I don't document every function or line of code though. The code you write should be self-explanatory. Add a comment where things get confusing, to explain the data structure of stuff, or to document the inputs and outputs of complex functions.
Writing a good manpage is more important than documenting the code.
 
I never understood why people used docker so much until I read about guix and nix itt. As much as docker is bloat I’m allergic to, this solution sounds worse.

Less jokingly, you can think of it as the logical continuation of static linking.
 
I remember reading that Ventoy was made by a shady profile that couldn't be trusted, that something was wrong with it; I can't remember exactly what, but I don't use it because of that. I just bought a box of dirt cheap USB drives and dd ISOs directly to them. How hard is it to remember
dd if=file.iso of=/dev/disk/by-id/usb-crappy-flash-drive bs=1M oflag=direct status=progress
The only time it failed me was when a laptop's UEFI didn't support whatever filesystem the Windows ISO came with, and FAT32 has a 4GB file limit, which means manually splitting the WIM image file.
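The split itself is one command with wimlib, if I remember right (the size is in MiB):

# split install.wim into sub-4GB .swm chunks so they fit on FAT32
wimlib-imagex split sources/install.wim sources/install.swm 3800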

As for using cat to copy the image directly to the USB drive, the reason it fails is that shitty USB drives just stop responding after chugging along at 2MB/s for 15 minutes if you don't control the write/sync sequence. It's also hard to tell whether cat has truly completed or whether the data is still only sitting in the cache.
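If you do go the cat route, the fix is just forcing a flush at the end, something like:

# cat returns as soon as the data is in the page cache; sync blocks until
# it has actually hit the flash drive
sudo sh -c 'cat file.iso > /dev/disk/by-id/usb-crappy-flash-drive && sync'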
 
I don't document every function or line of code though. The code you write should be self-explanatory. Add a comment where things get confusing, to explain the data structure of stuff, or to document the inputs and outputs of complex functions.
This mentality usually works for small projects, but the second anything involves more than a single person or gets too big, I always start to get skeptical of light documentation or the concept of "self-explanatory" or "self-documenting" code.

As a professional stance, I always tell my team to document their classes, functions, and non-internal variables as a rule, but the effort put into the documentation may be proportional to the expectation that other developers will use said documented code.

This is namely because other developers tend to have different definitions of what is and isn't self-documenting, especially as they become more tenured in a code base and understand its ins and outs. A developer who has spent years in a code base is likely to know the general "patterns" of development and see as obvious things that a programmer new to the code base, even a more experienced one, wouldn't be clued in to.

That's why I'm very adamant about documentation: it lets people get a general gist of the intent, patterns, and functionality by letting a programmer highlight what is important about a piece of code to other programmers. Even the more banal functions help establish a program's internal "pattern" of behavior, so making my programmers document something, even if it's minimal, always helps inform other programmers about what is and isn't valuable to know about a given function.

You wouldn't believe how many times being this aggressive about documentation has saved me and my team. Most of my developers can be thrown deep into a system they don't understand and eventually get the hang of it, but with the aggressively documented systems they tend to return from the jungles of the code base with a fixed bug or new feature much quicker. It's the difference between months and a week.

tbh this post is more so me info-dumping about my work than anything relevant, but I figured I'd give some anecdote on why I value documentation more than code in most circumstances.
 
And it doesn't work half the time in my experience, and for some reason is godawfully slow. I mentioned dd in my post and how unsatisfied I was with its performance (even though something that old should be completely reliable).
That's because dd isn't meant for writing disks like that. The obsession with dd comes from god only knows where, but its original purpose was copying data in a specific block size from one file to another, which made it useful a long time ago when such things were much more tightly constrained. It has nothing to do with disks specifically and these days only slows things down, but I guess it sticks around because it lets you sudo a single command, even if it's a needlessly complicated one. Any, and I mean any, file utility is better for copying an ISO to a USB stick. My preference is sudo sh -c 'pv mystupidimage.img > /dev/sda' which is essentially just cat with a built-in progress bar.
 
There shouldn't be a large speed difference between dd and copying the file to a filesystem on the same flash drive; if anything, dd should be faster. Did you not use oflag=direct, and were you looking at the progress stalling to almost 0? Then you were watching the in-memory cache fill up (usually very fast, at GB/s) and then stall until new cache space frees up as the cache gets flushed to the block device at the device's real write speed. The complete write should take about the same amount of time either way, or even less with dd.
I'd agree, there are probably a dozen different things I did wrong, but the dude I was replying to gave a nearly perfect command line that should work all the time, which I actually had as a two-letter alias, and yet it somehow managed not to work much of the time.

Something as fundamentally simple as "turn this ISO into a thumb drive that literally everyone in the world has into something that can boot" shouldn't require this level of bullshit.

So far as I can tell the main objections to Ventoy are the ever present "binary blobs" issues. Well, I hate the general concept of those. But having an Nvidia card, if I want that to work more than half as well as it can, I'm sucking down the dick of a binary blob right there, because the open source drivers blow. Can I audit their security? No. Do I want CUDA to work? Yes. So I gotta suck Satan's cock for that.

Do I want to spend three days figuring out what exact shit I have to tell dd to get it to work, because it takes an hour or so for it to crash for no comprehensible reason? Not really. If I can just drag an ISO to a drive and there it is in a menu, I'm okay with it. Maybe I'm retarded and it's a huge gayop to compromise every distro in existence. If so, I'll admit I suck. In the interim, shit works.
My preference is sudo sh -c 'pv mystupidimage.img > /dev/sda' which is essentially just cat with a built-in progress bar.
It can also just instantly nuke everything with a typo but yes, a brave move. The dd options also have a progress bar, though. I'd honestly settle for progress just indicated with shit like ###### like old ftp clients had.
 
I've never understood why people are so hostile to reading a manual, especially if it's a good manual. If your car is fucked, you pull out the manual and read it to understand the problem. If an appliance doesn't work like you expect it to, you pull out the manual and read it. It's not an absurd thing to do to ask people to understand the devices they use.

Of course, the actual reason why this is such an offensive ask to people is that most people are genuinely just goy and want nothing more than to be butt-fucked in the ass by something they can't understand in the name of "convenience", but those people are honestly a lost cause and every time they whine about their masters doing something they don't like, I can't help but roll my eyes. They actively build their lives around publicly traded companies whose whole goal is to butt-fuck them for an extra 0.3% growth in shareholder value, and then they're shocked when they get BJC (big jeet code) rammed right into their daily-driver operating system. It's honestly quite Sad!, but expected out of your average person.

The only time RTFM is just bad advice is when you're using dog shit software that doesn't have good documentation, which is not a problem unique to anything Linux related (ever tried to debug a niche Windows problem? It's agony. Woe upon those who have to drudge through Microsoft documentation and the official Windows support forum).

As an aside, I'd argue that good documentation is far more important than good code (insofar as you take the principle reasonably). I'd rather use a more complex system with a damn good manual than some valid trans girl's "rust-^.^-boymoder-estrogen" cargo library that runs as fast and safely as possible but has like 3 examples of its usage. Software is for people to use ultimately and if its bad for a person to use, it's a bad piece of software.

The fact a "documentation first" mentality isn't the norm is a testament more towards the terminal flaw of all FOSS advocacy: Every single one of its members has high-octane Aspergers syndrome, usually with another dollop of a lack of self-awareness on top of it.
Notice how in your long-winded drawn out autistic rant you address the point as if it's the opposite. Nobody says rtfm on a forum with good documentation. That's the point. The only forums where rtfm is expelled from lips like a Bible verse are the troon infested dogshit with terrible documentation. So you recognize the problem I'm calling out and the toxicity. Yet you pretend not to. I'm not against calling for people to read the manual. But if your default response to any question is rtfm then it implies your documentation is dogshit to begin with. Because an is with good documentation would see it's documentation visited more than its forums. Unity is a prime example. Its documentation is so pristine that the only time people ask for something in the documentation it's usually a newbie retard so people talk slowly like they're a retard. Meanwhile arch is a troon infested distro that is so known for its faggot femboy furry troon problem that the toxicity acts as a shield to prevent straight men from entering their vicinity. Hence why rtfm is thrown around like a Bible verse in the community. And this is coming from someone who runs arch just fine. Recognition that the documentation is absolutely dog shit and that the only reason I got answers is by being hostile and asserting what I knew was false to force the correct answer from them instead of genuinely asking questions is a very good reason most people refused to adopt it prior to more user friendly distros like Cachey and Manjaro. Arch is easy to use once you learn this strategy. But notice the only way to get answers from the arch troons is to troll them out of them. If the arch community understood what a woman is they might understand that the average human being finds their documentation to be dogshit. And thus improve it to the standards of every other distro. Which btw. Because of subdistros doing just that they have.

It's almost as if hundreds of millions of people telling you something is retarded and you say no and it's a product for commercial use not a research tool. You're the retard. And you've got an allergy to money. I would say get a job but having income would probably kill you.
 
So far as I can tell the main objections to Ventoy are the ever present "binary blobs" issues. Well, I hate the general concept of those. But having an Nvidia card, if I want that to work more than half as well as it can, I'm sucking down the dick of a binary blob right there, because the open source drivers blow. Can I audit their security? No. Do I want CUDA to work? Yes. So I gotta suck Satan's cock for that.
Well there's your problem, you're using NoVidya. Try AMD, it actually works fine with the open source driver (like 90% of the time, probably not for some very new cards). Dunno about CUDA, since that's NVIDIA-only, but AMD has its own compute stack (OpenCL, and HIP/ROCm according to Wikipedia); I don't game or do any computational stuff myself so I can't vouch for it.

I've had so many problems with NVidia on Linux that I just instinctively rip the cards out and put in an AMD (or just use the integrated graphics, which is more than enough for what I need). The open source nouveau driver is just dogshit: it doesn't even get frequency scaling or power management on the card right, fans spin at full speed, no acceleration, lagging and stuttering even on a basic desktop. Then I tried the proprietary driver and it sometimes compiles and installs, but then just gives you a black screen after you reboot, so you have to chroot in and reinstall or uninstall it. That happened one too many times before I said fuck it and swore off nvidia.
But I'm not a gamer and don't do computational stuff so I usually don't even have a gpu, just use the integrated graphics.
My preference is sudo sh -c 'pv mystupidimage.img > /dev/sda' which is essentially just cat with a built-in progress bar.
You're right, that is a simpler and arguably better way to do the same thing. Just one thing: NEVER use /dev/sdX, because you WILL fuck it up one day and wipe your OS drive or lose your data. ALWAYS use /dev/disk/ (by-id is my preference), because then you at least have to intentionally fuck up the path to write to the wrong disk.
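Quick way to double-check before writing (the names shown will obviously differ per drive):

# the by-id names are symlinks back to sdX, so you can see exactly
# which device you're about to overwrite
ls -l /dev/disk/by-id/ | grep -i usb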
That's because dd isn't meant for writing disks like that.
Why not? Okay, it is convoluted and maybe you need to look up the manpage to remember all the options, but it is perfectly suited for the task of writing stuff to a disk device. Why does it default to a 512b block size? It was written with the intention of using it on block devices with 512b sectors.

I often use it to check whether drives have already been wiped:
dd if=/dev/disk/by-id/foo count=1 | hexdump -C
or
dd if=/dev/disk/by-id/foo skip=1000000 count=1 | hexdump -C
to read an arbitrary sector somewhere in the middle or the end of the drive to see if it has only been partially wiped.

But it is a universal tool that can operate on any stream of data or file, so you can also use it for other things.
I'd agree, there are probably a dozen different things I did wrong, but the dude I was replying to gave a nearly perfect command line that should work all the time, which I actually had as a two-letter alias, and yet it somehow managed not to work much of the time.
I don't know what didn't work for you, because that's not enough information to diagnose your issues, but dd has never failed for me. If you have a defective flash drive or disk, that's not dd's fault; you will get I/O errors logged in dmesg. If you don't use oflag=direct you'll fill up the RAM buffer and then the transfer will seemingly stall, like I mentioned in a previous post, but that's not dd's fault either, that's how the OS block device caching works; it's why oflag=direct exists. In either case the data gets written to the block device at approximately the same speed. Maybe you were noticing the slowdown after writing a lot of data because the flash drive itself slowed down, like it thermal throttled or something.
 
It was conceived originally for converting between file data formats and swapping endianness, amongst other things, which is why it can seek to specific addresses, read specific lengths and/or counts of specific block sizes in a file, and perform all sorts of conversion operations on its data, e.g. byte order reversal. Its blocksize argument is only tangentially related to disk block sizes, but gained undue prominence in the folk history people invented for dd as a "disk duplicator" or "disk device" tool for writing images to disks. That's something it can do just fine, but it's probably the least efficient way to use it, sort of like using a swiss army knife as a hammer. Any other command is significantly faster than dd, and less likely to run up against memory problems or excessive latency from lots of small write operations. If you need to zero out a portion of a disk or perform some level of emergency data recovery, that's one thing, but using it to read and write sequential blocks is... well. Silly. The only reason it sticks around is because, like I said, you can run it as a simple one-line sudo command without having to worry about the privileges of your stdout redirection.
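For instance, the sort of job those options were actually meant for (file names made up):

# convert an EBCDIC tape image to ASCII, swapping each pair of input bytes
dd if=tape.ebcdic of=tape.txt conv=ascii,swab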

You're right, that is a simpler and arguably better way to do the same thing. Just one thing: NEVER use /dev/sdX, because you WILL fuck it up one day and wipe your OS drive or lose your data. ALWAYS use /dev/disk/ (by-id is my preference), because then you at least have to intentionally fuck up the path to write to the wrong disk.
100% agreement. I've lost too many home drives to that sort of thing.

e: speeling misteaks
 