The Linux Thread - The Autist's OS of Choice

Honestly, Docker is pretty neat. When it's properly set up, all you need to do is clone the core mount points and the config.yml file. For me the mount points are /mnt/docker/* and /home/docker/* (as /home is on a larger drive), plus the /data pool.
It's fucking bullshit if you don't have local storage though. You can't just point the Docker directories at an NFS share; they require filesystem features that NFS doesn't support. If you want to use network storage you have to mount a loopback file, which sucks.
On the desktop, all three have declined in user experience from their heydays.
PURE schizophrenia
 
https://docs.docker.com/reference/cli/docker/volume/create/ specifically mentions creating a volume on NFS. Maybe it doesn't work when it's a bind mount?
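For what it's worth, the docs' NFS case boils down to passing mount options to the local volume driver; a minimal sketch (the server name and export path here are made up):

```shell
# Create a named volume backed by an NFS export (hypothetical server/path).
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=nas.local,rw,nfsvers=4 \
  --opt device=:/export/docker-data \
  nfs-data

# Use it like any other named volume; the share is mounted on first use.
docker run --rm -v nfs-data:/data alpine ls /data
```

This only covers volume data handed to containers, though, not Docker's own storage under /var/lib/docker.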
No, I'm talking about storing the containers and/or their build artifacts themselves, not data for containers. I have really cheap, shitty boot SSDs in my otherwise good 14700F "server" because I have a massive fuck-off NAS to actually store data. When I've had to compile modified versions of containers for particularly large projects involving CUDA/PyTorch etc., not only do I get fucking I/O-limited on local storage, it obviously *runs out* when I keep changing broken shit and retrying the compilation. And this shit takes hours to compile, so wiping out the cache completely isn't desirable either. I hate Docker so much!
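Tangentially, one partial workaround for the "pruning loses hours of build cache" problem is BuildKit's external cache backends, which let the layer cache live in a registry instead of in /var/lib/docker; a sketch (the registry URL and image names are hypothetical):

```shell
# Export the build cache to a registry so wiping local storage doesn't lose it,
# and import it again on the next (re)build attempt.
docker buildx build \
  --cache-to type=registry,ref=registry.local/myimage:buildcache,mode=max \
  --cache-from type=registry,ref=registry.local/myimage:buildcache \
  -t registry.local/myimage:latest .
```

mode=max exports intermediate layers too, which is what matters when a long CUDA build keeps failing partway through.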
 
Oh right. Yes, I think it needs to be either overlay2 or btrfs.
Though there's nothing stopping you from mounting your NAS into /root or /tmp in your container, and doing your work there. Or, you know, just pay $50 for an NVMe.
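Concretely, that would be something like checking the daemon's storage driver and then doing the build work on a bind-mounted share (the NAS path here is made up):

```shell
# Check which storage driver the daemon is using (overlay2, btrfs, etc.).
docker info --format '{{.Driver}}'

# Do the heavy work inside the container on the NAS instead of local disk:
# bind-mount the share and set it as the working directory.
docker run --rm -it -v /mnt/nas/scratch:/tmp/work -w /tmp/work ubuntu bash
```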
 
Been distrohopping a little bit more.

I have overall decided Arch is what I will stick with, but I wanted to keep trying other stuff with no real expectation of keeping it.

I've come to find out I'm too dumb to use runit (or at least to use it well). But OpenRC is fine. And as much as people hate systemd, it just works really well.
 
Or loopback-mounting /var/lib/docker as a block file on the NAS over the network, which is the most sane option. I can't use an NVMe; all the PCIe lanes are taken. I put all my expensive storage in my NAS FOR A REASON! Maybe Docker should be LESS SHIT and work with CONVENTIONAL STORAGE SYSTEMS.
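The loopback approach would look roughly like this, assuming the NAS export is already mounted at /mnt/nas (the path and size are made up, and everything here needs root):

```shell
# Create a sparse 200 GiB image file on the NFS share and format it ext4.
# -F is needed because mkfs refuses regular files without it.
truncate -s 200G /mnt/nas/docker.img
mkfs.ext4 -F /mnt/nas/docker.img

# Stop the daemon, mount the image over /var/lib/docker, then restart.
systemctl stop docker
mount -o loop /mnt/nas/docker.img /var/lib/docker
systemctl start docker
```

Since ext4 inside the loop file supports everything overlay2 needs, Docker sees a normal local filesystem even though the bytes live on the NAS.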
 
Ubuntu is good, unironically. I wouldn't use the stock UI and would opt for XFCE personally but as a distro it's very straightforward, just works, and everything works with it.
True, for the average person it's a "just works" distro, and the desktop environment flavors are cozy. Though I haven't used it since around the time Snaps first got introduced, so I can't say much about how good it is now.
 
I feel like my experience of Arch being the first distro I actually tried kinda spoiled me, because I could never use Ubuntu without thinking "Ugh too much useless stuff, and MAN does this look ugly." There are the flavors, yeah, but those aren't the same to me.

I also took it a step further by deciding "You know what? Fuck you systemd, you slowed down my boot process for TOO LONG!" and switched over to Artix with OpenRC, which works like a charm. My only complaint is having to install -openrc variants of packages, but oh well, it's better than systemd's shenanigans.
 
My first Linux distro was Red Hat 6.

Ubuntu Breezy was the first Linux distro I was ever able to just get working out of the box with no additional config.

They've made some missteps over the years, but Canonical has earned a reputation for making stuff that is no-fuss, while being relatively up to date with the latest release & offering long-tail compatibility with LTS releases.
 
I may be mistaken, but isn't running code over the network a bad thing? Like, any latency or dropped packets could cause the containers to error or crash?
 
Directly attached storage has latency and r/w errors too, my guy. There are billions and billions of Docker swarms operating out of network storage right now; they just aren't directly mounting an NFS share to /var/lib/docker (since that is impossible). Core operating systems are netbooted (PXE) all the time... It's taken into account.
 
I may be mistaken, but don't those Docker swarms precache the container images? Like, there's a repository where the images are stored, and when a specific unit goes to deploy an image it downloads a copy to local storage before starting it?
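That is essentially the standard flow: nodes pull from a registry into their local image store before the container starts. A sketch (the registry and image names are hypothetical):

```shell
# Pre-pull on a node so deployment doesn't block on the network later.
docker pull registry.local/myapp:1.2.3

# A swarm service does the equivalent on each node automatically at deploy time.
docker service create --name myapp --replicas 3 registry.local/myapp:1.2.3
```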
 
It depends. It doesn't have to be docker either. Diskless servers are a thing.
 
Diskless servers load the OS and programs into a ramdisk, or offload processing to a different server that has the applications installed locally. Nobody is trying to run an application directly off a network drive, and any relevant guide where the topic comes up says not to do it.

Some games and programs now don't even run off hard drives and must be installed on an SSD.
 