The Linux Thread - The Autist's OS of Choice


Some linux and google news.

On the topic of ricing a few pages back.

I like ricing. I don't go TOO crazy with it, but I do like spending some extra time getting my window manager set up exactly how I want it, so I can spawn what I use often with keybindings.

To have relevant system info in my bar, like CPU temp, internet connection, memory usage, and some stuff in the system tray to make connecting to wifi, bluetooth, etc. fast and easy.

Also I like getting it looking nice. I feel like spending a little extra time making your system pleasant to look at makes using it nicer. That's just my opinion though.
 
Can't you use sshfs if you need to connect to a remote file system? Seems that NFS is a nightmare to work with.
NFS is fine. My main problem with it is that it hangs if it can't reach the server, rather than failing semi-gracefully like SMB does on Windows.

Yes ext4 is a file system. You can make anything a file share if you know what you are doing.
If you're sharing a block device and somehow mounting on multiple computers at once, via NBD or whatever, that's a very quick way to corrupt your files.
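To make the failure mode concrete, here's roughly how you'd end up there (hostnames and export names are made up, and obviously don't try this on data you care about):

```
# On the server: export a raw disk over NBD (hypothetical config)
# /etc/nbd-server/config
#   [scratch]
#   exportname = /dev/sdb

# On client A:
nbd-client -N scratch storagebox /dev/nbd0
mount /dev/nbd0 /mnt   # kernel caches fs metadata locally

# On client B, at the same time:
nbd-client -N scratch storagebox /dev/nbd0
mount /dev/nbd0 /mnt   # also mounts fine; no locking, no shared cache

# Both kernels now write journal and metadata independently.
# ext4 is not a cluster filesystem, so the on-disk state diverges
# almost immediately and fsck finds the wreckage later.
```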
 
Kind of a specific question, but has anyone here done self-hosted K8s?
I can't speak to k8s, but I have recently been nix-pilled.

It makes me feel very powerful. I've been using Linux for 15 years, and it's sort of a similar feeling as when I first switched. I suspect this will give you the ability to do much the same thing as k8s, without the overhead of using crappy corpo software.

There is a nifty little deployment tool for managing clusters of instances: https://github.com/zhaofengli/colmena
 
It's just ancient and designed for wired networks where every client had a static IP, since, well, that was the only thing in existence at the time.

There's very little fault tolerance. If you're spinning up an enterprise NFS server accessed by remote employees on sketchy wireless connections, pour a forty out for your files.
Sorry for double-post, but I also recently learned that ZFS has built-in Samba support; presumably it can do a ton of optimizations since it has access to the block devices at the hardware level. I think that's probably a good candidate if you need super high performance network storage.
 

For me, Konsole is more than enough, with Terminator being a close second. I go back and forth.
On KDE I use Konsole, on Gnome I use Black Box because Gnome soydevs decided that their terminal would have zero customization.
 
NFS is fine. My main problem with it is that it hangs if it can't reach the server, rather than failing semi-gracefully like SMB does on Windows.


If you're sharing a block device and somehow mounting on multiple computers at once, via NBD or whatever, that's a very quick way to corrupt your files.
Ah, see, I never thought of mounting multiple computers to a drive. If I ever do anything with a network drive I set up a completely separate PC with either Proxmox or OpenMediaVault and let that handle any local cloud drive or anything I need.

Could you give me a tl;dr summary of exactly how that's even done, and how badly it corrupts the drive? I'm a little curious, all things considered.
 
I have a question. I currently have a bare-metal seedbox that I want to migrate to being all Docker-based, but I'm not sure what level of complexity I have to look forward to in order to migrate a signed and managed version of nginx to a Docker version with a GUI like Nginx Proxy Manager. Should I just be able to write down the ports that nginx is proxying, uninstall it (and the panel home page), then install NPM and Organizr, move the configuration over, and re-sign the keys?

And how does it handle it if some of my pages are HTTP only (which I want to keep on the local network) and others support HTTPS? Should I run two instances of Organizr and NPM and have NPM set to forward requests from outside the network to HTTPS?
 
has anyone here done self hosted K8s? I want to try it as a learning project, but with minimal experience and documentation being niche I was hoping for some suggestions.
Kubernetes is very painful to learn due to the documentation being very reference-like and its guides not explaining anything besides hello worlds.
K3s is a common way to self-host k8s, it should come with all essentials.
I was also wondering if there are any issues with just mounting an old-fashioned NFS share for my main storage repository. I read some docs recommending against it, saying it can be slow.
NFS itself is fine to use for dumb file storage. NFS is fast enough for most homelab usecases. Unless you have set out to do something incredibly autistic, you will not run into any speed issues.
K8s has good support for NFS.
SQLite on NFS is unstable due to developers often not supporting it properly. Expect very bad performance or errors when you run SQLite on NFS.
There are a lot of homelab oriented container images that do not support NFS due to 'quality of life' permission scripts that run on init.

Taking a generic Jellyfin instance as an example, your config folder which contains your database should be on a normal filesystem, while it is completely fine for your media to be on NFS.
So, a good way to have your mounts in this case would be something like
/config -> some iscsi disk with an ext4 fs so the database doesn't complain
/my_media -> nfs share containing media
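In k8s terms, the media half of that split might look something like this (the name, server address, and export path are made up, and the iSCSI PV for /config is elided):

```yaml
# Hypothetical sketch: media on a read-mostly NFS PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 1Ti
  accessModes: ["ReadOnlyMany"]
  nfs:
    server: 192.168.1.10     # placeholder NAS address
    path: /export/media      # placeholder export
```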
 
It's perfectly fine for connecting servers hardwired to each other tho right?
As long as you've moved off 10 Mbit coax, yes, it's fine. I've been using NFS for years for all my non-Windows file sharing at home. Large files, all the time.

NFS is fine. My main problem with it is that it hangs if it can't reach the server, rather than failing semi-gracefully like SMB does on Windows.
soft and intr are your friends. Not 100% friendly, but pretty nice.

Also, it's best if your file server doesn't crash; that's usually a problem on its own.

I'm also a little confused where "NFS is Slow" comes from.
Over 90% of wire speed seems fine to me.
dd if=/mynfs/big-ass-13GB-file of=/dev/null bs=8192k status=progress
13875719088 bytes (14 GB, 13 GiB) copied, 12.5706 s, 1.1 GB/s

Small files can have more overhead than local storage, but once you get going, file transfer speeds are fine.
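For reference, a sketch of what soft mounting looks like in /etc/fstab (server name and paths are hypothetical; also note that `intr` has been a no-op on modern kernels for years, since NFS operations are interruptible by default now, so `soft` plus a timeout is what actually keeps a dead server from wedging clients):

```
# /etc/fstab - NFS mount that eventually errors out instead of hanging forever
nas:/export/data  /mnt/data  nfs  soft,timeo=100,retrans=3,_netdev  0  0
```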
 
As long as you've moved off 10 Mbit coax, yes, it's fine. I've been using NFS for years for all my non-Windows file sharing at home. Large files, all the time.


soft and intr are your friends. Not 100% friendly, but pretty nice.

Also, it's best if your file server doesn't crash; that's usually a problem on its own.

I'm also a little confused where "NFS is Slow" comes from.
Over 90% of wire speed seems fine to me.
dd if=/mynfs/big-ass-13GB-file of=/dev/null bs=8192k status=progress
13875719088 bytes (14 GB, 13 GiB) copied, 12.5706 s, 1.1 GB/s

Small files can have more overhead than local storage, but once you get going, file transfer speeds are fine.
One tuning thing with NFS is it tends to default to sync writes which SMB does not. That can cause slowness in some workloads. There are many fixes depending on your exact setup.
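You can see the cost of forced sync even on a local disk; the same gap is what people hit when an NFS export uses `sync` (the server-side default in /etc/exports) instead of `async`:

```shell
# Buffered writes: the kernel acks as soon as data hits the page cache.
dd if=/dev/zero of=/tmp/buffered.bin bs=1M count=16 status=none
# Synchronous writes: each block must reach stable storage before dd continues.
dd if=/dev/zero of=/tmp/synced.bin bs=1M count=16 oflag=dsync status=none
ls -l /tmp/buffered.bin /tmp/synced.bin
rm -f /tmp/buffered.bin /tmp/synced.bin
```

Flipping the export to `async` is the blunt fix, with the usual caveat that acknowledged writes can be lost if the server crashes.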
 
Sorry for double-post, but I also recently learned that ZFS has built-in Samba support; presumably it can do a ton of optimizations since it has access to the block devices at the hardware level. I think that's probably a good candidate if you need super high performance network storage.
that's not true. it's just an auto configurator for regular samba. zfs does not implement an smb server in their code. there is no performance benefit to be had.
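for reference, the property in question is `sharesmb`; all it does is generate the share definition and hand it to the regular samba daemon (dataset name here is a placeholder):

```
# Enables SMB sharing for a dataset; ZFS writes the share config,
# but the actual file serving is plain smbd.
zfs set sharesmb=on tank/media
zfs get sharesmb tank/media
```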

ksmbd, however, is a candidate for more performant SMB file hosting on Linux. I am imminently (tomorrow) swapping out my server board for one with built-in 10GbE ports with RDMA support, and I got a 10GbE RDMA-supported NIC for my desktop. ksmbd claims to support multichannel as well as RDMA for Windows clients. Might be able to get a hot 2GB/s.
 
it's always a good idea to try and push yourself and learn new things, however I would recommend following the docs as closely as you can while you're learning it.
don't get me wrong, trial-by-fire learning is a great way to learn new things, but for enterprise things you'll wanna learn how it's expected to be configured and installed before experimenting with it, so you learn about any quirks beforehand.
debian shouldn't be a problem, but some enterprise software may only support ubuntu or redhat. if it supports ubuntu then it could work on debian, but it's a gamble.
if the docs say to stay away from nfs, then you really should. I've never personally worked with nfs but I've heard anecdotes from old linux/unix admins that nfs can be a nightmare to work with.
even if you don't mind the speed penalty/slowness, speed doesn't always mean MBs per second. it could refer to response time, reliability during transfers, or even IOPS. those can ruin an enterprise setup to the point of being unreliable or nonfunctional if they don't meet the requirements.
if you're doing this for yourself then that's different, but if it's for a business (real or fake for learning purposes) then you've gotta keep this in mind because it can cause massive issues.
I'd also recommend posting on the Level1Tech forum since they're more likely to help you understand and troubleshoot anything you come across. (even if the software isn't popular, someone on there most likely has used it at some point)
hope this helps.
Great advice generally, but I'm looking more for personal experience. K8s officially supports NFS and it's not discouraged in their official docs. I have seen some specialized solutions (Talos) recommending against it. However, the proposed alternatives add additional complexity, such as using a Ceph cluster with high bandwidth, which seems highly excessive for my use case. I've had a pretty good experience with NFS personally, but I'm also not running anything clustered, and I could see how a more complex solution like k8s might not play as nicely. I just wanted to see if anyone has personal anecdotes for this kind of setup. It's pretty niche but can be an aspiration for enthusiasts. I will check w/ Level1 though - I like them quite a bit and even own one of Wendell's KVMs.
I can't speak to k8s, but I have recently been nix-pilled.

It makes me feel very powerful. I've been using Linux for 15 years, and it's sort of a similar feeling as when I first switched. I suspect this will give you the ability to do much the same thing as k8s, without the overhead of using crappy corpo software.

There is a nifty little deployment tool for managing clusters of instances: https://github.com/zhaofengli/colmena
Nix is definitely on my list of tools I want to try out; I really like the concept. I just haven't found a good use case yet and have other things prioritized.
While I tend to dislike a lot of corpo software, k8s is a case where that's part of the appeal. It's a great resume builder, and it's not built around the worst aspects of corporate software, like licensing and support agreements. I could easily keep running most of my infrastructure on a single server running Docker; I just want to improve my knowledge and experience.

I have a question. I currently have a bare-metal seedbox that I want to migrate to being all Docker-based, but I'm not sure what level of complexity I have to look forward to in order to migrate a signed and managed version of nginx to a Docker version with a GUI like Nginx Proxy Manager. Should I just be able to write down the ports that nginx is proxying, uninstall it (and the panel home page), then install NPM and Organizr, move the configuration over, and re-sign the keys?

And how does it handle it if some of my pages are HTTP only (which I want to keep on the local network) and others support HTTPS? Should I run two instances of Organizr and NPM and have NPM set to forward requests from outside the network to HTTPS?
Some of this will depend on your tooling, and I haven't used Nginx Proxy Manager personally, but I run a setup that's very similar to this. I use Traefik for my proxy and use wildcard certs there. It can be quite convenient because as long as the routes are correctly configured, I don't have to worry about which ports the containers use for HTTP and HTTPS most of the time, and the SSL certs are handled automatically by Traefik. This is a great tutorial if you're interested in this stack: https://www.youtube.com/watch?v=n1vOfdz5Nm8

For general advice, I would remember that Docker has its own internal network, so you'll need to create a docker network that holds your reverse proxy and the services you are routing to. You will also be mapping ports from the host to ports on the containers, so if you're managing these ports in NPM (that abbreviation always fucks with me) that may not be necessary when moving to Docker. Your ports should be documented in the Dockerfile (or compose file, which I highly recommend), so it's part of your main Docker config instead of making these changes in a dedicated application. Remember that with Docker you are running many tiny servers within the Docker network on the host. While everything moves through the host, you aren't truly running a single server like you're used to. Your architecture will likely need to change a bit if you really want to take advantage of it. Also realize that it's super quick to make additions and changes to your setup, and it's isolated. Don't feel too afraid to play around and see what works; it's one of the benefits of this design.
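As a minimal sketch of that shared-network idea (the images are the real NPM and Organizr ones, but service names, tags, and ports are illustrative, not a drop-in config):

```yaml
# docker-compose.yml - proxy and app share one user-defined network;
# only the proxy publishes ports on the host.
services:
  proxy:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81"        # NPM admin UI
    networks: [web]
  organizr:
    image: organizr/organizr:latest
    networks: [web]    # reachable from the proxy as http://organizr
networks:
  web: {}
```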

Kubernetes is very painful to learn due to the documentation being very reference-like and its guides not explaining anything besides hello worlds.
K3s is a common way to self-host k8s, it should come with all essentials.

NFS itself is fine to use for dumb file storage. NFS is fast enough for most homelab usecases. Unless you have set out to do something incredibly autistic, you will not run into any speed issues.
K8s has good support for NFS.
SQLite on NFS is unstable due to developers often not supporting it properly. Expect very bad performance or errors when you run SQLite on NFS.
There are a lot of homelab oriented container images that do not support NFS due to 'quality of life' permission scripts that run on init.

Taking a generic Jellyfin instance as an example, your config folder which contains your database should be on a normal filesystem, while it is completely fine for your media to be on NFS.
So, a good way to have your mounts in this case would be something like
/config -> some iscsi disk with an ext4 fs so the database doesn't complain
/my_media -> nfs share containing media
This is exactly what I was looking for. I was already leaning towards the k3s route, so I'll go with it. I was worried about general NFS issues with k8s, so it's good to hear it has good support for these use cases. It seemed like any issues mainly affected larger, more resource-intensive deployments, but it's hard to say for certain in these situations.
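For anyone else going the same route, the stock k3s install is a one-liner (inspect the script first if piping curl into sh bothers you):

```
# Installs k3s as a systemd service; the kubeconfig ends up at
# /etc/rancher/k3s/k3s.yaml
curl -sfL https://get.k3s.io | sh -
```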
 
Some of this will depend on your tooling, and I haven't used Nginx Proxy Manager personally, but I run a setup that's very similar to this. I use Traefik for my proxy and use wildcard certs there. It can be quite convenient because as long as the routes are correctly configured, I don't have to worry about which ports the containers use for HTTP and HTTPS most of the time, and the SSL certs are handled automatically by Traefik. This is a great tutorial if you're interested in this stack: https://www.youtube.com/watch?v=n1vOfdz5Nm8

For general advice, I would remember that Docker has its own internal network, so you'll need to create a docker network that holds your reverse proxy and the services you are routing to. You will also be mapping ports from the host to ports on the containers, so if you're managing these ports in NPM (that abbreviation always fucks with me) that may not be necessary when moving to Docker. Your ports should be documented in the Dockerfile (or compose file, which I highly recommend), so it's part of your main Docker config instead of making these changes in a dedicated application. Remember that with Docker you are running many tiny servers within the Docker network on the host. While everything moves through the host, you aren't truly running a single server like you're used to. Your architecture will likely need to change a bit if you really want to take advantage of it. Also realize that it's super quick to make additions and changes to your setup, and it's isolated. Don't feel too afraid to play around and see what works; it's one of the benefits of this design.
thanks, I'm really looking forward to the change. there's a few containers that would make things better for my server, I just don't know if I have time to make the switch before I go on my trip.

one question tho, for a docker gui is it better to use Portainer, Yacht, Rancher, or something else?
 
thanks, I'm really looking forward to the change. there's a few containers that would make things better for my server, I just don't know if I have time to make the switch before I go on my trip.

one question tho, for a docker gui is it better to use Portainer, Yacht, Rancher, or something else?
I use Portainer, but realistically it ends up being a utility for monitoring more than configuring in my experience. Docker heavily benefits from the ability to reproduce your configs and share them with people working on similar projects, which is lost when you manage it with Portainer. Use Portainer to gain some familiarity, then move towards using compose files as you get more comfortable with the ecosystem. Also, if you are on Ubuntu, avoid using the snap version of Docker. Some documentation will push you towards it, and the permissions will not work properly with Portainer. It's a general headache, and you'll likely end up wasting time fucking around with snap and its permission scheme instead of actually working with Docker and the functionality it gives you.
 
I use Portainer, but realistically it ends up being a utility for monitoring more than configuring in my experience. Docker heavily benefits from the ability to reproduce your configs and share them with people working on similar projects, which is lost when you manage it with Portainer. Use Portainer to gain some familiarity, then move towards using compose files as you get more comfortable with the ecosystem. Also, if you are on Ubuntu, avoid using the snap version of Docker. Some documentation will push you towards it, and the permissions will not work properly with Portainer. It's a general headache, and you'll likely end up wasting time fucking around with snap and its permission scheme instead of actually working with Docker and the functionality it gives you.
My server is Debian so snaps aren't an issue. I think I only really want the GUI to start, stop, and maybe delete containers, and to check what the ports and mount points are mapped to.
 
I've nearly finished the migration to docker, but there's a step I'm missing.

I'm trying to proxy qBittorrent with nginx proxy manager, but it doesn't work when accessed from /qbittorrent. If I access it from the port number it's fine, but because I can't set a base URL it doesn't like it when /qbittorrent/ is added to the URL.
 
I'm trying to proxy qBittorrent with nginx proxy manager, but it doesn't work when accessed from /qbittorrent. If I access it from the port number it's fine, but because I can't set a base URL it doesn't like it when /qbittorrent/ is added to the URL.
Can't you proxy localhost/qbittorrent to localhost:port?
 
I've nearly finished the migration to docker, but there's a step I'm missing.

I'm trying to proxy qBittorrent with nginx proxy manager, but it doesn't work when accessed from /qbittorrent. If I access it from the port number it's fine, but because I can't set a base URL it doesn't like it when /qbittorrent/ is added to the URL.

You'll likely have to experiment because the docs appear to be garbage. It might require some custom config.
Some indicate simply slapping a / on the "Hostname/IP" will work (so: localhost/ ), others say custom nginx config is needed, like: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/3512#issuecomment-1954201567
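If you end up hand-writing it, the usual shape of a subpath proxy is something like the below as the location's custom config; the port and path are assumptions based on your setup, and a trailing slash on proxy_pass is what strips the /qbittorrent prefix before the request reaches the WebUI:

```nginx
# Strip the /qbittorrent prefix, since the WebUI has no base-URL setting.
location /qbittorrent/ {
    proxy_pass         http://127.0.0.1:8080/;   # trailing slash strips the prefix
    proxy_set_header   Host               $host;
    proxy_set_header   X-Forwarded-Host   $http_host;
    proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
}
```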
 
Any suggestions for a good used ~$600 laptop? I'm on my third t480 and I think it may be time to move on. An iGPU would be nice for the occasional game.
 