The Linux Thread - The Autist's OS of Choice

What I'm struggling with is the business of separating the / and /home partitions. Searching around, I find lots of talk about how it's good to do because you can reinstall or switch distros and keep all your personal stuff, but I'm not even sure what "personal stuff" entails. If I think of it in Windows terms, it was stuff like the user folders (Documents, Pictures, etc.) which I never used, so I didn't care.
Where Linux differs from Windows (and I'm assuming Arch is like Debian in this) is that configuration files for applications are stored in /home/your-username by convention. On Windows custom settings for applications could be stored in the application's folder, a random directory in your user folder or, God forbid, buried in the registry.

By putting /home on a separate partition, all the settings from the old install will still be there after reinstalling the OS and applications. This includes UI settings like themes. If an application conforms to the XDG Base Directory Specification, the configuration files will be in /home/your-username/.config by default.
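To make that concrete, here's a rough Python sketch of the lookup an XDG-aware program typically does (the app name "myapp" is made up): use $XDG_CONFIG_HOME if it's set, otherwise fall back to ~/.config.

```python
import os
from pathlib import Path

def xdg_config_dir(app_name: str) -> Path:
    """Resolve an app's config directory per the XDG Base Directory spec:
    $XDG_CONFIG_HOME if set, otherwise ~/.config."""
    base = os.environ.get("XDG_CONFIG_HOME") or Path.home() / ".config"
    return Path(base) / app_name

# e.g. /home/your-username/.config/myapp for a hypothetical app "myapp"
print(xdg_config_dir("myapp"))
```

So if you keep /home across a reinstall, that whole ~/.config tree (plus themes, dotfiles, etc.) comes along for free.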
 
You're right, I should have chosen LFS.
[attached image: keak1b9j4sl81.jpg]
She will never let you down.
 
That's like learning how to walk by climbing Mount Everest, but you do you.
My first Linux installation was, I wanna say, Red Hat Linux? Or Debian? Mandrake? At any rate, the installer failed at some point because my IDE controller wasn't properly recognized (yes, this was a while ago), and the error messages that ended in a kernel panic were absolutely cryptic to me; I had no idea what was going on. This was a different time, so simple solutions weren't a Google away. Then I saw the superb Gentoo installation handbook and Gentoo's promise that you actually will have control over every aspect, and installed Gentoo instead, which worked on the first install since I got to configure my kernel myself. So I started off with Gentoo. Gentoo at that time had an amazing installation handbook that was way beyond any documentation other distributions offered (it mostly also told you the "why", not only the "how"), and even as a completely inexperienced user I got further with it because it actually explained everything to me; all I needed to do was read.

It's all the same software anyways, and I'd argue that distributions that make you take the helm are more useful for learning and actually becoming a Linux user than distributions where distro jannies decide everything for you. When something then inevitably breaks or doesn't work as expected, the experience is worse and more jarring because the new user expected smooth sailing, and a lot of these Windows-for-poor-people distributions and their communities are not really forthcoming with technical solutions beyond "turn it off and on again", as they themselves never learned anything beyond "trust the distro janny to do everything for you".

But that's just like, my opinion, man.
 
@Friendly Primarina I don't know if it's because I've had a good night's rest, but you're making me think I ought to do it. Maybe I should do it just because it's "best practice." I wouldn't know how much space I'd need to allocate to either though. Regarding the hassle of resizing partitions as mentioned by @Solid Hyrax, that's something I could handle with either LVM or btrfs, probably? Can I use btrfs subvolumes to make a /home subvolume that would survive reinstalls, or am I better off using LVM to keep /home out of the same partition as /?
 
I wouldn't know how much space I'd need to allocate to either though.
FWIW my root partition (the "/") is about 60GiB. It's at roughly 50% usage after 4 1/2 years. That's with heavy, but not "professional," usage. Maybe if your setup allows, you could have your root partition on a separate drive entirely? Perhaps assign the remaining space on the storage device with the root partition to something disposable or easily remade, like WINE prefixes?
 
Anything I cared enough to save - photos, projects, movies, etc. - never touched my C drive unless maybe it hit my Downloads folder before being moved.
If you already have a folder structure, keep things as they are and learn how symbolic links work. You can keep your existing structure; /home doesn't have to move anywhere.
Something that might help me decide if I want to bother with it is to know where stuff gets installed. If I installed an app through pacman, where does it end up? I see some info about it going to /usr/bin or /usr/sbin, so I need to be sure to allocate an appropriate amount of space to root for any packages I want to install, correct?
@davids877 and @Friendly Primarina already covered this, so I'll add on.

User data for Flatpaks is found in $HOME/.var/app. AppImages store data in various folders around $HOME, as do packages installed using pacman. To understand more, learn about $XDG_DATA_DIRS, $XDG_CONFIG_DIRS, and the other $XDG_* vars.

Flatpaks can be installed system-wide or just for your user. If they are system-wide, you find them in /var/lib/flatpak; if they are local, you find them in $HOME/.local/share/flatpak. I wouldn't recommend going down the container rabbit hole just yet.
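If you want to poke around yourself, here's a small Python sketch (just an illustration; the paths are the standard Flatpak locations and XDG spec defaults) that lists what's in the usual Flatpak directories and prints the XDG search paths everything else uses:

```python
import os
from pathlib import Path

home = Path.home()

# Where Flatpak keeps installed apps (system-wide vs. per-user) and per-app data.
locations = {
    "system installs": Path("/var/lib/flatpak/app"),
    "user installs": home / ".local/share/flatpak/app",
    "per-app user data": home / ".var/app",
}

for label, path in locations.items():
    entries = sorted(p.name for p in path.iterdir()) if path.is_dir() else []
    print(f"{label}: {path} ({len(entries)} entries)")

# XDG search paths that regular (non-Flatpak) packages use for data and config.
print("XDG_DATA_DIRS:", os.environ.get("XDG_DATA_DIRS", "/usr/local/share:/usr/share"))
print("XDG_CONFIG_DIRS:", os.environ.get("XDG_CONFIG_DIRS", "/etc/xdg"))
```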
 
I somehow managed to install KDE Plasma without a browser, but I'm up and running. I ended up using LVM so that I can resize. And thank you all for the help; because you were able to tell me where packages get installed, I was able to get my KeePass plugin set up so I could access my password manager. Very cool.
keep things as they are and learn how symbolic links work
I haven't explored symlinks in Linux yet. I used them in Windows quite a bit, but can Linux symlinks be set to point at network storage? It's not even iscsi, right now it's just SMB. Part of my prep for this was to set up a Windows machine that hosts network storage and allows me to continue my backups to BackBlaze. (Are they going to notice there's a Windows 11 desktop with 60tb of storage?)
 
I haven't explored symlinks in Linux yet. I used them in Windows quite a bit, but can Linux symlinks be set to point at network storage? It's not even iscsi, right now it's just SMB. Part of my prep for this was to set up a Windows machine that hosts network storage and allows me to continue my backups to BackBlaze. (Are they going to notice there's a Windows 11 desktop with 60tb of storage?)
You can symlink from anything to anything. As long as it's in the file system and you can create files, you can do links.*

* There are probably a handful of exceptions but fuck off
 
There are probably a handful of exceptions but fuck off
Not really any exceptions. A symlink is just a pointer to another path; it works for devices, sockets, pipes, directories, sysfs and procfs, across filesystems, and even for objects that don't exist.

This, in some ways, is how they can be a security risk. If you can get root to write a file in a directory you can make a symlink in, then you can do stuff like link the file to /etc/shadow and make all the password hashes go away.
 
They can point anywhere on any mounted file system. Network storage needs to be mounted first, which likely already solves the problem you're looking at, as you can mount a filesystem - and so a shared drive - anywhere you like.
They can point to places that don't even exist yet as well; you just need to be sure the target is set up before you use it. You can put off mounting things for eternity, right up until you actually use that symlink.
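A tiny Python sketch to show what that means in practice (the /mnt/nas/movies path is made up): the link is just a stored path, so it can sit there pointing at a share that isn't mounted yet.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "movies")
    target = "/mnt/nas/movies"   # hypothetical mount point for an SMB share

    os.symlink(target, link)     # create the link; the target need not exist

    print(os.readlink(link))     # -> /mnt/nas/movies (just the stored path)
    print(os.path.islink(link))  # -> True  (the link itself exists)
    print(os.path.exists(link))  # -> False (dangling until the share is mounted)
```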
 
The fuck even IS a flatpak anyway? I see that name a lot, and yeah, somehow I "sensed" I should prefer a package installation when I have the option, but for some things I don't.

.......

Still testing my controllers but I won't really know until I use them during a stream if my issues were just bluetooth related or if somehow streaming was causing it (I completely forgot that was a possibility).
Back in the day when a dodgy company wanted to distribute a badly coded proprietary application that would work (badly) across very different Linux distributions, you would distribute a 'static binary', which had all the various libraries that it depended on built into the one executable.

This meant that when you ran these applications, they would use the built-in copies of the libraries that were compiled in. It also meant that your memory consumption on a tight system could be massively blown out, as every library that was used by both the bad application and other things on your system would effectively be loaded twice. On the 'positive' side, it meant that a badly coded application that depended on undocumented/unstable functions in a library would continue to work.

This approach has returned, partially because large amounts of memory in consumer devices have made programmers more open to not caring about memory consumption, partially because of general laziness.

My understanding is that the main reason for doing this 'flatpak'/'snap'/whatever other bullshit, as opposed to traditional static binaries, is that there is a lot more enforcement of free software licensing than there used to be, and the license terms around static linking are also a lot tighter than they used to be in the older versions of the GPL/LGPL. Back in the day Netscape or Corel would just compile in GPL/LGPL license components into their web browser or word processors and figure that noone would call them on it. By having these bullshit 'packages' that unpack themselves or have some dodgy systemd-linked component like snapd unpack them, (((technically))) it's not a single binary executable and the copies of the libraries that are separately included in every single package are just being separately loaded into your system by some separate loader executable. To quote Richard Stallman, honorable among the Hebrews:
That's a bad thing. That's a bad thing. It's a foolish thing. It's hard to trust these snaps and flatpaks. And not only that, but those platforms distribute non-free software, so it's a bad idea to point to them at all. And in addition, it means that there aren't multiple -- you know with with distributions, as distributions package a program they will look at the program and thus they can fix things, if they see anything bad they can change it. And thus, this is part of how users collectively maintain their control. I've never installed a snap or a flatpak. And I don't think I want to. I wouldn't. I don't trust it. How do I know whether that flatpak includes some non-free software. How could I check? I don't think they're designed to let people check. They're not designed for anyone to be able to build the program. As far as I know, I could be mistaken but if all everybody does is just install the binaries, in the flatpak. Nobody's building it, how does anybody know if the complete source is available.
Hopefully, Stallman can resume full dictatorial control of the GNU Project and make arbitrary and capricious changes to the licenses to make the practice of snapflatting completely prohibited for anything that references a GPL or LGPL library.
 
This approach has returned, partially because large amounts of memory in consumer devices have made programmers more open to not caring about memory consumption, partially because of general laziness.
Don't forget the persistence of early-'90s-style DLL Hell in Linux, without even the "current directory is first in the library search path" hack-around.
 
Programmers nowadays work on so many levels of abstraction. The usual excuse given is that they're very busy people who just can't and shouldn't be *bothered*, but in honesty it's usually just a simple lack of knowledge. Lack of knowledge of how the parts come together. Lack of knowledge of how a thing works. I often stumble across random popular and semi-popular software and think to myself "Why does this abstraction layer exist, didn't they know that you simply can..." and the answer is usually simply no. They didn't. And they don't care to learn. I blame hustle culture, where quantity always trumps quality. When AI gets semi-competent and starts hallucinating less and producing more somewhat-workable code, this will get even worse. Most programmers already don't know what half their program does because of the dozens and dozens of pulled-in third-party dependencies. Now the AI will also write half of the actual code.
 
Programmers nowadays work on so many levels of abstraction. The usual excuse given is that they're very busy people who just can't and shouldn't be *bothered*, but in honesty it's usually just a simple lack of knowledge. Lack of knowledge of how the parts come together. Lack of knowledge of how a thing works. I often stumble across random popular and semi-popular software and think to myself "Why does this abstraction layer exist, didn't they know that you simply can..." and the answer is usually simply no. They didn't. And they don't care to learn. I blame hustle culture, where quantity always trumps quality. When AI gets semi-competent and starts hallucinating less and producing more somewhat-workable code, this will get even worse. Most programmers already don't know what half their program does because of the dozens and dozens of pulled-in third-party dependencies. Now the AI will also write half of the actual code.
There are so many times I look for an example and find: load this layer, this library, this other thing, and then use this 3-line snippet. Or maybe I just do it myself and use 4 lines instead. I avoid touching JavaScript, but every time I need to do something trivial it's always "Load React".

I'm luckily not a programmer, but if I were I think I'd stick to microcontrollers, where speed and efficiency still matter. Although maybe not as much with parts like the RP2040. Even there I ran into some of the "helper" stuff for multi-threading, like queues, being really slow. So I just pretended I knew what I was doing and used shared memory instead.

Damn it, I once bitbanged 9600 bps on a Microchip PIC interleaved with A/D conversion and math.... and I liked it.
 
Programmers nowadays work on so many levels of abstraction. The usual excuse given is that they're very busy people who just can't and shouldn't be *bothered*, but in honesty it's usually just a simple lack of knowledge. Lack of knowledge of how the parts come together. Lack of knowledge of how a thing works. I often stumble across random popular and semi-popular software and think to myself "Why does this abstraction layer exist, didn't they know that you simply can..." and the answer is usually simply no. They didn't. And they don't care to learn. I blame hustle culture, where quantity always trumps quality. When AI gets semi-competent and starts hallucinating less and producing more somewhat-workable code, this will get even worse. Most programmers already don't know what half their program does because of the dozens and dozens of pulled-in third-party dependencies. Now the AI will also write half of the actual code.
For some languages. Rust yes, JavaScript very yes, C and C++ not so much.
 
For any language that has a "package manager", it is true. The best way to prevent that, even with competent developers, is to have a well-maintained standard library.
I fucking hate language package managers.
USE THE MOTHERFUCKING SYSTEM MANAGER YOU FUCKING NIGGERS
thank you
 
Case in point, Anki, the flashcard program that's recently been carcinized. Flashcard programs aren't that complicated, right? Surely it just needs Qt?


Nope. It pulls in 640 crates, doing God only knows what. Utterly ridiculous.
It's like the Rust people spent all their time obsessing about memory safety, but none at all worrying about supply chain attacks.
 