The Linux Thread - The Autist's OS of Choice

Someone in here a few pages back suggested doing a Debian install with no DE, setting up Openbox, and ricing it into something useful. Finally did this just now. It was a ton of fun, and I'm a bit shocked at how fast my 10 y/o laptop boots with it, so it's very useful as well. It's apparently largely trivial to turn this WM into a desktop environment; it didn't take long at all and the system is super snappy. With tint2 and PCManFM you basically have a 'shell' ready to go: panels, desktop icons, system tray, etc. I just installed packages that go into the system tray as you do on DEs, and after I added them to the conf they appeared there as normal with no need for configuration on the panel itself. There are even GUI programs to change the panels around and add launchers if you want, which I did. I went with jgmenu for the "Start menu", a simple little menu that I only customized with shutdown, restart and log out buttons. I got myself a nice little Winlite system that boots in seconds thanks to the thread, ty!
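For anyone wanting to copy this setup, the pieces basically just get launched from Openbox's autostart file. A sketch of mine, roughly (package names are Debian's; nm-applet is only an example tray applet, swap in whatever you use):

```shell
# ~/.config/openbox/autostart — runs on Openbox login.
# tint2 provides the panel, taskbar and system tray;
# pcmanfm in desktop mode handles icons and wallpaper.
tint2 &
pcmanfm --desktop &
nm-applet &    # example tray applet (NetworkManager); add your own here
```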

The only thing that turned into a wrinkle was my Kitty terminal launching slow as shit. I ran strace on it to see what was up and found that Python tries to write its bytecode cache into a system directory it doesn't have permission to write to, so it retried hundreds of times before giving up after about 3 seconds. I ended up redirecting the cache into my home directory instead of messing about with permissions, but maybe fixing the permissions is the better solution? I'm new, so I'm not sure. That was the only legit issue, but there was also having to add yourself to sudoers, which Debian insists on not doing during install. I say legit because I genuinely don't know whether skipping that is a security measure or the minimal installer is just broken. One thing that didn't happen this time but did before: I make a habit of running wipefs before installing Debian onto any drive, because last time the installer left behind a GRUB bootloader from another distro I'd hopped from and the system didn't even boot as a result. So, some heads up about all of that if anyone else does this. Would strongly recommend it; it's a good learning experience if you're new as well.
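For reference, the home-directory redirect can be done with a single environment variable (honoured by Python 3.8+); the cache path here is just the one I picked:

```shell
# Point Python's bytecode cache at a writable per-user directory
# instead of the system-wide package tree (PYTHONPYCACHEPREFIX, Python 3.8+).
export PYTHONPYCACHEPREFIX="$HOME/.cache/pycache"
mkdir -p "$PYTHONPYCACHEPREFIX"
# Confirm Python picked it up:
python3 -c 'import sys; print(sys.pycache_prefix)'
```

Put the export somewhere that runs before Kitty starts (e.g. ~/.profile) so the terminal inherits it.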
 
Today's "It's easier with Linux".

As I mentioned the other day, I added some stuff to my main desktop; the other addition was a new, larger NVMe, but due to the boot fun I decided not to move everything over just yet. I decided today was the day.
Usually I'd just dd the partitions to the new drive, but since I had LVM for / and /home and wanted to be rid of it, this time it was a bit more fun.

1. Make new partitions on new drive.
2. mkfs.fat (for the EFI partition), mkfs.ext4 and mkswap as appropriate.
3. Mount new root and rsync / and /home to it.
4. Reboot into live USB
5. mount new / and /boot/efi into temp dir.
6. Do one last rsync, including the EFI directory this time
7. Edit new /etc/fstab with proper UUIDs
8. Chroot into new root and run update-initramfs
9. Run update-grub. Which fails.
10. Bind mount /dev /sys and /proc into chroot.
11. Chroot back in and run update-grub
12. Check /boot/efi/EFI/debian/grub.cfg and notice it's still looking for LVM, fix the UUIDs there.
13. Check /boot/grub/grub.cfg and see it is correct.
14. Reboot
15. System doesn't boot
16. Boot back into live USB. Mount all the filesystems including /dev /sys and /proc
17. Run update-initramfs again and see the normal errors.
18. Reboot
19. Profit
20. Wipe old EFI partition to make sure the system doesn't try and boot the old drive.
21. Once all is confirmed, wipe the now-second drive and mount it somewhere for scratch space.
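The core of the steps above, sketched as commands. Device names and mount points are illustrative (I'm assuming /dev/nvme1n1 is the new drive), so double-check yours before running anything:

```shell
# Format the new partitions (step 2): EFI, root, swap.
mkfs.fat -F32 /dev/nvme1n1p1
mkfs.ext4 /dev/nvme1n1p2
mkswap /dev/nvme1n1p3

# Mount the new root and copy the running system over (step 3).
mount /dev/nvme1n1p2 /mnt/newroot
rsync -aHAX --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"} / /mnt/newroot

# ...after rebooting into the live USB and remounting everything,
# bind the virtual filesystems so GRUB tools work in the chroot (step 10).
mount --bind /dev  /mnt/newroot/dev
mount --bind /sys  /mnt/newroot/sys
mount --bind /proc /mnt/newroot/proc
chroot /mnt/newroot /bin/bash -c "update-initramfs -u -k all && update-grub"
```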

Windows, of course, would have been two steps:
1. Reinstall Windows on new drive.
2. Copy over any important files.
 
The only thing that turned into a wrinkle was my Kitty terminal launching slow as shit, I did a strace on it to see what's up and found that python tries to bytecode cache into the system folder which it does not have permissions to do. […] I ended up just redirecting it to a cache in the home directory instead of messing about with permissions but maybe giving permissions is a better solution?
Sounds weird. I have no idea about Kitty. Do you have it starting up with a session already starting a sudo'd shell or something maybe?
That was the only legit issue, but there was also having to add yourself to sudoers which Debian insists on not doing during install. I say legit, because I genuinely don't know if not doing this is a security measure or if the minimal installer is just broken.
Your first login user will be set up as a member of the sudo group if you don't set a root password during the install. Otherwise it sticks to the way it has always been.
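And if you already installed with a root password set, you can fix it after the fact (username below is a placeholder):

```shell
# As root: add an existing user to Debian's admin group.
usermod -aG sudo alice
# The group change only applies on the next login; verify with:
id -nG alice
```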
 
Sounds weird. I have no idea about Kitty. Do you have it starting up with a session already starting a sudo'd shell or something maybe?
I did check, but no. I also checked whether the shell (or really anything) was running as root or with elevated privileges, but it doesn't appear so unless I'm mistaken. Pointing it at a cache in home instead seems to have fixed the problem, though.

Your first login user will be set up as a member of the sudo group if you don't set a root password. Otherwise it sticks to the way it has always been.
That's most likely it.
 
Today's "It's easier with Linux". […]
Windows, of course, would have been two steps:
1. Reinstall Windows on new drive.
2. Copy over any important files.
You can do the same thing on Linux.

That's actually probably what I would have done lol. Just install whatever distro and copy the files.

I've done it before, actually. I've also done it with BSD.

To me, being able to do it the way I would have, but also having the option to go the route you did is the great thing about it. You have so much control, and so many options.
 
Windows, of course, would have been two steps:
1. Reinstall Windows on new drive.
2. Copy over any important files.
You can also image a Windows install to a different computer with a new drive, I have done it before.
It can get fucky if you do it with a machine with totally new hardware, but modern Windows is surprisingly resilient to hardware changes, and outside of something as drastic as going from Intel to AMD or vice versa it can at least get you into the OS to the point where you can install new drivers. You can always install some drivers manually beforehand to ensure it starts up on different hardware.
 
Today's "It's easier with Linux". […]
Windows, of course, would have been two steps:
1. Reinstall Windows on new drive.
2. Copy over any important files.
Couldn’t you have just made a backup of /home/, partitioned the drive with a copy of gparted on the Live USB and done a full reinstall with the GUI installer like a normal person?
 
Fresh format of Arch. Why the fuck is it such a headache to install ProtonVPN (WG) with a killswitch without it making NetworkManager sperg out about DNS resolution? Like fuck me, I just want a bare-bones nftables ruleset that's overridden by the VPN anyway because it'll always be on.

I just want to seed torrents with open ports without paranoia.

I don't know if I want to use Arch btw
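For what it's worth, the bare-bones ruleset I'd start from looks something like this. The interface name and port are assumptions (wg0, WireGuard's default 51820), so adjust to what ProtonVPN actually hands you:

```shell
# Hypothetical minimal killswitch: drop all outbound traffic except
# loopback, the WireGuard tunnel itself, and the encrypted handshake
# out to the VPN endpoint.
nft -f - <<'EOF'
table inet killswitch {
    chain output {
        type filter hook output priority 0; policy drop;
        oifname "lo" accept
        oifname "wg0" accept
        udp dport 51820 accept
        ct state established,related accept
    }
}
EOF
```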
 
Couldn’t you have just made a backup of /home/, partitioned the drive with a copy of gparted on the Live USB and done a full reinstall with the GUI installer like a normal person?
Sure, but it would have taken significantly longer, as that would pull packages from the network. I'd have to make a package list to make sure everything got reinstalled and go find all the other non-/home customizations. I think it was just the network and Munin setup, but I'd probably forget a bunch of other stuff.

It took me as long to type out the steps as to do them, especially as the initial rsync ran while I did other stuff and the follow-up only had to copy a few files.
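(If anyone does go the reinstall route, the package-list part is at least easy to script; this assumes apt and that you only care about manually-installed packages:)

```shell
# On the old system: record the packages you installed by hand.
apt-mark showmanual > packages.txt
# On the fresh install: replay the list.
xargs -a packages.txt sudo apt-get install -y
```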
You can also image a Windows install to a different computer with a new drive, I have done it before.
It can get fucky if you do it with a new machine with totally new hardware, but modern Windows is surprisingly resilient to hardware changes, and outside of something as drastic as going from Intel to AMD and vice versa can at least get you into the OS to the point where you can get new drivers. You can always install some drivers manually to ensure it starts up on different hardware.
I've got about a 20% success rate of doing that and Windows not blowing up. Most of the time I have to go in and run the startup fixes either in the GUI or command line. And then about 20% of the time that all fails too and I end up blowing it all away anyway.
 

Fun fact: the OpenBSD project abandoned sudo in favour of doas back in November 2015, with the release of OpenBSD 5.8. The biggest reasons they did so were perceived vulnerabilities in sudo and a desire to simplify the absolute mess that was /etc/sudoers. I love how Canonical Ltd is taking it upon themselves to reinvent sudo (but in Rust™️) instead of, oh I don’t know… ADOPTING A MATURE, WELL-ATTESTED PIECE OF SOFTWARE WRITTEN BY THE SAME PEOPLE WHO MADE OPENSSH & LIBRESSL POST-HEARTBLEED. I hope their Rust rewrite of sudo immolates itself upon first being distributed. Fucking Shuttleworth and his insistence on arbitrary, meaningless design choices for the sake of marketplace differentiation.
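And the simplification is real: a doas.conf covering what most people actually use sudo for is a couple of lines. (Group and user names below are placeholders; wheel is OpenBSD's admin group, Linux ports often use wheel or sudo.)

```shell
# /etc/doas.conf — permit wheel members to run commands as root,
# with 'persist' caching authentication for a few minutes like sudo does.
permit persist :wheel
# Optionally let a specific user run one specific command without a password:
permit nopass alice cmd reboot
```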
 
Does anyone here have any experience with "copying" a Gentoo installation from one machine to another, including to and from x86 and ARM architectures? Right now I have one "main" installation with all the packages I reasonably need, and I was thinking of compiling my own binaries to just port and install to my other computers, including a Pi 5 & Radxa Rock B+. AFAIK Gentoo does support ARM stuff, though I am not exactly sure if x86 packages will function on ARM.
 
Does anyone here have any experience with "copying" a Gentoo installation from one machine to another, including to and from x86 and ARM architectures? Right now I have one "main" installation with all the packages I reasonably need, and I was thinking of compiling my own binaries to just port and install to my other computers, including a Pi 5 & Radxa Rock B+. AFAIK Gentoo does support ARM stuff, though I am not exactly sure if x86 packages will function on ARM.
Learn crossdev: https://wiki.gentoo.org/wiki/Crossdev#Build_packages_with_crossdev

It's the Gentoo way to do Gentoo on potatoes.
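To be clear, x86 binaries won't run on ARM, so the packages do need to be rebuilt per target; the crossdev flow from that wiki page looks roughly like this (the target tuple is an example for 64-bit ARM):

```shell
# Install crossdev and build a cross toolchain for the target architecture.
emerge --ask sys-devel/crossdev
crossdev --target aarch64-unknown-linux-gnu
# crossdev installs a wrapped emerge for the target; it builds packages
# into the target's system root under /usr/aarch64-unknown-linux-gnu.
aarch64-unknown-linux-gnu-emerge --ask <some-package>
```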
 
New troon-coded Linux mascot just dropped (archived), this time for Ultramarine Linux:
[attached image: mizuki-1.webp]
 
They're the Microshit of open source.

Not quite. There is no one singular company in the world of FOSS that can easily claim "M$ of open source." Microsoft is unique in its total dominance of both consumer and enterprise computing markets. They might not be staring down the barrel of obliteration by antitrust litigation anymore, but they're still an effective monopoly in plain sight. In fact, probably more so nowadays than they were at the turn of the millennium. We're not just talking about mere operating systems, but also their presence in cloud infrastructure. It's no contest, there is no singular company that can ever match Microsoft. Multiple companies can vie for the title, but you'd have a minimum of two emperors: one of enterprises and one of home consumers.

I bring this up to say that in terms of enterprise Linux vendors, Canonical is nowhere close to being Microsoft. They're definitely a major player; I'll admit I'm unaware of their server or enterprise offerings at the moment, so I won't speak on that. On the other hand, Red Hat is a far closer equivalent to Microsoft than Canonical is... in the enterprise sector. Red Hat was the most successful of the various Linux companies from the 90s, they effectively pioneered a method to monetise open source software, and they set the stage for Linux more broadly to proliferate behind the scenes. Their sway in the enterprise Linux market is so ridiculous that Oracle, a direct competitor to Red Hat and even Microsoft in many areas, has its own RHEL clone complete with upgrade paths from CentOS 7.

In terms of home consumer vendors, Canonical absolutely does earn that title... with a couple of asterisks attached. In the past, Red Hat had the one Linux distro, from 1995 all the way through 2003. There is no "Red Hat Linux" after Red Hat Linux 9 launched in March 2003.* Instead, they launched Red Hat Enterprise Linux for their growing enterprise sector, while offering Fedora Core, which eventually morphed into Fedora. Fedora is quite literally a testbed for future releases of Red Hat Enterprise Linux. It's not quite rolling release like Arch or openSUSE Tumbleweed, but it's about as close as you can get while still maintaining a release cycle. You can't stick with a single Fedora version for much more than a year before you gotta upgrade. It's great for the "hobbyists" among us, but not so much for normies and the less autistic enthusiasts who are creatures of habit and prefer steadier releases. Then enters Ubuntu, stewarded by Canonical Ltd.

Canonical started off as the brainchild of South Africa's very own Mark Shuttleworth. He had a fairly modest career before he broke out into the world of IT and Linux in the 1990s. He founded a certificate authority in 1995 that got bought out by VeriSign in 1999 to the tune of $575 million (an eye-watering $1.1 billion adjusted for inflation as of May 2025). Flush with unfathomable amounts of cash, tons of ideas, variable amounts of motivation, a large amount of community support, and a propensity to rein in his ambition whenever he bit off more than he could chew, Shuttleworth wanted to get in on the burgeoning Linux sector, and Red Hat exiting the home market gave him a delightfully devilish idea on how to enter.

Red Hat is the corporation that ultimately provides for its communities: for Fedora, RHEL, and all of their clones alike. Shuttleworth did the opposite; he took Debian, a monumental distribution known for being one of the largest volunteer-run Linux distributions in the world, and repackaged it for home consumers, the same consumers who would have been Red Hat customers if Red Hat hadn't left the market. Sure, there were Linux companies like Xandros and Linspire (formerly Lindows) that tried to market Debian to the home consumer while never quite hitting the mark. If you ever spent an inordinate amount of time in the computer section at your local library and picked up one of those "Linux Made Easy" type books, Xandros was one of the distros on the CDs bundled with them.

Mark basically saw what tons of his contemporaries did and avoided their mistakes like the plague. Canonical is effectively a commercial body that provides enterprise-level support to home and enterprise customers alike. Canonical incentivises their users to track LTS releases, which got first 3, then 5, and now 10 years of official support. Sure, some of that is paywalled, but we really shouldn't look a gift horse in the mouth considering RHEL has a 10-year support cycle too, and that's limited to RHEL and its clones, while so much stuff for home consumers is entrenched in the Ubuntu space nowadays. Canonical seems fairly profitable on its own merits, but let's not forget that Shuttleworth wasn't above using his own personal wealth to fund the creation and distribution of install media during Ubuntu's infancy to spread the word quicker. He did a ton of stuff no other profit-motivated company would ever dream of doing because he was independently wealthy. We can't forget that.

Canonical, however, has such a lopsided grasp over the home consumer market to the point where there's an entire family tree of operating systems that stem from Ubuntu, which itself is one substantial branch of the much larger Debian family tree. Even if you detest Canonical's decisions or Shuttleworth's statements, you can't really escape Canonical since they're the ones maintaining the update repositories for derivatives like Linux Mint and Zorin OS. Sure, Clem and the rest of the Mint team have done remarkable work polishing up and refining Ubuntu over the 15+ years they've been tracking releases. They still can't stop Canonical from doing something huge like dropping 32-bit x86 support, nor is there any real incentive among the team to maintain a 32-bit fork of a now 64-bit only operating system.

* Fun fact: official support terminated in April 2004, and a project called Fedora Legacy provided community-maintained backports and patches for older Fedora Core versions and Red Hat Linux 9 until late 2006/early 2007.
 