The Linux Thread - The Autist's OS of Choice

I was fiddling with my nvidia drivers and they broke again in the past few days. Granted, I have a laptop use case where I want suspend (power management isn't an issue on a desktop) and dual monitors to work. I managed to fix everything by completely purging all the nvidia drivers (and I guess the CUDA I'd installed from nvidia's PPA earlier, which was probably what was fucking things up), then reinstalling the Ubuntu recommended drivers. For how unholy GNOME is with nvidia, the situation is probably worse without community-recommended drivers and package maintainers. Wayland is still completely borked for me (just a black screen on login).
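For anyone wanting to try the same nuke-and-pave, a rough sketch of what I mean on Ubuntu (the package globs are illustrative; check what's actually installed first):

    # purge everything nvidia, including any CUDA bits pulled in from NVIDIA's repo
    sudo apt purge 'nvidia-*' 'cuda*' 'libcudnn*'
    sudo apt autoremove
    # let Ubuntu pick its recommended driver again
    sudo ubuntu-drivers autoinstall
    sudo reboot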
 
  • Horrifying
Reactions: Dreamland
I was fiddling with my nvidia drivers and they broke again in the past few days. Granted, I have a laptop use case where I want suspend (power management isn't an issue on a desktop) and dual monitors to work. I managed to fix everything by completely purging all the nvidia drivers (and I guess the CUDA I'd installed from nvidia's PPA earlier, which was probably what was fucking things up), then reinstalling the Ubuntu recommended drivers. For how unholy GNOME is with nvidia, the situation is probably worse without community-recommended drivers and package maintainers. Wayland is still completely borked for me (just a black screen on login).
I generally keep pretty bare bones installations with few deviations from default install options, so I rarely have any trouble just nuking the partition and reinstalling for things like this, then spending a couple minutes changing the few things I bother with. Nvidia drivers are shit and I'm not going to spend a whole day twiddling with them. By comparison, the detection in the default install almost always hooks me up with the right stuff. Then adding CUDA is usually a snap, especially compared to the absolute nonsense you have to go through to install it in Windows.
 
I just used Timeshift to save a FUBAR system. Remember your backups, kiwis!
Why copy if you can remount? That's the whole point of partitioning after all.

Copying, while tedious, is better imho for minimalism and compatibility. If you only copy what you need, you skip a lot of what you don't, especially as you start to accumulate system-generated files for different DEs that are often useless and sometimes even harmful. It also maximizes the signal-to-noise ratio of your personal files, which lowers your cognitive load.
 
And, for better or worse, have to implement probably all of the VGA standard from 1987.
Yes, never said otherwise; you can even still get text mode on a modern graphics card. It's just a very non-optimal way of running them, and you'd go through a lot more layers than you would in something like DOS on period-appropriate hardware, which is surprisingly costly even for fast hardware. Linux even has a framebuffer you can read from and write to directly; it's a character device (e.g. cat /dev/urandom > /dev/fb* or cat /dev/fb* > /tmp/screenshot works just fine), just don't expect it to be fast, as it's an abstraction that does some strange things, at least on your average amd64 hardware.
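To make that concrete, a minimal sketch of poking the framebuffer, assuming /dev/fb0 and a 1920x1080 32bpp mode (check /sys/class/graphics/fb0/virtual_size and bits_per_pixel for your real values):

    # dump the visible framebuffer, trash the screen, then restore it
    cat /dev/fb0 > /tmp/screenshot.raw
    cat /dev/urandom > /dev/fb0 2>/dev/null   # errors out once the device is full
    cat /tmp/screenshot.raw > /dev/fb0
    # turn the raw dump into a PNG; pixel format and size are the assumed values above
    ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080 -i /tmp/screenshot.raw shot.png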

The A20 line is an interesting example of how the PC compatibles were basically a heap of kludges from the get-go. It's not an elegant platform, especially compared to what else was on the market around the time they showed up. Eventually, they were just the most bang for the buck, and at the end of the day that was all that really mattered. Even I dropped my Amiga for promises of SVGA (super quick via VESA local bus!) and a 32-bit 50 MHz CPU at an affordable (for the time) price.

FreeBSD got this right.
FreeBSD gets a lot right. I also read the devs made a conscious effort to remove Perl from its entire tooling. For that alone they deserve mad props. Perl makes me sick to my ass.

I just used Timeshift to save a FUBAR system. Remember your backups, kiwis!
btrfs is great. Usually I avoid overly complicated, fast-moving things like btrfs, and btrfs had a bad reputation it never quite managed to shake, but I love it, especially since its checksumming feature pointed me to a hardware problem I never would have noticed with a different filesystem, one that had the capability to silently corrupt tons of files. (Backups wouldn't have saved me here; the silent corruption would've just crept into my backups too.) I feel some of its bad rep is also undeserved, because my googling about "btrfs just ate all my files for no reason!" hinted strongly at btrfs just being the messenger for hardware that wasn't working correctly. A default ext4 shows no problems and lets your files silently die. Then come the weird, occasional problems. By the time you investigate because shit stops working, the damage usually is already done. More common than you think!
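If you'd rather have btrfs go looking for that rot instead of waiting to trip over it, a scrub re-reads everything against the checksums (the mount point is an example):

    btrfs scrub start /mnt/data       # verify all data and metadata checksums in the background
    btrfs scrub status /mnt/data      # progress, plus any checksum errors found
    btrfs device stats /mnt/data      # per-device read/write/csum error counters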

With Linux namespaces and something like btrfs you can basically fork your running system without big overhead and try something out in that new namespace snapshot while your system just chugs along (basically only the divergent data takes up HDD space, and there's barely any overhead to the namespaces). With mount namespaces alone you could also put all the changes transparently in a tmpfs in RAM and just dump it all (or parts of it) when whatever program was running is done, without ever writing anything to the drive. I've used this to e.g. build software with big and finicky lists of build-time dependencies, just to keep the binary package and discard the dependencies and all the other clutter without it polluting my proper file system. Too many package managers just don't care.

Of course namespaces are also cool in the context of security, and they're not hard to use at all; you don't need stuff like docker or whatever fancy tools might exist that do all this automatically. I often find such tools not only add to software bloat, but also to strange learning curves for some particular tool that might not be the hot thing anymore three years down the road because of some tranny meltdown, when all you really ever needed is a few sh scripts and the Linux kernel.
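A minimal sketch of both tricks with nothing but util-linux and the btrfs tools (run as root; the paths and subvolume layout are assumptions, adjust to your own setup):

    # CoW-fork the running system and play inside it while the host keeps going
    btrfs subvolume snapshot / /scratch
    unshare --mount --pid --fork chroot /scratch /bin/sh

    # or: keep all writes to /usr in RAM via a mount namespace plus overlayfs
    unshare --mount sh -c '
      mkdir -p /tmp/upper
      mount -t tmpfs tmpfs /tmp/upper
      mkdir /tmp/upper/u /tmp/upper/w
      mount -t overlay overlay -o lowerdir=/usr,upperdir=/tmp/upper/u,workdir=/tmp/upper/w /usr
      exec sh   # anything written under /usr now lives in tmpfs and dies with this shell
    '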
 
More FreeBSD sperging: I thought I'd have to go through a nightmare to copy my ROMs and ISOs over from my internal HDD (which is formatted as NTFS) to my FreeBSD installation (which is formatted as ZFS) but no! I'm able to mount, drag, and drop all my stuff through Dolphin without any issue! This is something that I couldn't even accomplish on my current Linux Mint installation. All it took was adding fusefs_load="YES" to /boot/loader.conf for it to function, which I took care of shortly after booting out of bsdinstall.
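For anyone following along, the rest of the NTFS-over-FUSE setup looks roughly like this (the device node is an example; yours will differ):

    pkg install fusefs-ntfs            # ntfs-3g for FreeBSD
    kldload fusefs                     # or fusefs_load="YES" in /boot/loader.conf, as above
    ntfs-3g /dev/ada1s1 /mnt/windows   # read/write mount via FUSE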

As far as my general thoughts on FreeBSD are concerned, I'll spoiler-tag them for anyone who cares enough to read them.

I've seen a lot of people compare FreeBSD to other DIY Linux distros like Arch, Gentoo, or even Slackware. However, I feel like those comparisons ultimately fall flat because there's always something that FreeBSD does better (or differently) that the aforementioned Linux distros simply can't or won't do. As far as the Arch comparison is concerned, the rolling-release nature of Arch means that even core system utilities like the kernel, glibc, gcc, coreutils, etc. get updated, which poses a fundamental risk to the stability of your current installation if you're not paying close attention. Gentoo's Portage does a lot of shit better than FreeBSD's ports does (especially as far as customisation options are concerned), but it suffers dramatically due to a general lack of prebuilt binaries outside of large packages like Firefox, LibreOffice, and so on (if I'm not mistaken). Also, like Arch, Gentoo's rolling-release nature means that all it'll take to bork your system is a core utility "upgrade" when you're not paying attention. Slackware is arguably the most "Unix-like" of all Linux distributions in that the core system utilities are well-tested and not going to be upgraded unless there's a genuine need to, but this comes at the cost of your applications being treated the same way. And whereas Gentoo and Arch have excellent package management options at their disposal, Slackware simply doesn't have any of that by design. Granted, slapt-get and SlackBuilds help mitigate this somewhat, but they're third-party additions and not something that comes with the base system.

FreeBSD, on the other hand, seems to occupy this happy medium where you trade off the bells, whistles, and chaos that Linux comes with for a remarkably stable platform with a surprising amount of options available to you. FreeBSD's releases are carefully engineered to not break anything between upgrades, which means that you don't run the risk of borking your system every time you run freebsd-update fetch and freebsd-update install. I personally haven't had the need to build anything from the ports tree because pkgng seems to take care of everything I need by default, but again: you still have the option to build things from source. As for the package manager itself, pkgng has come an incredibly long way since the 9.x days and is now just as robust as Linux equivalents like Pacman, APT, DNF, and so on. What I really love about pkgng is the fact that you have the option of either quarterly software updates or the latest software updates. I personally opted for the latest updates, so I modified /etc/pkg/FreeBSD.conf to reflect that. This effectively means that I'm running the latest versions of stuff like Firefox, LibreOffice, KDE, Citra, and so much more while also maintaining a stable base system that isn't liable to break on me! This is the same shit that distros like CentOS Stream, openSUSE Tumbleweed, Debian Testing, and so on try to do, but still end up failing to accomplish on some level, because there's always going to be a stray system utility that has to get upgraded alongside the rest of your installed applications.
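For reference, switching pkg from quarterly to latest can be done the way I describe above, or via the repo-override convention (run as root):

    mkdir -p /usr/local/etc/pkg/repos
    cat > /usr/local/etc/pkg/repos/FreeBSD.conf <<'EOF'
    FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest" }
    EOF
    pkg update -f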

As far as performance on the desktop is concerned, well... I'm not going to make any bones about it: FreeBSD requires a decent amount of work to get off the ground and running, plus there are going to be some hurdles that simply can't be overcome due to the fact that everything is becoming more Linux-centric rather than POSIX-compliant. Even with that in mind, FreeBSD has been able to elegantly handle everything I've thrown at it thus far. My emulators run without any issue whatsoever, just like they would on Linux Mint, and there are no noticeable hiccups in performance as I'm trying to play stuff on Citra or PPSSPP. My Xbox controller doesn't work, but my PS4 controller does without any hassle after enabling the proper drivers and making the appropriate edits to the relevant text files. Browsing the internet is even easier now that there are no Flash/Java plugins to fiddle around with, and my GPU functions just as well under FreeBSD as it does under Linux. Mounting my internal drives on FreeBSD to copy files over is painless, like I said earlier. I also haven't had any critical issues with my hardware, as it would seem that FreeBSD's adage of testing their shit before putting out a release applies here in full force. FreeBSD might be better suited for server applications, but holy fucking shit: it makes for one hell of a decent desktop/workstation system as well.
 
Is this the desktop thread?
That's a beautiful screenshot. Chicago really was the pinnacle of UI design. Everything since has been trash.
I see you are a man of culture.
the devs made a conscious effort to remove Perl from its entire tooling
They wanted to stop shipping an outdated Perl in the base system just to run generate_ascii_dongs.pl once during buildworld, so they rewrote it as generate_ascii_dongs.c and saved a bunch of bytes and build time. Good on them.

As for Perl the language, I don't hate it. Used it a lot in a previous job. It's a lot faster than Python. Most of "modern Perl" boils down to "don't use any of the following language misfeatures: [long list of shit]"
With linux namespaces and something like btrfs you can basically fork your running system without big overhead and try out something in that new namespace snapshot while your system just chugs along.
I wish apt did this by default. Snapshot the system, upgrade, reboot. Doesn't work? Reboot back into the snapshot and continue along.

Used to do this with RAID1. Break the mirror and upgrade one side of it.
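You can get most of the way there today with an apt hook, assuming a btrfs root and a /.snapshots subvolume (the hook filename and snapshot naming are made up for the example; snapper automates the same idea):

    cat > /etc/apt/apt.conf.d/80-pre-upgrade-snapshot <<'EOF'
    DPkg::Pre-Invoke { "btrfs subvolume snapshot -r / /.snapshots/apt-$(date +%s) || true"; };
    EOF

Booting back into a snapshot still takes a GRUB entry or a manual subvolume swap, which is the part apt really ought to handle for you.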
 
  • Agree
Reactions: IamnottheNSA
FreeBSD gets a lot right. I also read the devs made a conscious effort to remove Perl from its entire tooling. For that alone they deserve mad props. Perl makes me sick to my ass.
Other people know more about this OS than I do, but the main thing I ever used it for was to turn an old box into a router/firewall largely according to instructions on some pre-stackoverflow site, and it stayed up for years without ever having to be touched again.
 
  • Like
Reactions: Pushing Up Tiddies
I generally keep pretty bare bones installations with few deviations from default install options, so I rarely have any trouble just nuking the partition and reinstalling for things like this, then spending a couple minutes changing the few things I bother with. Nvidia drivers are shit and I'm not going to spend a whole day twiddling with them. By comparison, the detection in the default install almost always hooks me up with the right stuff. Then adding CUDA is usually a snap, especially compared to the absolute nonsense you have to go through to install it in Windows.
I use them because a second monitor and suspend just don't work with nouveau or intel drivers, due to some hardware quirk.
I added CUDA for some machine learning experiments, but tbh the speedup isn't that significant on a laptop-grade GPU. Installing CUDA was possible because Ubuntu 20.04 was one of their supported distros. Not sure how the process looks on Windows, but I don't like writing code on Windows.
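If you'd rather not touch NVIDIA's repo at all, Ubuntu also carries its own (older) toolkit package, which sidesteps the driver-clobbering problem from earlier in the thread:

    sudo apt install nvidia-cuda-toolkit
    nvcc --version   # sanity check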

I didn't read all the pages back, but to state the obvious: it's a lot easier to just make backups than to jump through the hoops of filesystem restoration and hard drive recovery (which I have had to do in the past because I was an idiot and skipped making backups). AFAIK you can make backups just by copying a bunch of files into a gzipped tar archive, though GNU tar doesn't really support incremental backups (it has some weird, broken option that is sorta like incremental backups), and I wouldn't trust myself to make incremental backups anyway.
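The tar route really is that simple, and for the record the weird option is --listed-incremental, which tracks changes in a snapshot file between runs (paths are examples):

    # full backup of /home
    tar -czpf /mnt/backup/home-full.tar.gz -C / home
    # GNU tar's incremental mode, for what it's worth
    tar -czpf /mnt/backup/home-incr.tar.gz -g /mnt/backup/home.snar -C / home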

I had an idea for a "poor man's NAS" (not sure if I mentioned this already) that is just a RasPi connected to a hard drive, so it's not a NAS other than literally being network-attached storage. If I ever became a true file hoarder, I would get a proper NAS and try to set up FreeNAS and RAID or something on it.
 
I had an idea for a "poor man's NAS" (not sure if I mentioned this already) that is just a RasPi connected to a hard drive, so it's not a NAS other than literally being network-attached storage. If I ever became a true file hoarder, I would get a proper NAS and try to set up FreeNAS and RAID or something on it.
You could probably get a box for 4-6 drives and put a bunch of cheap Western Digital drives in it then RAID it however you like. I'd even cheap out and get 5400 rpm instead of 7200 because they don't really have to be fast.
 
You could probably get a box for 4-6 drives and put a bunch of cheap Western Digital drives in it then RAID it however you like. I'd even cheap out and get 5400 rpm instead of 7200 because they don't really have to be fast.
Yeah, consumer-grade NASes I've seen are like $200-$300, as they are entire machines on their own. The good thing is storage keeps getting cheaper, and the current going rate on Amazon for Seagate IronWolf NAS drives is $100 per 4 TB (which a decade ago would've only gotten you like 0.5 TB). I guess a RasPi is not built to handle that kind of data I/O, but if I only do a backup every once in a while, the speed shouldn't matter?
I was actually inspired when I saw an 8-Bit Guy video on his networked storage. I mean, his data is more important because his video data is how he earns his livelihood, but I liked the concept of not lugging around an external drive.
 
Yeah, consumer-grade NASes I've seen are like $200-$300, as they are entire machines on their own. The good thing is storage keeps getting cheaper, and the current going rate on Amazon for Seagate IronWolf NAS drives is $100 per 4 TB (which a decade ago would've only gotten you like 0.5 TB). I guess a RasPi is not built to handle that kind of data I/O, but if I only do a backup every once in a while, the speed shouldn't matter?
I was actually inspired when I saw an 8-Bit Guy video on his networked storage. I mean, his data is more important because his video data is how he earns his livelihood, but I liked the concept of not lugging around an external drive.
One of my backup methods (the one that's not 20 drives I've accumulated over the years in a giant case) is a Pi with two 3.5" drives in USB cases. I back up to them (encrypted with VeraCrypt), then disconnect them, take them to my storage unit, and bring the two that were there back. Every so often, when I remember, I do the swap.
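The routine itself is only a couple of commands with VeraCrypt's CLI (device and paths are examples):

    veracrypt --text --mount /dev/sda1 /mnt/backup1   # prompts for the password
    rsync -a --delete /data/ /mnt/backup1/data/
    veracrypt --text --dismount /mnt/backup1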
 
  • Like
Reactions: awoo
One of my backup methods (the one that's not 20 drives I've accumulated over the years in a giant case) is a Pi with two 3.5" drives in USB cases. I back up to them (encrypted with VeraCrypt), then disconnect them, take them to my storage unit, and bring the two that were there back. Every so often, when I remember, I do the swap.
By storage unit do you mean outside of your home? At that point I would just use remote data storage (with encryption if you prefer).
 
By storage unit do you mean outside of your home? At that point I would just use remote data storage (with encryption if you prefer).
At the end of the day it's far cheaper. I'm already paying for a storage unit, so storing 8 TB costs me nothing incrementally, and the drives were already purchased. Even S3 Glacier would be $28/mo. Sure, Glacier Deep Archive would be even cheaper than that, but it's also even more of a pain to use. Since it's off-site, it's really only disaster storage, but if disaster did happen, it's convenient to have it all available immediately.
 
At the end of the day it's far cheaper. I'm already paying for a storage unit, so storing 8 TB costs me nothing incrementally, and the drives were already purchased. Even S3 Glacier would be $28/mo. Sure, Glacier Deep Archive would be even cheaper than that, but it's also even more of a pain to use. Since it's off-site, it's really only disaster storage, but if disaster did happen, it's convenient to have it all available immediately.
I wasn't talking about storage costs but the inconvenience of having to go out to a storage unit. But it makes sense in your situation if you already have them.
 
What's the primary cause of desktop Linux shitting itself and dying? Is it the desktop environment? Every distro I've tried eventually implodes except for anything based on XFCE (knock on wood).
 
What's the primary cause of desktop Linux shitting itself and dying? Is it the desktop environment? Every distro I've tried eventually implodes except for anything based on XFCE (knock on wood).
It could be a combination of things, really. For me the usual things that pound the sphincter of my installs tend to be either a really shit desktop environment (I stress again, for the love of anything pure in your life, DO NOT USE ENLIGHTENMENT WINDOW MANAGER OUTSIDE OF A VIRTUAL MACHINE), or, if your day is really bad, superblock errors. Though superblock errors usually just mean your hard drive is going bad and you should probably just replace it. Make sure your important data is backed up.
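If you're seeing superblock errors, it's worth ruling out the drive before blaming the filesystem; a quick triage, assuming smartmontools is installed and /dev/sda is the suspect disk:

    smartctl -a /dev/sda   # check reallocated/pending sector counts
    # from a live USB, with the filesystem unmounted:
    fsck -f /dev/sda2      # e2fsck can also try a backup superblock via -b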
 
  • Informative
Reactions: Bongocat
What's the primary cause of desktop Linux shitting itself and dying? Is it the desktop environment? Every distro I've tried eventually implodes except for anything based on XFCE (knock on wood).
What would count as an implosion? I've honestly never had issues with any distro that weren't my own doing. However, I'm a minimalism autist that never installed any DE heavier than XFCE or Cinnamon. There are a lot of moving parts in a regular desktop install of Linux, so it's hard to pinpoint anything without more details. Regressions in the kernel and drivers leading to worse performance are a thing I've experienced occasionally, which is why I keep a 4.* and a 5.* kernel around. Distros with a stable release model have a lot of upkeep, and some things might slip through. As far as I know, there are lots of custom builds, code patches, and upstream fixes package maintainers have to juggle for that stability. Video drivers and their interoperability with various kernel versions are another source of fuckery, especially with NVIDIA. None of these things is going to break a system beyond repair, though; everything is one rollback away from being fine.
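Keeping an older kernel around as a fallback is trivially cheap, which is why it's the first rollback to reach for (Debian/Ubuntu-flavoured sketch; the version string is a placeholder):

    sudo apt install linux-image-5.4.0-90-generic   # an older, known-good kernel
    # pick it under "Advanced options" in the GRUB menu at boot, or one-off:
    sudo grub-reboot 1   # menu index of the fallback entry; check your grub.cfg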
 