The Linux Thread - The Autist's OS of Choice

On the surface, I can understand why they're doing this. The trend in computing now, sadly, is pivoting toward smartphones and tablets. A lot of people don't even own a home PC or laptop anymore. The problems are:
  1. Sort your existing problems out first. The desktop still needs a lot of work, and this mobile push spreads the already limited pool of Linux UI devs even thinner. The inevitable result is consolidation, with the desktop getting the hamburger-menu and flat-garbage treatment.

  2. Linux proper isn't welcome on mobile. The chinkshit features, the constant blackjack-and-hookers treadmill of iterating and obsoleting, proprietary binary-blob drivers for every component, locked bootloaders, etc. make even installing it a pain in the ass. There's a reason most smartphone and tablet models are stuck on outdated kernels even with Android.

Linux peeps: Windows is bloated shit with stupid design decisions

Also Linux peeps: Let's do what Windows is doing but made of duct tape.

I'm trying to get away from Windows because of this tabletization shit.
Linux userspace devs sometimes remind me of autistics desperately trying to fit in. They hear all the cool kids talking about how great this new design is and the next day they bring their own mock-up of that to show off. Only it's barely holding shape and a Cambodian sweatshop kid could've done a better job, so they get laughed at.
 
If you need a good "basic" distro that's rock solid, I'd try out Xubuntu. It's fairly primitive but man, I don't think I've ever had a single thing go wrong (knock on wood again).
I've never had any problem with Mint, although it's admittedly pretty bloated. It's probably not suited to keeping an end-of-life computer going. I've had good luck with Peppermint OS on older machines; there are even lighter distros, but it has a reasonable interface that would be relatively familiar to Windows users.

It was the one thing that worked reasonably well on my old Compaq CQ60 which was a piece of shit even when I got it originally.
 
What about Puppy Linux? It's apparently something that can be loaded into RAM and run from there. Or it could be, at one point; I haven't kept up lately.
 
What about Puppy Linux? It's apparently something that can be loaded into RAM and run from there. Or it could be, at one point; I haven't kept up lately.
I didn't really like the interface. It's one of the recommended ones for really, really old computers, though. If I had something from 2000 I wanted to keep using, I'd probably start with it.

Anyway it installed just fine and worked.
 
My only Linux experience is using an Ubuntu USB to rescue files from a self-destruct episode of the Windows 10 PC I set up for the boomers in the family.

Which distro is right for me?
Mint Cinnamon or XFCE is what I usually recommend and I know actual boomers who use those with no issue. Mint isn't lightweight relative to other Linux distros but I doubt you care about that.
I like XFCE, so I tend to use Xubuntu when I want a lighter distro that works well, but I've never used it as my daily driver; it's just been on laptops, used as needed.

Ubuntu may be worth exploring, but the UI is initially very different from what you're used to, as you probably saw. Beyond that it's fine.

People hate on Manjaro, and it can have its issues, but it's probably the easiest Arch-based distro to use and has a lot of desktop environments to choose from. I know the official KDE and XFCE releases are intuitive for people used to Windows.

Personally, I find KDE stuff to be too much. It can look awesome and they have a lot of great in-house tools, but I've had my share of issues with KDE-oriented distros over the years, so I avoid it now. KDE Plasma is good for showing normies what Linux can do graphically, but I don't care about that because I don't use most of those features anyway.

It's easy to move to another distro if you find your starting distro to be lacking so don't feel locked in after install.
Make a few virtual machines to explore with.
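If you've got libvirt and virt-manager installed, something like this boots an ISO in a throwaway VM. The ISO path, VM name, and sizes here are just placeholders, so adjust them to whatever you downloaded:

    # 4 GB RAM, 2 cores, 20 GB disk, booting straight from the ISO
    virt-install --name distro-test --memory 4096 --vcpus 2 \
        --cdrom ~/Downloads/some-distro.iso \
        --disk size=20 --os-variant generic

VirtualBox works just as well if you'd rather click through a GUI.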


What about Puppy Linux? It's apparently something that can be loaded into RAM and run from there. Or it could be, at one point; I haven't kept up lately.
Replying since there are people looking for distros that read the thread.

I've played with recent versions of Puppy Linux, mostly Fossapup and BionicPup, and they can be run from memory easily on old hardware, but they have their own quirks to be aware of for anything beyond clicking around. Not great for someone new to Linux who is trying to do practical learning, but for anyone else it'd probably be fine. I think Puppy Linux is the most lightweight distro I've played with that was actually useful to someone who just wants to do basic stuff.
It can be installed to disk too but you have to do the partitioning yourself if I remember right.
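If anyone wants to try it, the usual route is writing the ISO straight to a USB stick and booting from that. Rough sketch; the device and ISO names are placeholders, and dd will happily eat the wrong disk, so double-check with lsblk first:

    # replace /dev/sdX with your USB stick and the ISO with whatever Puppy build you grabbed
    sudo dd if=fossapup64.iso of=/dev/sdX bs=4M status=progress oflag=sync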
 
Anyone migrated to CentOS Stream yet and have a trip report? It's looking like either that for us or bite the bullet and buy RHEL.
Nothing I run is mission critical by any means, but I have had no issues with CentOS Stream. I would hold off on 9 for a while if you need EPEL packages, though. I figure that if CentOS Stream is good enough for CERN, it's good enough for me. I don't see the situation with Alma and Rocky, two separate orgs duplicating work with less backing than the original CentOS effort, as being long-term sustainable.
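For reference, getting EPEL onto Stream is only a couple of commands once the packages you need have actually landed there. From memory, on Stream 9 (check the EPEL docs if this errors out):

    # CRB needs to be enabled for a lot of EPEL dependencies
    sudo dnf config-manager --set-enabled crb
    sudo dnf install epel-release epel-next-release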
 
Why does anybody use Arch, btw? I never hear them doing anything but complaining about how much their computer doesn't work, followed by how proud they are of fixing it themselves.
Because troons are masochistic narcissists and Arch is a perfect fit for that nature.

Also, thankfully, KDE 5 "Plasma" isn't as bloated as it used to be. GNOME takes that spot now, being the worthless monopolising cunts that they are (they can also quit assuming Adwaita with default settings is the only theme that exists).

Xfce just tends to work and the developers update things as they need to without changing too much. Their philosophy is above and beyond chadly.
 
I haven't seen a tablet UI on the desktop since Windows 8. I thought they learned their lesson on that.
Well, take a look at GNOME from version 40 onwards. It looks like a tablet UI, and both Fedora and Ubuntu are designed with that ... thing in mind. KDE was my DE of choice mostly because it didn't look like a Mac or tablet UI.
I understand that I can use some extensions to fix the behaviour, but it's so tedious, especially now that most fixes don't work.
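At least the extension juggling can be scripted once you know the UUIDs. Something like this, where the UUID is just a made-up example of the format (grab the real one from extensions.gnome.org):

    gnome-extensions list
    gnome-extensions enable some-panel-tweak@example.org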
Linux peeps: Windows is bloated shit with stupid design decisions

Also Linux peeps: Let's do what Windows is doing but made of duct tape.

I'm trying to get away from Windows because of this tabletization shit.
My thoughts exactly. It reminds me a bit of the tale of the fox and the grapes: trying hard to become Windows while mocking it, and following all its bad examples.
Because troons are masochistic narcissists and Arch is a perfect fit for that nature
not only is that a good description, it also ties in perfectly with a certain fake-suicidal, narcissistic LGBT furry emulator developer
 
Nothing I run is mission critical by any means, but I have had no issues with CentOS Stream. I would hold off on 9 for a while if you need EPEL packages, though. I figure that if CentOS Stream is good enough for CERN, it's good enough for me. I don't see the situation with Alma and Rocky, two separate orgs duplicating work with less backing than the original CentOS effort, as being long-term sustainable.

A lot of enterprise IT departments have a policy of testing the bejeebers out of any OS upgrade before deploying it. But this is the scale where if you get burned once by an OS bug that takes out the system for a couple days, you burn millions and millions of dollars. Stream's release model is a dealbreaker for them. Realistically, it's highly unlikely there will be any problems, but that's not good enough for some of these types. My guess is that only one of Rocky or Alma will end up being the heir to CentOS, but for now, it looks to be a somewhat even split.
 
Why does anybody use Arch, btw? I never hear them doing anything but complaining about how much their computer doesn't work, followed by how proud they are of fixing it themselves.
Using Arch will teach you a lot about Linux and how computers work. Some people want to learn this; others just want their computer to work. If you just want your computer to work, it's a waste of time. I'm somewhere in between: I've worked with a bit of Linux syscall stuff for fun, but I can't find the willpower to deal with hardware. If you like to learn, you have a frustrating but richly rewarding path ahead of you. There is a tremendous amount of knowledge on the ArchWiki. And this 1,500-page book.

(image: The Linux Programming Interface book cover)
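If you want a taste of that territory without writing any C, running strace on some random command shows the syscalls happening underneath, which is roughly what the book walks through in detail:

    # trace only a few syscalls so the output stays readable
    strace -e trace=openat,read,write cat /etc/os-release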
 
A lot of enterprise IT departments have a policy of testing the bejeebers out of any OS upgrade before deploying it. But this is the scale where if you get burned once by an OS bug that takes out the system for a couple days, you burn millions and millions of dollars. Stream's release model is a dealbreaker for them. Realistically, it's highly unlikely there will be any problems, but that's not good enough for some of these types. My guess is that only one of Rocky or Alma will end up being the heir to CentOS, but for now, it looks to be a somewhat even split.
If you get burned by a CentOS bug right now you have to wait for a fix to make it through RHEL and then through a separate release process into CentOS. Given the RHEL gating it shouldn't happen often, but when it does, the value of an SLA will become clear. That's why I don't have a whole lot of sympathy for supposed enterprise users who were relying on a free and community-supported distro for mission-critical applications. And I will have still less sympathy when they switch to Rocky or Alma and then have to scramble again when those distros are merged or discontinued.
 
Using Arch will teach you a lot about Linux and how computers work. Some people want to learn this; others just want their computer to work. If you just want your computer to work, it's a waste of time. I'm somewhere in between: I've worked with a bit of Linux syscall stuff for fun, but I can't find the willpower to deal with hardware. If you like to learn, you have a frustrating but richly rewarding path ahead of you. There is a tremendous amount of knowledge on the ArchWiki. And this 1,500-page book.

(image: The Linux Programming Interface book cover)
I'm in between as well. I always like to better understand the systems I'm using, but there's no way I'm derailing a work project for it. Also thank you for the book recommendation. I'll check into that.
 
Also, for anyone with a decent amount of RAM and more than one GPU (laptops included if they have switchable graphics): if you're feeling daring, it's worth looking into libvirt and qemu with GPU passthrough if you hate Windows but need to run Windows stuff with hardware acceleration. It takes maybe an hour or two to set up, plus maybe a couple more to get passthrough going, but it's very satisfying having all the productivity apps and Steam/Epic/itch confined to their own tiny Windows installs that I just fire up when I need them. Performance is really good too; guides saying you only take a 3-4% GPU perf hit seem to be pretty dead on. The whole setup in general feels like the first time in years I've been able to do everything I need on one machine.
So I can run Photoshop, Paint.NET, Sony Vegas, old obscure games and software, etc. on an emulator with minimal performance loss? Please link a guide on the subject. This could be what I need to yeet my Windows partition into oblivion forever.
 
If you get burned by a CentOS bug right now you have to wait for a fix to make it through RHEL and then through a separate release process into CentOS.

Since pre-Stream CentOS trails RHEL, the issue (it isn't always a bug!) is usually identified in RHEL before CentOS, which enables these users to simply skip that release and not even bother running it through their own regression tests.

The users we are talking about run every single software update through a battery of their own in-house regression tests before adoption. This policy exists because they've gotten burned to the tune of millions and millions of dollars by updates before, and they do everything they can to prevent that from happening again. Since you can't spend 3 months testing each update of a continuous-update model, if any operating system or application adopts that model, they either find a replacement, or make the supplier adopt a model that fits their IT policy.

Given the RHEL gating it shouldn't happen often, but when it does, the value of an SLA will become clear. That's why I don't have a whole lot of sympathy for supposed enterprise users who were relying on a free and community-supported distro for mission-critical applications. And I will have still less sympathy when they switch to Rocky or Alma and then have to scramble again when those distros are merged or discontinued.

Nobody needs your sympathy. They need a performant enterprise Linux that isn't on a continuous update model. As of now, CentOS doesn't fit the bill, so as these users are looking to upgrade from CentOS 7, they're looking at RHEL, Alma, Rocky, SLES, and openSUSE as potential replacements.
 
So I can run Photoshop, Paint.NET, Sony Vegas, old obscure games and software, etc. on an emulator with minimal performance loss? Please link a guide on the subject. This could be what I need to yeet my Windows partition into oblivion forever.

Here's one that covers the basics. From what I've seen, the only visible difference between setting it up on different distros is the names of the qemu packages. Googling '<distro> kvm gpu passthrough' will turn up a ton of guides for particular distros.
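For example, the usual starting point is just pulling in qemu, libvirt, and the UEFI firmware. Package names below are from memory and may have drifted, so double-check against your distro's wiki:

    # Debian/Ubuntu
    sudo apt install qemu-system-x86 libvirt-daemon-system virt-manager ovmf
    # Arch
    sudo pacman -S qemu-desktop libvirt virt-manager edk2-ovmf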

Two things to note:

First, Nvidia used to disallow gpu passthrough but now they do allow it so a lot of guides will have hokey shit involving hiding the virtual machine status from the gpu. You can ignore these. Also, literally everything that can go wrong with gpu passthrough will show up in the vm as driver error 43. This is the same error Nvidia cards would throw before gpu passthrough was allowed when they detected a vm, so don't do what I did and go apeshit trying to figure out why the hiding wasn't working. The solution is probably removing the software video device from the vm so the only gpu it can see is the passthrough one.

Second, a lot of guides have a ton of shit for remapping devices when qemu starts so you can do shit like share a single gpu laptop's built in display with the vm. Don't bother with any of this unless you absolutely have to.

Once you've got Windows installed, look up Looking Glass, which does nearly lagless frame copying from the vm to Linux and lets you interact with the vm in a window as if it were remote desktop, with near-zero overhead.

Also it's probably worth grabbing one of these if you're on a laptop or don't have a dedicated second monitor for the vm, since you do need to have some type of output connected for the vm to boot once it's on the passthrough.
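One sanity check worth doing before any of this: make sure your IOMMU groups are sane and the gpu isn't lumped in with half the motherboard. This is roughly the loop every guide uses (it assumes IOMMU is already enabled in your kernel parameters):

    #!/bin/bash
    # print every PCI device grouped by IOMMU group
    shopt -s nullglob
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
        done
    done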


E: The one downside I've seen to this setup is that mapping storage can be obnoxious. Letter-mapped samba shares will cause problems with things that hold file handles open (e.g. any game running on the Unity engine, Substance Painter, some VSTs, etc.). Save yourself the hassle and pass through an external USB drive if you want a place to install a billion games.
 
Don't worry about GPU passthrough yet; first check whether anything you need will actually run in Wine. https://appdb.winehq.org/ will give you an idea.
Games have a big community of people working on compatibility, such as https://lutris.net/ and the winetricks stuff. But if you really need those programs, I wouldn't give up the Windows partition yet.
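If you do go the Wine route, per-app prefixes keep things from stomping on each other. A rough sketch; the paths and installer name here are made up:

    # create a fresh 64-bit prefix, then run the app's installer inside it
    WINEPREFIX=~/prefixes/someapp WINEARCH=win64 winecfg
    WINEPREFIX=~/prefixes/someapp wine ~/Downloads/someapp-setup.exe
    # winetricks can pull in common runtimes if the app needs them
    WINEPREFIX=~/prefixes/someapp winetricks corefonts vcrun2019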
 
Definitely look into Wine if it's just two or three apps you need. Bonus is that you can usually just copy your wineprefix to other machines and installed apps will just work, even with shit like the Adobe suite that's autistic about licensing.
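The copy itself is nothing fancy: rsync the prefix directory over and point WINEPREFIX at it on the other machine (the path here is made up):

    # -a preserves symlinks, permissions, and timestamps; the prefix is just a directory tree
    rsync -a ~/prefixes/adobe/ otherbox:~/prefixes/adobe/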

There's also the option of just straight-up converting your Windows partition to your VM, which is what I did. Grab the Macrium Reflect trial, pass the HDD with the Windows partition and an empty virtual disk (or external drive, or wherever your Windows install is gonna live) to the VM, then boot it with the Macrium ISO. Clone it across to the new drive, do 'redeploy to new hardware' in Macrium to strip out all of the non-stock drivers and hardware-tied registry keys. Done.

I should probably throw together another GitHub guide on this because a lot of the ones I've looked at don't point out gotchas or small problems.
 
What's the point of having Windows in a VM? If you're running Windows you might as well have it on a partition and skip any overhead.
 
What's the point of having Windows in a VM? If you're running Windows you might as well have it on a partition and skip any overhead.
I have a Windows 7 VM that I seldom use, mostly for helping others or when I need to run something Windows-only real quick. For frequent use it'd be too cumbersome and retarded, though.
 