The Linux Thread - The Autist's OS of Choice

Why does a display manager not manage displays, but shows login screens?
For the display manager thing: to me it's because you manage which display server session is going to be launched. At least that's been my interpretation.
This is because of the superior, network-transparent nature of X Windows over the shitty wayland.

With X, GUI applications act as clients to an X server. In the typical home user environment, that X11 server is running on the same machine as the client applications. A user who is running X locally can either:
  1. use a script like 'startx', usually from an existing logged-in console session, to start up the X server and (typically) a window manager/desktop environment, or perhaps a single arbitrary app like an xterm without using a window manager, or
  2. use a display manager which runs a local X server and handles authenticating the user to log in
Now, X natively supports more complex scenarios, because it's well-designed. For example, if you run graphical applications under Windows Subsystem for Linux and have them displayed on your Windows desktop via the vcxsrv or XMing X servers, they are technically running as clients 'across the network', from the 'Linux' IP on the virtual network to the 'Windows' IP on that same network, with scripts on the Linux side setting the 'DISPLAY' and other variables.
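A sketch of what those scripts do on the Linux side (the nameserver trick is WSL2-specific, and assumes vcxsrv/XMing is already running and accepting connections on the Windows side):

```shell
# In a WSL2 shell: the Windows host's IP shows up as the nameserver
# in /etc/resolv.conf, so point DISPLAY at an X server running there.
WIN_IP=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
export DISPLAY="${WIN_IP}:0"   # display :0 on the Windows-side X server
xeyes &                        # this client now draws 'across the network'
```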

In a more manual scenario, let's say there's a configuration GUI that you want to use to configure a PC on your network, but that PC doesn't have a monitor or even a display card. Well, set up your locally running X server to allow connections from the remote machine, sort out any firewall rules to allow the incoming traffic, then connect to the remote machine via SSH or telnet and set the DISPLAY variable to point at your locally running X server, and you can launch that application and have it display on your local X server.
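Roughly like this (a sketch with made-up IPs; note that modern Xorg usually starts with -nolisten tcp, so you may have to re-enable TCP listening first):

```shell
# On your workstation (the box with the monitor, say 192.168.1.5),
# allow the headless box (say 192.168.1.20) to connect. xhost-based
# access control is crude and insecure, but it's the classic way:
xhost +192.168.1.20

# Then, inside an ssh/telnet session ON the headless box:
export DISPLAY=192.168.1.5:0   # your workstation's X server, display 0
some-config-gui &              # window pops up on your workstation
```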

What about a more complex scenario than those, where rather than having a virtual network on a single computer like the WSL case, or a static setup like the manual configuration to run a single app over the network, you have a bunch of thin client 'X terminals' on a corporate or educational network which you want to be able to login to one or more big (or not) servers which will run the actual applications? Well, proper display managers like XDM or GDM can have something called XDMCP turned on.
  1. The display manager then acts as an XDMCP server
  2. A thin client on the network, or a regular computer, can then either be configured to connect to that XDMCP server, or do a broadcast to find any available XDMCP servers (perhaps you're doing some load balancing)
  3. If the XDMCP client is authorized to connect by the XDMCP server, the XDMCP server can then serve up the login interface and everything that eventuates from logging into the system (window managers, desktop environments, other applications, screensavers, etc.) to the X server running on the XDMCP client.
In such a situation, an XDMCP server/display manager is usually 'managing' what is presented to many 'displays'/thin clients/X terminals across a network.
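Concretely, with GDM the server side is a two-line config change (a sketch; the file path varies by distro, the hostname is made up, and you still need UDP port 177 open through any firewall):

```shell
# /etc/gdm/custom.conf on the login server:
#   [xdmcp]
#   Enable=true

# On the thin client, ask a specific XDMCP server for a login screen:
X :0 -query bigserver.example.com
# ...or broadcast to find any willing XDMCP server on the LAN
# (this is where the load-balancing scenario comes in):
X :0 -broadcast
```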

Admittedly, this is not at all secure if there are malicious actors (like employees or students) on your network in a position to sniff traffic (so use a switch instead of a hub, duh), let alone over the internet.
 

A vtuber, and a bluefin guy talking about how they are making the true Linux for niggers.

To me it's kind of crazy that people think a normal Linux system with access to the whole system is that hard to use, if you know enough to set things up. In my experience things don't tend to just break unless you break them.

Doing everything yourself isn't a noob kind of task, but fortunately the other linux for niggers distros will do a lot of the work for you. And still let you have a normal file structure.

As for package management: it's not like Linux does it in a super unique way. Apple lets you use a package manager, and so does Windows. Most people just don't know about it or don't use it.
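For instance (package names here are just examples; Homebrew is third-party on macOS, while winget ships with current Windows 10/11):

```shell
# macOS, via Homebrew:
brew install wget

# Windows, via winget:
winget install 7zip.7zip

# A Debian-family Linux, for comparison:
sudo apt install wget
```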
 
I agree that Linux is a lot easier to use than is generally rumored. I also think the main thing that keeps windows on people’s computers besides inertia is backwards compatibility. MS puts a lot of effort into making sure that .exe files from 2004 run properly on the latest Win11 build in 2024. People have a lot of random programs that they have accumulated over the years that they rely on for their workflow.

Maybe if .exe files were elevated from second-class-citizen status on Linux, this could start to actually move the needle. People do not want to worry about explicitly using WINE, installing winetricks, etc. You might already be able to ship a distribution with this fairly easily using binfmt shenanigans.
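The binfmt_misc registration for this is genuinely small; the kernel docs even use WINE as their example. A sketch, assuming WINE lives at /usr/bin/wine (format is :name:type:offset:magic:mask:interpreter:flags, and PE executables start with the bytes "MZ"):

```shell
# One-off, as root:
echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register

# Or persistently, picked up by systemd-binfmt.service at boot:
echo ':DOSWin:M::MZ::/usr/bin/wine:' > /etc/binfmt.d/wine.conf
```

After that, ./whatever-from-2004.exe launches through WINE transparently, no explicit `wine` prefix needed.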
 
Does anybody know anything about WinBTRFS? I'm planning on making the full move from dualbooting Windows and Linux to Linux some time before Win 10 support ends and I heard that you can format a drive to be BTRFS on Linux and have Windows read it.

I'm just interested because there aren't really any good alternatives to sharing a Steam game drive between the two OSes.
 
I agree that Linux is a lot easier to use than is generally rumored. I also think the main thing that keeps windows on people’s computers besides inertia is backwards compatibility.
The simple fact is that people will use whatever you put in front of them. If you preinstall a browser, they're going to use that browser. If it uses Google to search by default, they're gonna use Google to search. If a laptop comes with Windows, then you can bet that people are going to use Windows.
MS puts a lot of effort into making sure that .exe files from 2004 run properly on the latest Win11 build in 2024. People have a lot of random programs that they have accumulated over the years that they rely on for their workflow.
And yet Microsoft sees fit to do shit like change the context menu to show you fewer items for no good fucking reason.
 
The simple fact is that people will use whatever you put in front of them. If you preinstall a browser, they're going to use that browser. If it uses Google to search by default, they're gonna use Google to search. If a laptop comes with Windows, then you can bet that people are going to use Windows.
How many years have Linux cultists been coping about the lack of desktop Linux adoption with this argument? Must be nearing 30 by now
 
How many years have Linux cultists been coping about the lack of desktop Linux adoption with this argument? Must be nearing 30 by now
It's not just Linux, my guy. It's a lot of things. Firefox gets a lot of money from Google. Why? Because Google is the default search engine on Firefox.
 
I notice that Gentoo permits a broad range of behaviour with regards to USE flags. Curious what approaches KF Gentoo heads have to USE flags.

I've been doing the naive approach where any USE flag that more than one package uses goes in my make.conf USE, and packages that break get flags disabled on a package-by-package basis in package.use. I think this is causing long dependency resolutions, but NBD.
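That split looks something like this (flag and package names are just examples, not recommendations):

```shell
# /etc/portage/make.conf — global defaults for every package:
USE="X pulseaudio -systemd"

# /etc/portage/package.use/media — per-package overrides, which win
# over make.conf when a flag breaks one specific package:
media-video/ffmpeg -pulseaudio vaapi
```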
 
Using an OpenRC system for the first time, and this is shit. Why do people like this‽ SystemD's management of programs is much better. Centralization is superior, even if it's *supposedly* not as efficient (never heard a convincing argument for this).
This, as in what? If you don't like OpenRC, there's runit (my pick) or s6, or SysVinit if you want that for some reason. Good luck switching if you don't like how SystemD does some shit and you're already stuck in a distro with Poettering lock-in. Truly superior centralization. You also have it backwards, I'd say: efficiency is one of the main reasons people give for centralizing everything. Rarely a good thing.
 
I used to use runit with Gentoo, then I left temporarily for Alpine and became acquainted with busybox's version of sysvinit, and that was also good enough for me tbh. What's the problem with people who can't into AUTOEXEC.BAT? I never, ever felt, as a normal user, that init somehow wasn't complicated enough and needed things like dependency resolution. They all kinda solve problems I personally never had. If your daemon loses its shit because the network connection isn't up yet, then fix it. The init system isn't the right place to solve this.
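For scale: an entire runit service definition is one shell script (mydaemon is hypothetical). There's no dependency graph; if the network isn't up yet, the daemon retries, or exits and lets runsv restart it:

```shell
#!/bin/sh
# /etc/sv/mydaemon/run — runit runs this, supervises it, and
# restarts it if it dies. That's the whole service.
exec 2>&1                  # fold stderr into the svlogd pipe
exec mydaemon --foreground # must NOT daemonize itself
```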
 
I notice that Gentoo permits a broad range of behaviour with regards to USE flags. Curious what approaches KF Gentoo heads have to USE flags.

I've been doing the naive approach where any USE flag that more than one package uses goes in my make.conf USE, and packages that break get flags disabled on a package-by-package basis in package.use. I think this is causing long dependency resolutions, but NBD.
The way I've done things with customizing USE flags is to start with the normal OpenRC desktop profile. In the beginning I added things I knew I wanted or didn't want, like X or -systemd. Then over time I added a few more things globally in make.conf as I realized I wanted them.

Another thing: when installing new packages or updating my system, I look at the output and see if there are USE flags enabled/disabled that I might want to change. I run equery u packagename, see if I find anything there, then add those in the package.use directory.
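That workflow in command form (the vlc package and flags are just examples; equery comes from app-portage/gentoolkit):

```shell
# Show every USE flag a package knows about, with descriptions
# and whether it's currently enabled:
equery uses media-video/vlc

# Pin the flags you decided on, for that one package only:
echo 'media-video/vlc x264 -qt6' >> /etc/portage/package.use/vlc
```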

I've been doing some stuff lately, mostly around trying to optimize my binaries more for performance, now that I feel I have a decent enough grasp on USE flags. That seems like the next step to really getting a nicely customized system. So I've changed -O2 to -O3, built gcc with graphite capabilities, and added a few flags to use that when compiling. (Doing this will cause some failed builds, so having a way to override these flags per-package when needed is necessary.)
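The make.conf for that setup looks roughly like this (a sketch, not a recommendation; the graphite flags only work once gcc itself has been rebuilt with USE=graphite, and -O3/LTO will break the occasional package as noted):

```shell
# /etc/portage/make.conf
COMMON_FLAGS="-march=native -O3 -pipe -flto -fgraphite-identity -floop-nest-optimize"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
```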

I've also disabled multilib, since I really don't need it for what I do. I tried the no-multilib profile and wasn't happy with it, because I want the stuff from the desktop profile. So now I've opted to use the amd64 OpenRC desktop profile and change the configuration to disable multilib, while keeping the other stuff.

The reason for disabling it is that it cuts compile time down a good bit. Take gcc: I don't remember the exact time, but I want to say after changing things around it's about 30 minutes to compile. Or the kernel, which IIRC now takes about 22 minutes. And this is on an 8-thread laptop processor.

Oh, and idk if it's the profile or what, but it seems like after changing to the nomultilib one, it won't use binpkgs at all. I haven't looked into whether the profile is the reason or whether it's something else I did that's stopping me emerging binaries. If it is the profile, that's another reason changing the desktop profile is better than going with the actual nomultilib profile.
 
But it seems like after changing to the nomultilib one. It won't use binpkgs at all.
The main thing that changing a profile does is changing the default USE flags. It shouldn't stop you using binpkgs entirely, I think it's something else.

So I've changed O2 to O3
Don't do this. It's not worth it. -O3 includes optimizations that might make things worse, or may be outright broken. Of all the thousands of packages on your system it's going to break at least one, and ruin your day sooner or later.
 
-O3 isn't as broken as it once was, but there's little point and it can actually end up being slower sometimes. If you've really gotta go fast and don't mind package breakage, you go with -Ofast now. On constrained systems, -march and -mtune together with -O2 will be the stuff that does the most; I'm playing around with ARMs right now and notice that with their small Cortex CPUs. On non-constrained systems you'll probably struggle to feel a difference, though. On some platforms that time has forgotten, -Os is actually what you want to pick.

Now, experimenting with which individual packages can be built with PGO, LTO, and graphite, that's where the master level of the Gentoo ricer is, and that can be done with portage env. Sometimes it also makes sense to override your system-wide CFLAGS for specific packages with unsafe optimization flags like -ffast-math (which is included in -Ofast) if you know it doesn't break the package (C:DDA for example profits greatly from -ffast-math on ancient Cortex cores IIRC). You see, that's why I like playing with these small Cortexes, always fun to get the most out of them. What's the point in doing all that on some Ryzen? Shit just works.

You can set other individual build variables for individual packages via portage/env. That's useful for example if you put PORTAGE_TMPDIR into a tmpfs for build-time speed gains but need to override that for some big packages that don't fit into your RAM + compile-time overhead.
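Both tricks use the same mechanism (package and file names here are made up; the pattern is env snippets in /etc/portage/env/ mapped to packages in /etc/portage/package.env):

```shell
# /etc/portage/env/no-lto.conf — strip LTO back out for fragile packages:
CFLAGS="${CFLAGS} -fno-lto"
CXXFLAGS="${CXXFLAGS} -fno-lto"

# /etc/portage/env/disk-tmpdir.conf — build on disk instead of the
# global tmpfs, for packages too big for RAM + compile-time overhead:
PORTAGE_TMPDIR="/var/tmp/portage-disk"

# /etc/portage/package.env — map packages to those env files:
www-client/firefox no-lto.conf disk-tmpdir.conf
```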

I actually enjoyed the impossible lightness of Alpine a lot more than I enjoy Gentoo after coming back to it, but eventually you'll run into something Alpine doesn't have and will have to make your own build (if not just wildly compiling into some directory, which we all did at some point, so don't look at me like that), and Gentoo ebuilds are more effortless. Whether they're more effortless because I've used them for over a decade (EDIT: actually it's two decades now. Mamma mia!) at this point, or because they're better than what Alpine has, I couldn't tell you. Either way, once you start building packages, you end up with all the build-time dependencies on your system anyway. What does it really matter if it's some container or your actual running OS? Not like it changes anything.
 
To the one above me: I am also messing with graphite and other LTO stuff. IDK how much of it I will keep; for now I'm just experimenting. I can't say I have seen any packages break from it, but at the same time it's also hard to tell whether the packages have improved either. It is tempting to take one of my installs, do emerge -e --keep-going, and see where that ends up. The flags I've messed with that involve graphite are pretty minimal so far. I will probably change them over time and see what actually might make any difference.

The main drawback I have actually seen is that some packages will just fail to build with LTO. With -O3, at least nowadays, if something fails to build it will usually just force -O2. For either -O3 or LTO, when I run into packages that don't build, I just disable those flags for that package.

The main thing that changing a profile does is changing the default USE flags. It shouldn't stop you using binpkgs entirely, I think it's something else.


Don't do this. It's not worth it. -O3 includes optimizations that might make things worse, or may be outright broken. Of all the thousands of packages on your system it's going to break at least one, and ruin your day sooner or later.
Yeah, I know the profile shouldn't disable binpkgs, at least as far as I'm aware, but so far that has been the only difference between my two installs: one I changed to nomultilib, the other I altered the desktop profile. I'm sure eventually I will figure out what's going on with that. So far I haven't seen anything that really sticks out, though.
 
Does anybody know anything about WinBTRFS? I'm planning on making the full move from dualbooting Windows and Linux to Linux some time before Win 10 support ends and I heard that you can format a drive to be BTRFS on Linux and have Windows read it.

I'm just interested because there aren't really any good alternatives to sharing a Steam game drive between the two OSes.
It's shit. Really unstable: explorer.exe will crash, and now you gotta boot into Linux and run repairs on it (otherwise it's okay). If you use it just for vidya or something and keep explorer away from it, I think it could work. Sometimes random bonus FS corruption issues will just kill your install if you use it as a primary system disk (don't).
 
Well, I managed to make it a full year without distro jumping, LMDE, still just werks.
Kind of helps that every other distro looks to be in an embarrassing state these days, I would probably be coping on LTSC still if Mint didn't exist.
 
Well, I managed to make it a full year without distro jumping, LMDE, still just werks.
Kind of helps that every other distro looks to be in an embarrassing state these days, I would probably be coping on LTSC still if Mint didn't exist.
LMDE is the best. The only flaw I could find is that the Debian kernel doesn't have the latest drivers for things like the Intel Arc GPUs, but that won't be a dealbreaker for most people.
 
Does anybody know anything about WinBTRFS? I'm planning on making the full move from dualbooting Windows and Linux to Linux some time before Win 10 support ends and I heard that you can format a drive to be BTRFS on Linux and have Windows read it.

I'm just interested because there aren't really any good alternatives to sharing a Steam game drive between the two OSes.

BTRFS is shit, so I can only imagine the windows driver is worse
 