The Linux Thread - The Autist's OS of Choice

Today I attempted to enable Secure Boot on my laptop... I managed to wipe my boot partition and (most importantly) my TPM keys. My laptop wouldn't boot until I gave the magic combo.

In the end it worked out better than expected; I figured out how to set up SHIM. Good to learn something new!
Breaking your system (or having it randomly break on an update) will always teach you something.

Trial by fire.
 
I always just gave the -j flag the number of threads on the CPU +1. Now, with something like a 32 core Threadripper I'm not sure if j=65 would even show a benefit.
Even if there are enough files before it has to do some linking, you're likely to hit I/O or other limits too. And as I recall, the parallelism from Make is pretty dumb; I think CMake projects can be smarter. Obviously the way to find out is to just recompile with higher and higher parallelism and see what the graph of compile time looks like.
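If anyone actually wants to do that sweep, here's a rough sketch in Python. It assumes a project with a working 'clean' target, and the job counts are just placeholders to tweak for your core count:

    import subprocess, time

    # Rebuild from clean at increasing -j values and print the wall time for each.
    for jobs in (1, 2, 4, 8, 16, 32, 64):
        subprocess.run(["make", "clean"], check=True, stdout=subprocess.DEVNULL)
        start = time.monotonic()
        subprocess.run(["make", f"-j{jobs}"], check=True, stdout=subprocess.DEVNULL)
        print(f"-j{jobs}: {time.monotonic() - start:.1f} s")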
 
Even if there are enough files before it has to do some linking, you're likely to hit I/O or other limits too. And as I recall, the parallelism from Make is pretty dumb; I think CMake projects can be smarter. Obviously the way to find out is to just recompile with higher and higher parallelism and see what the graph of compile time looks like.
I've got to admit, I haven't recompiled a kernel since about 2007 or so, and it wasn't that necessary by that point. It wasn't like specifically building for an Athlon 64 actually gave you noticeably better performance over generic 686.

It seems crazy to hear about it taking hours to compile at this point, even though they keep dropping support for (potentially still very useful) older hardware (though I'm glad the 486 is still supported).

Is there any sort of Gentoo option where the kernel source (and other 'big' stuff) gets decompressed to a RAM disk before compilation? It might not make sense to spin up a new 1 GB RAM disk for compilation if things are already decompressed, but it seems like a no-brainer if you're doing a straight source build from a raw bz2 archive.
 
I always just gave the -j flag the number of threads on the CPU +1. Now, with something like a 32 core Threadripper I'm not sure if j=65 would even show a benefit.
Yeah, that's a good general guideline. I also add -l [number of logical cores], which seems to work well at keeping background compile jobs from being noticeable. There's also still the option of giving the compile job a lower priority to make sure interactive tasks come out on top, but to be entirely honest I haven't really felt a serious impact on interactivity from system load in a while. Since autogrouping it kinda doesn't seem to be a thing anymore.
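For the lower-priority bit, a minimal sketch of what I mean, assuming you kick the build off from a small Python wrapper so the whole job tree inherits the niceness (targets and paths are whatever your project uses):

    import os, subprocess

    jobs = os.cpu_count() or 1   # logical cores
    os.nice(19)                  # drop to lowest priority; make and the compilers inherit it
    # -j caps the number of parallel jobs, -l additionally backs off when load exceeds the core count
    subprocess.run(["make", f"-j{jobs}", f"-l{jobs}"], check=True)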

There's an inherent point of diminishing returns with parallelizing tasks like this, especially since some things have to happen in a specific sequence. It then doesn't matter if you have 2 cores or 200 cores: if A has to happen before B and a single core is slow at processing A, then 199 cores will just have to wait. Also, each additional compile job takes some resources (especially RAM), and depending on what the machine is doing otherwise, that can actually make things slower overhead-wise, especially since Linux still doesn't handle low-memory situations gracefully, although there are some out-of-tree kernel patches that improve that a bit. (I'm not talking about OOM situations; those are still downright catastrophic for the average Linux system without outside help from daemons.)
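That's basically Amdahl's law. As a rough back-of-the-envelope, with p the fraction of the build that actually parallelizes and n the number of jobs, the best-case speedup is

    S(n) = \frac{1}{(1 - p) + p/n}

so even at p = 0.95, going from 16 to 64 jobs only takes you from roughly 9x to 15x, and that's before RAM and I/O pressure eat into it.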

I gotta admit I didn't watch the video, and maybe I'm just misunderstanding things, but compiling a Linux kernel for a given machine with just the options that machine needs doesn't and shouldn't take hours. On a reasonably modern processor I'd talk about minutes. (Out of curiosity, I recompiled my kernel from scratch in the background on my six-core mid-range Zen 2 system while typing this post; it took 4 minutes and 33 seconds complete with building the initramfs image, and the majority of that time seems to have gone to compiling the amdgpu stuff.)

The slowest system I have an active Gentoo install on is an Allwinner A20 (2x Cortex-A7) with 2 GB of RAM. Speed-wise, if I had to wager a very wild guess, it's somewhere in the area of a Pentium 3, maybe - a comparison of actual, non-synthetic performance across architectures is hard to make. Perfectly feasible, although of course it doesn't build packages like Firefox, nor does it nominally have high CPU demands. I don't bother with distcc or building binary packages on a faster system (although that's possible across architectures); it really does everything by itself. Whether that actually makes sense, well...

The Linux kernel build process usually uses /tmp for temporary files, and there's nothing wrong with putting /tmp on a tmpfs, since there's no inherent guarantee for files in /tmp and they don't need to survive a reboot. A program that relies on anything in /tmp not being, well, temporary is broken.
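Quick way to check what /tmp is actually mounted as, for anyone curious (plain Python, just reads /proc/mounts):

    # Print the filesystem type backing /tmp (e.g. "tmpfs").
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype = line.split()[:3]
            if mountpoint == "/tmp":
                print(f"/tmp is mounted as {fstype} (from {device})")
                break
        else:
            print("/tmp has no mount of its own, so it lives on the root filesystem")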

TPM is such a cool concept in theory (as long as it isn't buggy or exploitable); there are so many things you can do with it beyond encrypting the HDD. TPM 2.0 is what Win11 insists on the system having, and IIRC that's what all the drama about "Win11 compatibility" really was about (or at least having a TPM at all? Actually not sure). Also so many possibilities to lock yourself out of things forever. Truly the forbidden fruit. (Also, yeah, security-wise it might be backdoored, but so might be the rest of your system, so whatever really.)
 
Sub-5 minutes to compile a kernel sounds about right. When building a kernel, some people forget to turn on the parallel build flag (-j), so it just goes with one thread. At most, on relatively modern hardware, that should be 30 minutes or so max. When I started using source-based Linux, I used a 1 GHz Duron and 500 MB of RAM. Even on that system it didn't take hours to build a kernel. It did take over a day to build OpenOffice though; that was always the worst thing to see pop up with an emerge sync.
 
Sub-5 minutes to compile a kernel sounds about right. When building a kernel, some people forget to turn on the parallel build flag (-j), so it just goes with one thread. At most, on relatively modern hardware, that should be 30 minutes or so max. When I started using source-based Linux, I used a 1 GHz Duron and 500 MB of RAM. Even on that system it didn't take hours to build a kernel. It did take over a day to build OpenOffice though; that was always the worst thing to see pop up with an emerge sync.
Running make or ninja with -j slashes the time for a localmodconfig build plus some virtualization modules (ideal for custom kernels), though these days most of the time is spent on the AMDGPU driver. As for LibreOffice, the build takes a lot of fricking time doing I/O (easily 10k XSLT files to be transformed).
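For reference, a rough sketch of that localmodconfig flow as a script. It assumes you're sitting in a kernel source tree with the modules you care about (virtualization included) currently loaded, since localmodconfig builds the config from lsmod; the yes pipe just accepts defaults for any new symbols it asks about:

    import os, subprocess

    jobs = os.cpu_count() or 1
    # Trim the config down to the currently loaded modules, accepting defaults for prompts.
    subprocess.run("yes '' | make localmodconfig", shell=True, check=True)
    subprocess.run(["make", f"-j{jobs}"], check=True)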
 
Seethe.jpeg
 
At most, on relatively modern hardware, that should be 30 minutes or so max.
Tried it out of curiosity: 25 mins, 58 secs on a single thread - which interestingly shows that the scaling across all physical cores is fairly linear (4.5 min * 6 = 27 min, close to what I measured). Using all logical cores (hyperthreading) apparently doesn't improve things much, though.

The kernel has grown an awful lot in the last ~20 years, and the default configuration doesn't just cover all the sensible defaults but also pulls in some really obscure shit. I haven't compiled a default kernel in a long time, but I can imagine it could take a while - though yeah, not hours, not on a modern system, unless maybe you really run it on only one weak core. My kernel configuration is very light and really only pulls in things the system actually has or that I need.
 
I might be out of the loop, but didn’t Canonical remove the Amazon stuff from Ubuntu?
That Amazon icon is a .desktop file which opens an Amazon page in Firefox, nothing more. I haven't touched the thing in years, but IIRC the only shady thing they do in a desktop install is optional telemetry, though in Debian it's off by default and they ask you at install time.
 
Urgh. Gotta love mystery hardware problems. Work laptop on Manjaro suddenly decided my Thunderbolt port shouldn't work anymore. The TB3 hub in my work setup doesn't work, DisplayPort alt mode seems not to work either, but the port works fine as a standard USB-C port. Absolutely nothing relevant in journalctl or dmesg.

No recent package changes; I tried several different kernels, including one patched for a PCIe bug someone on a mailing list was having that caused the same symptoms on their machine. Such bullshit.
 
I swear gaming on Linux is such a mixed bag. I've been on my Windows partition a lot now since a few games that I play regularly decided to update around the same time. Hopefully a new Proton update will fix the issues. Also, it baffles me how shit the Linux version of Left 4 Dead 2 is compared to Windows. Mostly minor things, but performance is somehow worse. Minor issues include:
Radial menu is wonky
Unable to select sprays or move through folders in-game (this might be a permissions issue and less the game itself)
Addons menu has a visual bug
Transparency for sprays in-game doesn't work

Was just getting comfy with Debian too.
 
I swear gaming on Linux is such a mixed bag. I've been on my Windows partition a lot now since a few games that I play regularly decided to update around the same time. Hopefully a new Proton update will fix the issues. Also, it baffles me how shit the Linux version of Left 4 Dead 2 is compared to Windows. Mostly minor things, but performance is somehow worse. Minor issues include:
Radial menu is wonky
Unable to select sprays or move through folders in-game (this might be a permissions issue and less the game itself)
Addons menu has a visual bug
Transparency for sprays in-game doesn't work

Was just getting comfy with Debian too.
Transparency is a general Source problem on Linux; it happens in other Source games too.
 
Work laptop on Manjaro suddenly decided my Thunderbolt port shouldn't work anymore.
A lot of these fast ports actually sit on PCIe, so they need the appropriate bus support compiled into the kernel. Others deactivate (parts of) these ports for power saving through custom GPIO interfaces. I'd google the exact laptop and see if it's a common problem with that model. A hint that this could be the culprit would be the port working directly after a cold boot and then disappearing after sleep. (S3 can be tricky with Linux because the firmware of such devices does stuff the kernel has no idea about.)

If that is a possible cause, unbinding the drivers before sleep and rebinding at wake up could/should solve the issue.
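Something along these lines, as a very rough sketch; the PCI address and driver name are placeholders (lspci -k shows the real ones), it needs root, and in practice you'd call it from a systemd sleep hook rather than by hand:

    # Unbind a PCI device from its driver before suspend and rebind it on resume, via sysfs.
    PCI_ADDR = "0000:05:00.0"   # placeholder: your controller's address from lspci
    DRIVER = "thunderbolt"      # placeholder: the driver shown by lspci -k

    def unbind():
        with open(f"/sys/bus/pci/drivers/{DRIVER}/unbind", "w") as f:
            f.write(PCI_ADDR)

    def rebind():
        with open(f"/sys/bus/pci/drivers/{DRIVER}/bind", "w") as f:
            f.write(PCI_ADDR)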
 
Yeah, that's the route I was looking into with the kernel patch. It doesn't help that the hub itself is the only Thunderbolt device I have access to, and it's a notorious piece of shit (Dell TB16) with all sorts of issues, but I'm really thinking it's gotta be something with the port itself and possibly power management, since the little C-to-HDMI/A dongle I had lying around detects things plugged into the A port but not a monitor on the HDMI one. I'm going to dig around in /sys today and see if I can force the power on somehow.
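For the record, this is the kind of thing I mean by forcing the power on: flipping runtime PM to "on" so the device never autosuspends. The sysfs path below is a placeholder, I still have to hunt down the real one for the port/controller, and it needs root:

    # Disable runtime autosuspend for one device by writing "on" to its power/control node.
    DEVICE = "/sys/bus/usb/devices/3-1"   # placeholder path, not my actual hardware
    with open(f"{DEVICE}/power/control", "w") as f:
        f.write("on")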

E: This is infuriating. I can't force power on it, and furthermore a coworker loaned me a DisplayLink dock to see if that'd work. I know DisplayLink is a crapshoot at best, but xrandr doesn't even detect the output, even though it's one of the few DL dock models known to work semi-reliably on Linux. What the fuck? It seems like this laptop has decided that the internal display is the only one that exists, period.
 