The Linux Thread - The Autist's OS of Choice

Ok, so, Nvidia drivers have bricked my main system
The system is completely bricked? As in, irrecoverable? If not, I've had good experiences with kmod versions of NVIDIA drivers. The difference between this and a regular driver package is there's a stub compiled as a kernel module that then pulls in all the proprietary shit. Lessens the chances of a breakage somewhat.
 
Ok, so, Nvidia drivers have bricked my main system and since there's jackshit I can do about it other than crying I've decided to answer @Seasoned News Reporter regarding my opinions on IceWM as I needed to re-install my test server and I didn't feel like installing Fluxbox or Openbox:
I have my doubts it's "bricked" and unrecoverable. It's a pain in the ass, but get a live ISO and put it on a USB. Boot it, mount your actual install partitions to folders, then chroot into the root one. Roll back to an Nvidia driver that actually works using your package manager, or just shift to Nouveau. I can't really comment on anything in particular beyond that, since I don't know what distro you're using or how it's set up. You could also try setting nomodeset on the kernel command line during boot to see if it gets you into your install.
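For reference, the live-USB chroot dance looks roughly like this. The partition names below are placeholders, not your actual layout - check with lsblk first:

```shell
# Sketch of the live-USB recovery procedure described above.
# /dev/nvme0n1p2 (root) and /dev/nvme0n1p1 (ESP) are placeholder
# partitions -- substitute your own (see `lsblk -f`).
sudo mount /dev/nvme0n1p2 /mnt
sudo mount /dev/nvme0n1p1 /mnt/boot/efi    # only if booting UEFI
for d in dev proc sys; do sudo mount --rbind /$d /mnt/$d; done
sudo chroot /mnt /bin/bash

# Inside the chroot, roll the driver back or drop to Nouveau, e.g.:
#   apt purge '*nvidia*'
#   apt install xserver-xorg-video-nouveau
# then exit the chroot, unmount, and reboot.
```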
 
What is your GPU? If you have an ancient nVidia GPU the standard nvidia-driver packages won't work.
The system is completely bricked? As in, irrecoverable? If not, I've had good experiences with kmod versions of NVIDIA drivers. The difference between this and a regular driver package is there's a stub compiled as a kernel module that then pulls in all the proprietary shit. Lessens the chances of a breakage somewhat.
I have my doubts it's "bricked" and unrecoverable. It's a pain in the ass, but get a live ISO and put it on a USB. Boot it, mount your actual install partitions to folders, then chroot into the root one. Roll back to an Nvidia driver that actually works using your package manager, or just shift to Nouveau. I can't really comment on anything in particular beyond that, since I don't know what distro you're using or how it's set up. You could also try setting nomodeset on the kernel command line during boot to see if it gets you into your install.

I appreciate you guys immediately jumping in trying to fix this (believe me, I really do), but I don't think there's a whole lot that can be done: it's partially a hardware problem, the package manager is stuck in a loop (meaning I cannot remove or purge packages), and while I believe I know what the problem is, fixing it would defeat what I'm trying to accomplish here. I will place this wall of text in a spoiler because it's a long read.

Ok, here is the whole scenario:
I use Devuan with runit and I wanna do a virtualized gaming environment. For this I have a motherboard with 4 M.2 slots, an AMD 5950X and two GPUs, an RTX 2060 Super and a 2 GB GTX 1050, with the 2060 serving as the pass-through card and the 1050 for general use on the main non-virtualized system.

The board comes with two x16 slots where I had originally placed the cards, and the system was working fine at first, but after testing my virtual environment I came to the conclusion that something was wrong at the hardware level: all games and other apps were crashing even though my passthrough config was working fine (hell, both virtualized Windows and Linux recognized and installed the drivers for the RTX 2060 without issues). I read my motherboard manual and it claimed that placing two or more M.2 or mSATA disks on the board would disable the second x16 slot, which I believe was the cause of this issue. However, the manual also said that two x1 PCIe slots could be used if the motherboard was set to CPU mode instead of chipset mode, so I bought a mining riser for the GTX 1050, as I didn't really care about the performance loss on the non-virtualized environment.

I tested the riser in the BIOS and found it working without issues, so I decided to make a clean install on my main partition with the GTX 1050 on the riser and the 2060 in the available x16 slot. Everything was going peachy for a while: I was allowed into the system with Nouveau, I installed the firmware for other hardware parts and rebooted the computer twice, but when I installed the Nvidia drivers and rebooted, the system got stuck at the second stage of runit while trying to initialize the Nvidia persistence daemon. I tried to solve this by going into recovery mode and removing the persistence daemon (I believe the package was nvidia-persistenced or something?), and that allowed me into the system again, but instead of the Nvidia drivers it loaded Nouveau again. So I installed the package again and rebooted, only for the same thing to happen.

At this point, with runit stuck trying to start the daemon, I went into recovery mode again and this time purged everything Nvidia-related. I then tried to re-install the drivers, but when I did so it once again started the Nvidia persistence daemon (this time inside apt), so after waiting about 15 minutes I forcefully shut down the PC. When trying to purge the drivers again I was told apt had been interrupted and therefore couldn't do anything until the problem with the last installation was fixed. I tried running the usual apt repair commands, but all of them led to apt trying to start the persistence daemon and getting stuck, so until the daemon issue is resolved (which will obviously not happen) I cannot use apt for anything and I cannot boot into the system. Joy.
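For the record, the usual way out of an interrupted dpkg run with a hanging maintainer script is to neutralize the script from a chroot before letting dpkg finish. The package name below is a guess - list the actual scripts first:

```shell
# Sketch of breaking the apt loop from a live-USB chroot. The hang is a
# maintainer script trying to start the persistence daemon, so replace
# it with a no-op before resuming dpkg.

# See which nvidia packages actually have maintainer scripts:
ls /var/lib/dpkg/info/ | grep nvidia

# Overwrite the offending postinst with a no-op (nvidia-persistenced is
# an assumption here -- use whatever the listing above shows):
printf '#!/bin/sh\nexit 0\n' > /var/lib/dpkg/info/nvidia-persistenced.postinst

# Now dpkg can finish the interrupted run and apt works again:
dpkg --configure -a
apt purge '*nvidia*'
```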

What I believe is causing the issue:
Nvidia's drivers are throwing a hissy fit at having to run the GTX 1050 at x1, probably because I'm using a mining riser and Nvidia's anti-mining bullshit is kicking in, running the card at a speed way lower than it should, or some other nonsense. I could try installing an older version of the driver, but I would need to use apt or dpkg, which I cannot, and more importantly doing so has nuked my X session in the past, so it's a gamble whether that would even work.

The possible solution that I have in mind:
I believe the issue can be fixed if I remove the RTX 2060, place the GTX 1050 in the x16 slot, set the motherboard to chipset mode and start the system as normal, but that would defeat what I'm trying to accomplish here, as I need two graphics cards in the system. So I'm just gonna save some cash and buy a cheap used RX 550 or something; honestly, I'm tired of having to deal with Nvidia's bullshit on Linux, so regardless of whether this has a solution it's something I will end up doing at some point anyway. After I get that stupid graphics card I will place the GTX 1050 in the x16 slot, boot normally, nuke everything Nvidia-related and do everything on the non-virtualized system with the AMD card.

Alternatively, I could do the same but remove the Nvidia drivers and work with the Nouveau drivers for a while, but honestly I think I would rather use my test server as my main PC for a while, because I hate the fucking Nouveau drivers.

TL;DR: Nvidia sucks dicks and I don't wanna bother with them anymore. I don't even care how much better their performance is than AMD's; the headaches they have caused me over the years aren't worth it.
 
but when I installed the Nvidia drivers and rebooted, the system got stuck at the second stage of runit while trying to initialize the Nvidia persistence daemon
Is nvidia-persistenced part of the GPU passthrough scheme or something? Judging by its own description, it might be getting confused with both cards enabled:

Whenever the NVIDIA device resources are no longer in use, the NVIDIA kernel driver will tear down the device state. Normally, this is the intended behavior of the device driver, but for some applications, the latencies incurred by repetitive device initialization can significantly impact performance.

When persistence mode is enabled, the daemon holds the NVIDIA character device files open, preventing the NVIDIA kernel driver from tearing down device state when no other process is using the device. This utility does not actually use any device resources itself - it will simply sleep while maintaining a reference to the NVIDIA device state.
Granted, I don't have a double GPU setup, but I don't even have it installed as a package. Have you tried booting without it/uninstalling it outside of an X session on a TTY?
 
Is nvidia-persistenced part of the GPU passthrough scheme or something? Judging by its own description, it might be getting confused with both cards enabled:
Granted, I don't have a double GPU setup, but I don't even have it installed as a package.
Whatever is going on is a completely unrelated issue; as I mentioned, I made a clean install on my main partition, meaning I had yet to get to the part of doing the passthrough and other things, and was just installing firmware and configuring the basics of the system. As a matter of fact, I did a clean install because I knew at least two things would throw a hissy fit at boot after the hardware changes, and I didn't want to deal with them: one of them being GRUB (which I had edited prior to the clean install and which is required for the passthrough) and the other fstab (which would've likely stopped working because I moved the disks on the motherboard as per the specifications in the instruction manual).

When you have two Nvidia cards and you install the drivers, they generate an xorg.conf file. If the second card has a display connected, it gets added into this config file with its respective display, but if it doesn't, the file just contains the basic info of the device being loaded into the system, and there really aren't a whole lot of ways to interact with it. I remember the last time I managed to get a boot with both cards at once (when they were both in the x16 slots), nvidia-settings didn't even report anything on the card I was gonna use for passthrough other than temperature and fan speed.
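For what it's worth, you can also stop X from touching the second card at all by pinning the display card's BusID in a minimal config fragment. The BusID and filename below are placeholders - take the real bus address from lspci (X wants it in decimal):

```shell
# Minimal xorg.conf fragment pinning X to a single card. The BusID is a
# placeholder -- get yours with `lspci | grep VGA` and convert any hex
# bus number to decimal.
cat > /etc/X11/xorg.conf.d/10-gtx1050.conf <<'EOF'
Section "Device"
    Identifier "GTX1050"
    Driver     "nvidia"
    BusID      "PCI:5:0:0"
EndSection
EOF
```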
Have you tried booting without it/uninstalling it outside of an X session on a TTY?
Yeah, and that's what led to the package manager being stuck in a loop; when starting up in recovery mode, Devuan only offers a root terminal for maintenance. I guess I could start with an external bootable drive, but my magic crystal ball is telling me that even if I remove the packages related to the Nvidia drivers and install them again, the same thing would happen, as it already has twice.
 
I'm not entirely sure what's going on on that system (nvidia persistence daemon? I don't even want to wager a guess), but you should bind one card to the pci-stub/vfio-pci driver via the kernel command line, to make sure nothing touches it, the drivers don't get confused, and the card doesn't get initialized before the virtual environment can boot it. In my experience (which is AMD and a few years old by now - I used to have a setup with a headless Linux installation passing its only graphics card through to a Windows VM, optionally "going headless" when that was about to happen; yes, when you script it carefully you can hand one card back and forth between the main system and the VM), once a card is initialized and bound to a driver it's an absolute coin toss (or used to be) whether you can unbind and initialize it again without the entire computer just crashing.

Now mind you, if the UEFI firmware decides to use the card you *don't* want Linux to use for itself, it'll be more complicated than that, because of handing off an "already running" card from the firmware to the kernel.

My personal advice is, with a card and computer that beefy, just play everything in Wine; on average it'll be less painful.
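The early binding works roughly like this - the PCI IDs below are examples only, not the actual IDs of these cards:

```shell
# Find the vendor:device IDs of the passthrough card. Its HDMI audio
# function must usually go to vfio-pci as well:
lspci -nn | grep -i nvidia
# e.g. "01:00.0 VGA compatible controller [0300]: NVIDIA ... [10de:1f06]"

# Then claim it at boot, before any GPU driver loads, by adding the IDs
# to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub (example IDs):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt vfio-pci.ids=10de:1f06,10de:10f9"
# and regenerating the GRUB config:
update-grub
```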
 
Do there happen to be any release notes?

Edit: found it https://discourse.ubuntu.com/t/jammy-jellyfish-release-notes/24668
I'm surprised no one mentioned this considering how much disdain there is towards Snaps. Looks like Canonical is forcing it down people's throats now.

 
I'm not entirely sure what's going on on that system (nvidia persistence daemon? I don't even want to wager a guess), but you should bind one card to the pci-stub/vfio-pci driver via the kernel command line, to make sure nothing touches it, the drivers don't get confused, and the card doesn't get initialized before the virtual environment can boot it. In my experience (which is AMD and a few years old by now - I used to have a setup with a headless Linux installation passing its only graphics card through to a Windows VM, optionally "going headless" when that was about to happen; yes, when you script it carefully you can hand one card back and forth between the main system and the VM), once a card is initialized and bound to a driver it's an absolute coin toss (or used to be) whether you can unbind and initialize it again without the entire computer just crashing.

Now mind you, if the UEFI firmware decides to use the card you *don't* want Linux to use for itself, it'll be more complicated than that, because of handing off an "already running" card from the firmware to the kernel.
As I mentioned, I had yet to get to the part of passing through a GPU, but now that you mention it, maybe I should. I remember seeing, in stage 1 of runit, Nvidia trying to load the drivers twice, and after extracting the dmesg log from the disk with a bootable drive, it seems like that's the case.
[ 1.005202] Freeing unused kernel image (text/rodata gap) memory: 2040K
[ 1.005394] Freeing unused kernel image (rodata/data gap) memory: 624K
[ 1.026502] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 1.026505] Run /init as init process
[ 1.026506] with arguments:
[ 1.026506] /init
[ 1.026506] with environment:
[ 1.026507] HOME=/
[ 1.026507] TERM=linux
[ 1.026507] BOOT_IMAGE=/boot/vmlinuz-5.10.0-13-amd64
[ 1.033402] udevd[315]: starting version 3.2.9
[ 1.033827] udevd[316]: starting eudev-3.2.9
[ 1.043425] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[ 1.043974] udevd[321]: Error running install command 'modprobe -i nvidia-current ' for module nvidia: retcode 1
[ 1.044541] udevd[325]: Error running install command 'modprobe -i nvidia-current ' for module nvidia: retcode 1

[ 1.044738] piix4_smbus 0000:00:14.0: SMBus Host Controller at 0xb00, revision 0
[ 1.044846] piix4_smbus 0000:00:14.0: Using register 0x02 for SMBus port selection
[ 1.044881] piix4_smbus 0000:00:14.0: Auxiliary SMBus Host Controller at 0xb20
[ 1.048705] SCSI subsystem initialized
[ 1.048884] nvme nvme0: pci function 0000:01:00.0
[ 1.048922] nvme nvme1: pci function 0000:09:00.0
[ 1.048957] nvme nvme2: pci function 0000:2c:00.0
[ 1.050411] ACPI: bus type USB registered
[ 1.050422] usbcore: registered new interface driver usbfs
[ 1.050426] usbcore: registered new interface driver hub
[ 1.050436] usbcore: registered new device driver usb
[ 1.053825] libata version 3.00 loaded.
[ 1.057840] nvme nvme2: missing or invalid SUBNQN field.
[ 1.057855] nvme nvme2: Shutdown timeout set to 10 seconds
[ 1.059268] nvme nvme2: 8/0/0 default/read/poll queues
[ 1.060781] nvme2n1: p1 p2
[ 1.072102] nvme nvme1: allocated 64 MiB host memory buffer.
[ 1.090965] nvme nvme1: 8/0/0 default/read/poll queues
[ 1.097117] nvme1n1: p1
[ 1.124045] nvme nvme0: allocated 64 MiB host memory buffer.
[ 1.169966] nvme nvme0: 16/0/0 default/read/poll queues
Maybe what's going on here is that the system cannot decide which card is the one with direct output and tries to load both at once with the same priority. To prove this, I think I would need to first update GRUB with the card's IOMMU info, reboot, install the Nvidia drivers, create the vfio.conf file in modprobe.d, rebuild the initramfs and reboot, in that order (writing this here just to remember). I think I will have a single shot at this or else the PC will be bricked in a different way. I'll give it a shot tomorrow; I'm too drunk today.
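That order, sketched out for Debian/Devuan paths - the PCI IDs are placeholders for the RTX 2060's VGA and audio functions, not real values:

```shell
# Step-by-step sketch of the plan above (Devuan/Debian tooling;
# placeholder PCI IDs -- take the real ones from `lspci -nn`).

# 1. Add the IOMMU options to GRUB_CMDLINE_LINUX_DEFAULT in
#    /etc/default/grub, then regenerate the config:
update-grub

# 2. Tell vfio-pci which functions to grab, and make sure it wins the
#    load-order race against the nvidia module:
cat > /etc/modprobe.d/vfio.conf <<'EOF'
options vfio-pci ids=10de:1f06,10de:10f9
softdep nvidia pre: vfio-pci
EOF

# 3. Rebuild the initramfs so the binding happens early, then reboot:
update-initramfs -u -k all
```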
My personal advice is, with a card and computer that beefy, just play everything in Wine; on average it'll be less painful.
My personal advice is for you to stop taking advice from the Arch community; if there's no suffering and no hours spent solving an issue with no concise online answers, you aren't using Linux properly. Don't let your overly-complicated meme projects be dreams.
I'm surprised no one mentioned this considering how much disdain there is towards Snaps. Looks like Canonical is forcing it down people's throats now.

The best part is that AppImages currently don't work on the main release (I kind of get why they did this though), so you cannot use the most popular competing format either. Man, having corporate interests in open source is great.
 
I'm surprised no one mentioned this considering how much disdain there is towards Snaps. Looks like Canonical is forcing it down people's throats now.

Snaps are one of the biggest reasons why I moved to an Arch-based system. I seriously don't get the deal with them; why not just put the software in the main repository? It's a pain in the rear having a few extra unnecessary steps to install something as common as Firefox.
 
Snaps are one of the biggest reasons why I moved to an Arch-based system. I seriously don't get the deal with them; why not just put the software in the main repository? It's a pain in the rear having a few extra unnecessary steps to install something as common as Firefox.
The Snap version of Firefox also just sucks hard. I was using it for a bit, and every time I tried to attach a file to a post here it shat itself and crashed, and then when it 'recovered' from the crash all the theming and window decorations were gone. In the end I uninstalled the Snap and just installed Firefox normally from the standard repo.
 
I take issue with this; I don't have an IBM laptop. My "free" system is a Dell Wyse Cx0 C90LEW: a 32-bit single-core computer with an IDE interface for hard drives and 1 GB of RAM. IBM laptops are overrated.
The IBM-branded ThinkPads are the nicest laptops. Even discontinued, they are still better made than any of the modern brands. I get that IBM gets off on spinning off businesses to keep the company lean, which is how we ended up with CCP apparatus Lenovo, but I genuinely miss those computers and wish they would come back.

They were sturdy, had great keyboards, good thermals, notably long battery lives for the time, a reliable TrackPoint nub, very slick looks, and attractive prices.

The only reason I use a Mac for a mobile computer is that they are the closest thing in quality.
 
The Snap version of Firefox also just sucks hard. I was using it for a bit, and every time I tried to attach a file to a post here it shat itself and crashed, and then when it 'recovered' from the crash all the theming and window decorations were gone. In the end I uninstalled the Snap and just installed Firefox normally from the standard repo.

That makes it even worse, IMO: if you have two different repositories for the same program, you know one is going to lag behind. I can imagine them backpedaling on this, though, if the Snap version is really bad. It just sounds like putting more work into it than needs to be. I can't wrap my head around this dumb logic.
 
Official Fedora spin; decided to SSH into somebody else's remote system. Sorted out my key authentication, first attempt, and permission denied. Kinda weird, but alright. Some time later: checked everything on the server, double-checked the keys, all that's left is my OpenSSH. I open up my /etc/ssh and there's a config file named "redhat.conf" in my config.d that overrides the default cryptography settings. Moved it away, retried the connection, everything works. Very funny, scarlet hat faggots. Fuck you.
 
Why the buttfucking hell is systemd now in Ubuntu? Why do I have some dumbass all-encompassing app for network config beyond /etc/network/interfaces?
Did Red Hat buy the company or did Canonical get pressured by the glowies as well?!

I leave for a CentOS house for three years and come back to a Debian-based clone of it now, smdh.
 
Macs are fragile, have a crappy keyboard, have horrible thermals, and lack a nub.

How are they anything like a Thinkpad?
The M1 Macs improved the keyboard and fixed the thermals once they stopped using Intel. My work HP Z-Book laptop likes to get very hot and noisy, so I'm willing to give Apple the benefit of the doubt on why the Intel Macs got too hot (all my HP work laptops since 2015 have been hot garbage).

I think the aluminum body is fairly sturdy, though I would never use a MacBook Air, since those are fragile.

I'll spin the question back on you: what do you think is closer to ThinkPad quality?
 
I'm surprised no one mentioned this considering how much disdain there is towards Snaps. Looks like Canonical is forcing it down people's throats now.

Another great reason to avoid Ubuntu. Ubuntu came out after I was well versed in Linux, so it held no appeal for me. As the years progress, every step they make further solidifies my desire to avoid it.
 
The IBM-branded ThinkPads are the nicest laptops
Allow me to interject with a plug for XY Tech: https://www.xyte.ch/
This Singaporean fellow sells full builds of modernized "classic Lenovos" - not going all the way back to the IBM days, but it's the best thing going if you ask me. They all support coreboot as well, for all you GNU+Stallman types.
 