The Linux Thread - The Autist's OS of Choice

We know NVIDIA does some corner cutting on their less premium cards (at the very least they nerf the VRAM), yet I've never felt any problems from whatever they do to make more profit. Why? I have no idea. Maybe NVIDIA just has better build practices for their entire lineup, or perhaps they have better people working on them?

They don't skimp on VRAM simply to save a few $ on the BOM, which would be the definition of "corner cutting", at least to my knowledge. They do it to fuck over everyone who wants anything to do with AI, while making Jen-Hsun and Nvidia shareholders as rich as possible.

Also, the new power connector they shat out is garbage, and then they did some actual corner cutting on the power delivery of the 4090s and 5090s, making it as likely as possible that those connectors and power cables will melt and cause fires.

Regardless, contrary to popular refrains, Nvidia is perfectly usable on Linux, and AMD has certainly had its own issues on Linux, even serious ones, with their supposedly great and perfect open source drivers. I use an Nvidia card, and I'd probably buy Nvidia if I were buying a new card right now, even though Nvidia is a shitty company, mainly since I'm not sure how well AI stuff (Stable Diffusion primarily) works on AMD.

edit: I suppose there's this against Nvidia on Linux if you care a lot about modern games...
 
Okay, I've reinstalled tlp, because I know this triggers the config reload.

I noticed that it doesn't wipe the contents of session.conf, but the settings don't actually take effect: tap to click doesn't work, the keyboard is set to US, and lxqt-config-input reflects that.
But it starts working again after a restart.
I dunno what's going on.
If those are the settings you are changing, I really do recommend making xorg.conf.d files and setting them from there. You will have to set them with config files instead of a GUI, but it should just work and not change. It's how I do it, and they always stay how I set them.

You can look up how to do it for what you want with man xorg.conf (IIRC) and of course the Arch wiki pages, for libinput I think, and the Xorg config file pages, to see the names of the settings and the format.
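For tap to click and the keyboard layout specifically, something along these lines is roughly what I'd try (untested; the filename and match rules are just a guess, adjust for your hardware):

# /etc/X11/xorg.conf.d/40-libinput.conf (any filename in that directory works)
Section "InputClass"
        Identifier "touchpad"
        Driver "libinput"
        MatchIsTouchpad "on"
        Option "Tapping" "on"
EndSection

Section "InputClass"
        Identifier "keyboard layout"
        MatchIsKeyboard "on"
        Option "XkbLayout" "us"    # put your actual layout here
EndSection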
 
Is there like a diff detect program that scans your entire computer and tells you all files that are different after each scan? It would be a slow process though
Do you want to see the diff, or do you just want to know if the file has changed? To do a diff you would need to keep a copy of everything, which means losing about 50% of your storage (maybe a little less if compression is used).

If you only care about knowing a file has changed, you could do a hash of the file and compare hashes. Search for something called a file integrity checker.

One of the first tools I remember doing this was Tripwire. It's from long before GitHub, but there is a release there: https://github.com/Tripwire/tripwire-open-source
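The bare-bones version of the hash idea, if you just want to try it on a couple of files (paths are placeholders):

sha256sum /etc/fstab /etc/hosts > /var/tmp/baseline.sha256    # record a baseline
sha256sum -c /var/tmp/baseline.sha256                         # later: prints OK or FAILED per file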
 
Is there like a diff detect program that scans your entire computer and tells you all files that are different after each scan? It would be a slow process though
Could pipe find / into md5sum or whatever and store the output to a file, then check against that file next time

That alone won't detect new files though

If you use a filesystem that can do snapshots, then you might be able to do a diff between snapshots; ZFS has this (it would be much faster)
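For the find-into-md5sum idea, a rough (untested) sketch, run as root:

# build a baseline manifest; -xdev stays on the root filesystem, so /proc, /sys etc. are skipped
find / -xdev -type f -exec md5sum {} + 2>/dev/null | sort -k 2 > /var/tmp/baseline.md5
# next scan: rebuild and compare
find / -xdev -type f -exec md5sum {} + 2>/dev/null | sort -k 2 > /var/tmp/current.md5
diff /var/tmp/baseline.md5 /var/tmp/current.md5

Since the diff also shows lines that only exist in one manifest, new and deleted files do show up as added/removed lines.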
 
Is there like a diff detect program that scans your entire computer and tells you all files that are different after each scan? It would be a slow process though
Following up on the hash point from before, it'd be pretty trivial to use your preferred programming language to write something that does a dirwalk (or takes one as input), compares hashes against the previous run, and outputs the changed files or diffs them.
You could save time by skipping files that haven't been modified since the last check and hashing/diffing only the newly updated ones. The first run would take some time, but subsequent runs would be relatively fast.

I'm not aware of a utility that already does this, but find, md5sum, diff, and something like awk or jq (depending on input parsing) are the main tools for making a script that could do it.
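For the "skip files that haven't changed" part, one lazy shell-only trick is to use the old manifest's own timestamp as the cutoff (sketch, not tested; paths are placeholders):

# re-hash only files modified since the manifest was last written
find /home -type f -newer /var/tmp/manifest.sha256 -exec sha256sum {} + > /var/tmp/changed.sha256

You'd then fold changed.sha256 back into the manifest before the next run; a proper script would also handle deleted files, which this doesn't.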
 
Is there like a diff detect program that scans your entire computer and tells you all files that are different after each scan? It would be a slow process though
Do you need to check hashes to make sure the file contents are different, or is just checking whether the update time metadata is different good enough? If the latter, you can just use find to see a list of all changed files. Open a terminal in the root directory and use find -cmin -N and it will list every file whose update time is within the last N minutes, or use find -ctime -N and you'll get every file that updated within the last N days. You can add -type f to restrict the search to regular files (no directories or links). There's probably something more specific you can do with it; I just know how to do this when I know a file was updated or created somewhere and I want to find where.
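For example (note that -cmin/-ctime go by inode change time, while -mmin/-mtime go by content modification time, which may be closer to what you want):

find / -type f -cmin -60    # files whose metadata changed in the last hour
find / -type f -mtime -1    # files whose contents were modified in the last day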
 
Also the audio would often devolve into crackling.
Recent kernel changes for power management cause problems for a lot of sound and wifi chips. It can be partially turned off with GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off pcie_port_pm=off"
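That line goes in /etc/default/grub; after editing it, the GRUB config has to be regenerated before it takes effect (exact command depends on the distro):

sudo update-grub                               # Debian/Ubuntu
sudo grub-mkconfig -o /boot/grub/grub.cfg      # Arch and most others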
 
Is there like a diff detect program that scans your entire computer and tells you all files that are different after each scan? It would be a slow process though
This usually isn't the most efficient way to deal with something like this.

The strace utility lets you trace _all_ system calls a process is making and dump them to a file; you'd probably want to filter that down to just the file-related operations, which can be confusing. There are plenty of tutorials around on how to use it, though; I just google or ask Chinese AI for an appropriate command whenever I need one.
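Something in this ballpark usually does it (the command at the end is just a placeholder for whatever you're tracing):

strace -f -e trace=%file -o /tmp/trace.log some-config-tool
grep -E 'O_WRONLY|O_RDWR' /tmp/trace.log    # then filter the log down to writes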

Of course... we're talking about The Linux Desktop, so there's every chance that a utility that supposedly updates configurations does it by molesting some daemon via D-Bus to ask it to change a config file rather than just... directly updating the config file. If so, you can use inotifywait to watch for that; what I have saved is something like this:
inotifywait -m -r -e modify,create,delete ~/.config/

Depending on where you think will be touched, you might need to add ~/.local/ as well (or, if it's actually modifying system files, run it as a superuser and add /etc/ to the end of the command).

Either way, you get to see the changes being made in real time, rather than having to guess retrospectively.
 
Is there like a diff detect program that scans your entire computer and tells you all files that are different after each scan? It would be a slow process though
This is called file auditing and it's somewhat commonly used in corporate IT security. Windows has it built in, but I'm not sure what tools are used for this on Linux.
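On Linux the audit subsystem (auditd) is, I believe, the usual building block for this; a watch rule looks roughly like:

sudo auditctl -w /etc -p wa -k config-changes    # watch /etc for writes and attribute changes
sudo ausearch -k config-changes                  # later: list the recorded events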
 
Recent kernel changes for power management cause problems for a lot of sound and wifi chips. It can be partially turned off with GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off pcie_port_pm=off"
Nah, this has been an AMD thing for a while. Well, assuming recent isn't all of the last 5 years.
 
Following up on the hash point from before, it'd be pretty trivial to use your preferred programming language to write something to do a dirwalk or take a dirwalk in as input and compare hashes from the previous run and output changed files or do a diff on the files in question.
You could save time by skipping files that haven't been modified since last check and hash/diff only the newly updated files. The first run would take some time but subsequent runs would be relatively fast.

I'm not aware of a utility that already does this but find, md5sum, diff and something like awk or jq (depending on input parsing) are the main tools for making a script that could do it.
There is a tool called rkhunter that basically does this. It also checks for rootkits throughout the filesystem.

Also, the kernel has the integrity subsystem, which as far as I know is made to do just this. I've never used it though, because I really don't need it. It might work alongside the audit subsystem to check if files have been changed, but I'm just guessing.

Actually, I just looked it up to see what I could find. It looks like there is some Red Hat documentation on it.
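For the rkhunter route, the basic flow is roughly this (from memory, check the man page):

sudo rkhunter --propupd    # record a baseline of file properties
sudo rkhunter --check      # later: compare against the baseline and scan for rootkits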


 
 
We may bicker and quarrel over our operating systems and our personal hardware, but what about the servers we connect to?

Let's imagine we're running a source-based distribution as an attempt to avoid any foreign actors within our system. But how can the security of their repository be guaranteed? You may say, "reference the checksums," however, what if a hypothetical backdoor messed with those too, and what about files that simply aren't verified at all? The package maintainers probably wouldn't know if one of the many proprietary blobs in either the software or hardware was being used as an attack vector; is there any way to truly know the security of your computer without also guaranteeing the security of the underlying hardware?

Is the entire Internet backdoored, and does any provably backdoor-free server or computer hardware even exist in the first place?
 
We may bicker and quarrel over our operating systems and our personal hardware, but what about the servers we connect to?

Let's imagine we're running a source-based distribution as an attempt to avoid any foreign actors within our system. But how can the security of their repository be guaranteed? You may say, "reference the checksums," however, what if a hypothetical backdoor messed with those too, and what about files that simply aren't verified at all? The package maintainers probably wouldn't know if one of the many proprietary blobs in either the software or hardware was being used as an attack vector; is there any way to truly know the security of your computer without also guaranteeing the security of the underlying hardware?

Is the entire Internet backdoored, and does any provably backdoor-free server or computer hardware even exist in the first place?
Possibly Erebus and the Huawei chips, but those are very hard to get in NA
 
is there any way to truly know the security of your computer without also guaranteeing the security of the underlying hardware?
No, I mean everyone knows about Intel ME (which, unsurprisingly, gets disabled on government computers...)

If you're at the point where you're worrying about hardware backdoors then you're kinda completely fucked anyway.
 
Let's imagine we're running a source-based distribution as an attempt to avoid any foreign actors within our system. But how can the security of their repository be guaranteed? You may say, "reference the checksums," however, what if a hypothetical backdoor messed with those too, and what about files that simply aren't verified at all? The package maintainers probably wouldn't know if one of the many proprietary blobs in either the software or hardware was being used as an attack vector; is there any way to truly know the security of your computer without also guaranteeing the security of the underlying hardware?
All open-source software requires relying on the openness and kindness of strangers. Life is pretty much the same. We didn't learn about the xz backdoor until a Microsoft engineer brought attention to it while testing ssh performance. It's also why you should probably not use obscure distros.
 