The Linux Thread - The Autist's OS of Choice

Theoretically some older programs might flip out if you have no form of swap available whatsoever. I haven't heard of this happening in a long time though. You don't need a dedicated partition for it anymore.
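For what it's worth, a plain swap file does the same job as a dedicated partition on any modern kernel. A minimal sketch, run as root (size and path are just examples, and fallocate can misbehave on copy-on-write filesystems like btrfs):
Code:
fallocate -l 2G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon --show                # confirm it's active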
Whaaaat? How is that even possible, barring a sysadmin-like program used for managing swap partitions? The whole point of swap and VMM in general is the transparency.
 
If you have overcommit on, why would you be required to have swap? I suggest having it though, if you have a fast enough disk. It really helps avoid disk caches being reclaimed.
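If you want to see what your box is actually configured to do, it's all readable from /proc:
Code:
# 0 = heuristic overcommit (the default), 1 = always overcommit, 2 = never
cat /proc/sys/vm/overcommit_memory
# current swap and page cache figures
grep -E 'SwapTotal|SwapFree|^Cached' /proc/meminfo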
 
Looks good. I might boot into it for shits and giggles and report back.
Prepare to be bored. I don't think it comes with any interesting programs. It's literally a shell (XFCE, I think, if you care about window managers) and some basic programs. That's what boomers need... as close to a "web terminal with solitaire" as you can make it.
 
You know anything fun?
I guess it depends on what you mean by "fun". 99% of my Linux use is at the command line. In most distros I use, I don't even run X unless I have to. I guess Kali or DEFT Linux are interesting for their suites of digital forensics and penetration testing programs to play with. It can also be "fun" to see how small you can get a Linux installation, and there are many distros geared towards that idea. Some of them even run entirely in RAM. Beyond that I'm not much help; I think complicated, large-scale rsync operations across the internet are fun.
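For a taste of the rsync kind of fun, a sketch of a typical run (host and paths made up): incremental, compressed, resumable mirroring over SSH.
Code:
rsync -avz --partial --progress --delete \
    -e ssh \
    /srv/mirror/ user@remote.example.com:/srv/mirror/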
 
  • Thunk-Provoking
Reactions: Had
Whaaaat? How is that even possible, barring a sysadmin-like program used for managing swap partitions? The whole point of swap and VMM in general is the transparency.
I was trying to run an ancient MUD server once that would check /proc/meminfo to determine how to best manage its own memory buffers. If you didn't have any swap space at all, it would just get confused and refuse to run. Easy enough to fix with a quick source edit though.
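The check it was doing effectively boils down to this, and a 0 kB reading is what tripped it up:
Code:
grep SwapTotal /proc/meminfo    # reports "SwapTotal: 0 kB" with no swap configured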

In modern programming, memory management is best left to the kernel (outside of real-time systems, I guess).
 
  • Informative
Reactions: Considered HARMful
I just ran into a situation that's deserving of a special serving of contempt:
This article is for Ubuntu, but it affects any Debian-based distribution, and for all I know, any distribution whatsoever apart from the meme ones where you recompile everything from source.
Apparently libcurl3 and libcurl4 are configured to be mutually exclusive, and yet there's still plenty of common software that requires each. And there's no way of fixing it that isn't some sort of disgusting hack, especially if you're dealing with closed-source binaries.

I ended up using patchelf to mess with library search paths for the one or two binaries that need to use a different version. Ugh.
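For reference, the rough shape of the hack (paths and names are illustrative): stash the legacy library somewhere private and rewrite the binary's rpath to point at it.
Code:
# keep a private copy of the old library out of the global search path
mkdir -p /opt/legacyapp/lib
cp ./extracted/libcurl.so.3 /opt/legacyapp/lib/
# rewrite the binary's rpath so the loader looks there first
patchelf --set-rpath /opt/legacyapp/lib /opt/legacyapp/bin/legacyapp
# sanity check: which libcurl gets resolved now?
ldd /opt/legacyapp/bin/legacyapp | grep libcurl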
 
I just ran into a situation that's deserving of a special serving of contempt:
This article is for Ubuntu, but it affects any Debian-based distribution, and for all I know, any distribution whatsoever apart from the meme ones where you recompile everything from source.
Apparently libcurl3 and libcurl4 are configured to be mutually exclusive, and yet there's still plenty of common software that requires each. And there's no way of fixing it that isn't some sort of disgusting hack, especially if you're dealing with closed-source binaries.

I ended up using patchelf to mess with library search paths for the one or two binaries that need to use a different version. Ugh.
If Nix weren't such a pain in the ass I'd use it, but in the long term the way software is built and distributed on Linux looks very fragile.
If you use Windows, you could probably run an exe from 15 years ago with little to no issues. Good luck doing that on Linux.
Is there a sane way out?
 
  • Disagree
Reactions: tehpope
If you use Windows, you could probably run an exe from 15 years ago with little to no issues.
Although in fairness, the reason for this is that Windows will do a much more comprehensive search for libraries. This increases flexibility but also adds more opportunities for an attacker to slip in an evil DLL and hijack the process.
If you're missing a library on Windows, or have a conflict, you can always just dump some DLL files in alongside the process if you really must.

You could emulate this behavior on Linux by adding . to LD_LIBRARY_PATH before running a binary, if you're willing to accept that behavior for every module in the process. But that seems unwise to me, considering that most Linux projects are not built and tested that way.
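Concretely, the emulation is a one-liner (binary name hypothetical), with the caveat that . means the current working directory, not the directory the binary lives in:
Code:
LD_LIBRARY_PATH=. ./some-binary
# slightly less fragile: use the binary's own directory instead of .
LD_LIBRARY_PATH="$(dirname "$(readlink -f some-binary)")" ./some-binary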

Good luck doing that on Linux.
Is there a sane way out?
Seems like the leading solutions at the moment are sandboxing (containerization, "jailing") via AppImage, Flatpak, etc., which essentially package up all the library dependencies and provide a cleverly crafted chroot-esque environment where those libraries are available at the standard library locations.
Whether that's "sane" I leave to the reader to decide.
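For the unfamiliar, day-to-day use of both looks something like this (app names are only examples):
Code:
# AppImage: one self-contained file, no install step
chmod +x Some_App-x86_64.AppImage
./Some_App-x86_64.AppImage

# Flatpak: install from a remote, then run inside its sandbox
flatpak install flathub org.videolan.VLC
flatpak run org.videolan.VLC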
 
Although in fairness, the reason for this is that Windows will do a much more comprehensive search for libraries. This increases flexibility but also adds more opportunities for an attacker to slip in an evil DLL and hijack the process.
If you're missing a library on Windows, or have a conflict, you can always just dump some DLL files in alongside the process if you really must.

You could emulate this behavior on Linux by adding . to LD_LIBRARY_PATH before running a binary, if you're willing to accept that behavior for every module in the process. But that seems unwise to me, considering that most Linux projects are not built and tested that way.


Seems like the leading solutions at the moment are sandboxing (containerization, "jailing") via AppImage, Flatpak, etc., which essentially package up all the library dependencies and provide a cleverly crafted chroot-esque environment where those libraries are available at the standard library locations.
Whether that's "sane" I leave to the reader to decide.
I choose to believe creating a portable image with all dependencies is sane. If you have to use black magic to make it work, it's the fault of the environment, not the image.
 
Seems like the leading solutions at the moment are sandboxing (containerization, "jailing") via AppImage, Flatpak, etc., which essentially package up all the library dependencies and provide a cleverly crafted chroot-esque environment where those libraries are available at the standard library locations.
Whether that's "sane" I leave to the reader to decide.
It's a good option for programs like games that have zero need to interoperate with other programs or containers on your system.

For anything else, you get to enter the world of Docker Compose.
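For two services that need to talk to each other, that looks roughly like this (images and names invented for illustration):
Code:
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: example/app:latest   # hypothetical application image
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
EOF
docker compose up -d            # bring both up together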
 
  • Agree
Reactions: Some JERK
I was running FreeTube on Linux Mint for months and it stopped working after YouTube made changes. Is there any similar equivalent? I use Invidious, but I miss having a subscriptions page.
 
This increases flexibility but also adds more opportunities for an attacker to slip in an evil DLL and hijack the process.

To be fair, if an attacker has the ability to write into the filesystem outside of very specific locations, you are already well and truly pwned.

I've gotten back to work in preparation for switching away from Microshit for good.

This is where I am. I had a neat VFIO GPU passthrough setup for a while (before the inevitable upgrade fucked it), but that was back when I gave a shit about multiplayer PC games, which are usually loaded with virtualization-hostile anti-cheat. It's getting to be time to try again.
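Before I do, a quick sanity check that the hardware side is still viable; the usual first steps are confirming the IOMMU is on and the GPU sits in its own group:
Code:
# is the IOMMU actually enabled?
dmesg | grep -i -e DMAR -e IOMMU
# list devices per IOMMU group; the GPU should be isolated
find /sys/kernel/iommu_groups/ -type l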
 
Last edited:
  • Like
Reactions: 419
If you use Windows, you could probably run an exe from 15 years ago with little to no issues. Good luck doing that on Linux.
The kernel API is very stable; if a binary is statically compiled, or all the libraries it links against are included (as with most Windows software), it will run fine.

Edit: As an example, here's the oldest Opera build I could find from 11 years ago:
Works absolutely fine on a modern bleeding-edge system.
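You can see why it keeps working by checking what the binary links against (output obviously varies):
Code:
file ./opera    # a static binary reports "statically linked"
ldd ./opera     # a dynamic one lists every .so it needs; a static
                # one prints "not a dynamic executable"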
 
Last edited:
If I decide to distribute an application on Linux, would it cause problems if I just added . to the executable's search path at compile time and included the required libs in the same folder, like .dlls on Windows?
 
If I decide to distribute an application on Linux, would it cause problems if I just added . to the executable's search path at compile time and included the required libs in the same folder, like .dlls on Windows?
That resolves relative to the cwd, so it won't work reliably. You need to find the executable's actual absolute path; for example, read the /proc/self/exe symlink.
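The cleaner fix for the original question is embedding $ORIGIN in the rpath at link time; the loader expands it to the directory containing the executable, which gives you exactly the Windows-style layout. A sketch with gcc (names hypothetical):
Code:
# ship the libs next to the binary: ./myapp plus ./lib/libfoo.so
gcc main.c -o myapp -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
# confirm the embedded search path (single quotes above keep the
# shell from expanding $ORIGIN; the loader expands it at run time)
readelf -d myapp | grep -i -e rpath -e runpath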
 
  • Like
Reactions: Coolio55
If I decide to distribute an application on Linux, would it cause problems if I just added . to the executable's search path at compile time and included the required libs in the same folder, like .dlls on Windows?
If you're doing that why not just statically link the libraries?
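i.e. something like this, assuming your dependencies ship static archives, which is the usual catch (and glibc itself has caveats around NSS and dlopen when linked statically):
Code:
gcc main.c -o myapp -static -lfoo
file myapp    # should report "statically linked"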
 
If I decide to distribute an application on Linux, would it cause problems if I just added . to the executable's search path at compile time and included the required libs in the same folder, like .dlls on Windows?
It's not just the executable that has a search path, though. That's exactly the problem I ran into: if the executable's dependencies themselves have dependencies, you have to deal with each and every one of their search paths somehow too.
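You can watch the loader chase the whole tree, which makes the problem obvious (binary name hypothetical):
Code:
# flattened view of every resolved dependency, direct and transitive
ldd ./myapp
# or trace the loader's search path by path as it runs
LD_DEBUG=libs ./myapp 2>&1 | grep 'find library'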
 