The Linux Thread - The Autist's OS of Choice

how well does Linux handle choosing dependencies for mix and match configurations? like if you're installing a DE how does it know to install some DE specific dependencies if wayland is installed or to install different dependencies if X11 is installed?
For the most part, it doesn't. Most of the dependencies are developed to work with Wayland and/or X11, so a single package works in both cases. Debian packages have seven relationship fields: Depends, Recommends, Suggests, Pre-Depends, Build-Depends, Build-Depends-Indep, and Build-Depends-Arch https://www.debian.org/doc/debian-policy/ch-relationships.html There are no other ways to describe a dependency.
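If you want to see which of those relationships a given package actually declares, apt can print them for you (firefox-esr here is just an example package, any installed or available package works the same way):

apt-cache depends firefox-esr     # lists its Depends, Recommends, Suggests, etc.
apt-cache rdepends firefox-esr    # and what depends on it in turn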

Gentoo, naturally, expands the spectrum a bit. DEPEND="gtk? ( =x11-libs/gtk+-2* )" creates a dependency on gtk+ if the gtk USE flag is set.
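As a rough sketch of how that looks in an ebuild (the gtk atom is from the example above; the wayland/xorg atoms are just illustrative), whole branches of the dependency tree can be made conditional on USE flags:

DEPEND="
    gtk? ( =x11-libs/gtk+-2* )
    wayland? ( dev-libs/wayland )
    !wayland? ( x11-base/xorg-server )
"

Flip the flag and re-emerge, and Portage pulls in a different set of dependencies.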

That's how two distros handle things. But yeah, it varies depending on the distro.
 
What the fuck is up with the version of zsnes in the software manager?
It's a flatpak. It bundles all the dependencies the app needs inside a semi-isolated environment.

It took me like 5 minutes to edit this screenshot so it only showed the box.
What desktop environment are you using? Most screenshot tools allow you to just select the area of the screen you want.
 
It's a flatpak. It bundles all the dependencies the app needs inside a semi-isolated environment.


What desktop environment are you using? Most screenshot tools allow you to just select the area of the screen you want.
I just pressed the print screen button, saved the screenshot, and then went to edit it in Drawing. Fortunately, KolourPaint is basically a 1:1 with Paint, which is why I edited out that bit from my post.
 
Gonna switch back to Linux after I've switched my graphics and CPU to AMD.

Is Manjaro still a good choice for an Arch based Linux? Last time that was the only distro in which Waves plugins and Superior Drummer 3 worked without issues.
 
Is Manjaro still a good choice for an Arch based Linux?
No, their maintainers have been quite terrible

EndeavourOS is far more appropriate (still Arch based but with good maintainers)

You can also use plain old Arch but use the archinstall command and get a desktop environment and whatnot installed without the middlemen pretty easily
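For anyone who hasn't seen it, the flow from the official ISO is roughly this (a sketch; the menus and profile names may differ between releases):

iwctl          # the ISO's Wi-Fi tool, or just plug in ethernet
archinstall    # guided menus: disks, bootloader, and a desktop profile (KDE, GNOME, etc.)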
 
I saw news about the decline in desktop usage of FreeBSD and calls to address it about half a year ago, IIRC. For some reason, I can't find the story now, but I remember thinking at the time that nothing would come of it. Turns out I was wrong on that, because there have recently been some major efforts to improve desktop use of FreeBSD. A project was launched by the FreeBSD Foundation a few months ago to address laptop challenges for FreeBSD. The classic issues of Wi-Fi drivers and HDMI support are now actively being worked on. Monthly updates have been steady and ongoing, and in total $750,000 has been committed to improve the experience of laptop users who run FreeBSD.

In the latest June update, they announced that they are now committed to extending the FreeBSD installer to offer a minimal KDE-based desktop option that, on completion, brings the user directly to a graphical login. This will ship with the next version of FreeBSD. From the FreeBSD Foundation's Project Laptop GitHub repository:
[screenshot of the June status update from the Project Laptop repository]
 
No, their maintainers have been quite terrible

EndeavourOS is far more appropriate (still Arch based but with good maintainers)

You can also use plain old Arch but use the archinstall command and get a desktop environment and whatnot installed without the middlemen pretty easily
I might consider Arch, but last time I tried it and installed updates the fucker wouldn't boot anymore. Any tips to prevent this shit in future?
 
I might consider Arch, but last time I tried it and installed updates the fucker wouldn't boot anymore. Any tips to prevent this shit in future?
Unfortunately, in the Linux world, because of the breadth of ways this can happen, there are no general tips for it. The advice is the bog-standard "git gud". GRUB is very good at making broken systems boot; getting good at GRUB means you can repair your own system. If you learn GRUB and set up an EFI boot stub ( https://wiki.archlinux.org/title/EFI_boot_stub ), you can use your BIOS (i.e. your EFI firmware) to bypass GRUB entirely if it gets clobbered.
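Registering the kernel directly with the firmware, roughly per that wiki page, looks something like this (the disk, partition, and root= values are examples, and the kernel and initramfs must live on the ESP):

efibootmgr --create --disk /dev/sda --part 1 \
    --label "Arch EFISTUB" --loader /vmlinuz-linux \
    --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'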
 
Unfortunately, in the Linux world, because of the breadth of ways this can happen, there are no general tips for it. The advice is the bog-standard "git gud". GRUB is very good at making broken systems boot; getting good at GRUB means you can repair your own system. If you learn GRUB and set up an EFI boot stub ( https://wiki.archlinux.org/title/EFI_boot_stub ), you can use your BIOS (i.e. your EFI firmware) to bypass GRUB entirely if it gets clobbered.
Yeah I probably have to just RTFM at this point or just go with Endeavour.
 
Most distro kernels will have the EFISTUB active by default, even if they don't usually use it. It's pretty neat; it lets you chainload any distro from GRUB's console rather than navigating the less featureful EFI terminal.
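For reference, booting a kernel by hand from the GRUB console (press c at the menu) looks something like this; the partition and paths are examples you'd adjust for your own layout:

grub> set root=(hd0,gpt2)
grub> linux /boot/vmlinuz-linux root=/dev/sda2 rw
grub> initrd /boot/initramfs-linux.img
grub> boot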
 
Can someone redpill me on why containers are ass? I've only dicked around with Docker a bit
A number of reasons.

If you're running a VM, you're now stacking virtualization on top of virtualization. That causes resource contention issues. Some things are very finicky about this, and I/O latency (which climbs with every extra layer of abstraction between the application and the physical layer) can easily be a killer.

The userland proxy you have to use is fucking atrocious.

Every container has to carry redundant code. If you run everything outside of containers, as standalone applications, they can use ordinary system calls to talk to each other: internal sockets, dropping files for each other, sending signals, etc. Because each container is isolated, each one has to ship its own network stack. That means each container has duplicate code, consuming extra CPU, to make slower network calls to other containers instead of quick, lightweight local IPC.

Security is a nightmare too. Unless you're (re-)building each container by hand every time you deploy it, constantly updating your manifests for updated software, you're potentially deploying containers full of old libraries that may be riddled with vulnerabilities and performance issues. And if you're relying on third-party containers for stuff, you have no idea if it's actually doing what it says on the tin or contains lovely code from russianhacker.org. There have been a lot of studies showing that public container images are out of date. So you take that gamble too.

I'm sure there's more but that's the stuff off the top of my head.
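On the rebuild point above: if you do build your own images, forcing a fresh pull of the base and skipping the local cache at least keeps stale layers out, and you can scan the result before shipping it (the image name is a placeholder, and trivy is just one example of a third-party scanner):

docker build --pull --no-cache -t myapp:latest .
trivy image myapp:latest    # flags known CVEs in the image's packages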
 
You just use apt-get autoremove; it's trivial and implemented in a safe way so you can't do it by accident. The DPKG/APT tools even suggest doing this when you remove packages, to tidy up no-longer-required dependencies.

This may be broken by stupid decisions by the Ubuntu niggers to mark ALL packages installed by the initial installer, including dependencies, as manually installed.
You can also use Aptitude to "clean up" stuff that apt/apt-get miss and fix other issues like changing packages from "auto" installed to "manual" and vice versa.

I've found Aptitude to be one of the most useful tools for debian-based systems that nobody seems to use or mention.
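A few examples of the kind of cleanup being described (package names are placeholders):

sudo apt-get autoremove              # drop orphaned dependencies
sudo apt-mark manual some-package    # protect a package from autoremove
sudo apt-mark auto libexample1       # mark it as auto-installed again
sudo aptitude                        # interactive TUI for the same jobs and more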
 
If you're running a VM, you're now stacking virtualization on top of virtualization.
No, and that's the whole point. Unless you're using Docker Desktop on Windows, in which case you get what you deserve. You're directly running the code and calling the kernel, the same as any other Linux process, just with native kernel features like namespaces keeping you from messing with other stuff on the system. Like Solaris Zones and BSD Jails and other mechanisms from the distant past. Sure, most container engines run the container itself on a layer like overlayfs, which may slow I/O down there, but you can present data directories directly. You can also do fun things like give it direct GPU access without all the hassle of doing that in a VM. For instance, I recently upgraded a system to an OS with Python 3.13, which horribly broke the dependency chain that drives WhisperX. After a bit I realized there was a container, and using it was as easy as:
podman run --user=`id -u` --userns=keep-id --gpus all -it -v ".:/app" -v "$HOME/whisper_cache:/.cache" ghcr.io/jim60105/whisperx:no_model -- --device_index ${INDEX:-0} --verbose=False --print_progress=False --hf_token $HF_TOKEN --model large-v3 --diarize --highlight_words True --lang en --output_dir "$D" "$1"
This still uses emulated networking, but there, as with everything else, faster options are available; it also doesn't need a network at all once the models are downloaded.
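If you want to see the "it's just namespaces" point for yourself, util-linux's unshare gives you the same kernel machinery with no container engine involved:

sudo unshare --mount --pid --net --fork --mount-proc /bin/bash
# inside: 'ps aux' shows only this shell, 'ip link' shows only loopback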
 
i noticed on some distros like gentoo, if i installed sway by itself and tried to manually build everything myself, i would run into problems with some programs not launching, like some games in wine
if i installed mate instead, and then installed sway on top of it, the programs would work
however i would also have stuff like the mate power manager running in sway with no way to edit it outside of mate
i always just recommend people keep 1 desktop environment installed. especially if you're running an xorg DE and a wayland WM or vice versa, you'll need wayland equivalents of most of the xorg applications that shipped with your DE
Hard to tell exactly what caused this from your description, but it sounds like dependency issues, and it depends on how exactly you did it. If you install things through Portage, its packages will likely pull in some extra stuff to make sure everything works properly, stuff that might not get installed if you build things manually without handling the dependencies for everything that needs to work.

That, or basically the same thing: when you installed the mate package, it pulled in the needed dependencies and set things up for you automatically, in a way that fixed the issues you had when only installing sway manually.
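One way to check what actually differed is to ask Portage to pretend-install both and compare what gets pulled in (the package atoms are examples):

emerge --pretend --tree mate-base/mate    # everything the meta-package drags in
emerge --pretend --verbose gui-wm/sway    # which USE flags change sway's dependencies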
I saw news about the decline in desktop usage of FreeBSD and calls to address it about half a year ago, IIRC. For some reason, I can't find the story now, but I remember thinking at the time that nothing would come of it. Turns out I was wrong on that, because there have recently been some major efforts to improve desktop use of FreeBSD. A project was launched by the FreeBSD Foundation a few months ago to address laptop challenges for FreeBSD. The classic issues of Wi-Fi drivers and HDMI support are now actively being worked on. Monthly updates have been steady and ongoing, and in total $750,000 has been committed to improve the experience of laptop users who run FreeBSD.

In the latest June update, they announced that they are now committed to extending the FreeBSD installer to offer a minimal KDE-based desktop option that, on completion, brings the user directly to a graphical login. This will ship with the next version of FreeBSD. From the FreeBSD Foundation's Project Laptop GitHub repository:
[screenshot of the June status update from the Project Laptop repository]
It would be nice if they actually pull it off this time. As a laptop/desktop OS user, I found FreeBSD a very noticeable downgrade from any of the Linux distros I've used, at least in the last few years.
This still uses emulated networking, but there, as with everything else, faster options are available; it also doesn't need a network at all once the models are downloaded.
There are a lot of options for how you handle networking in containers; it depends how much security versus performance matters. You can share networking with the host if you want, though you lose the benefit of complete isolation. If not, there are still a few options for how to set it up. Depending on how things are done, it really shouldn't have a noticeable impact on CPU usage.
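For example, with Podman the difference is a single flag (the image name is a placeholder):

podman run --network=host docker.io/library/nginx    # shares the host's stack, no isolation
podman run -p 8080:80 docker.io/library/nginx        # default rootless path through pasta/slirp4netns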
 
No, and that's the whole point.
All of this supposes that you're running Docker or some orchestration like Kubernetes on bare metal, which most places are NOT doing. They're spinning up VMs, either on-prem or in the cloud, and running the services there, because the built-in EKS services in stuff like AWS don't work for them, since the permissions aren't enough. Plus a lot of those are running virtualized as well, so you're running Xen/etc. on top of the physical layer and then an abstracted EKS layer on top of that. And you just don't have access to the server underneath, because that's what all "serverless" architecture is.

And presenting data layers isn't a great argument. Now you're talking about passing system calls via I/O instead of through memory registers or CPU interrupts, and through an abstracted data layer at that. You think a series of applications that drop files to communicate between each other over NFS works well? Like fuck it does.
 
Depending on how things are done, it really shouldn't have a noticeable impact on CPU usage.
In the beginning there was "slirp4netns" for isolated rootless container networking, which was a horrible, horrible hack to allow network access. It's really bad, with multiple trips between user space and kernel space before your packet sees the light of the Internet. Apparently "pasta" is supposed to be better, but both will suffer compared to host networking (losing isolation) or root-enabled containers with proper veth networking.

But for many container uses, it doesn't much matter.
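If your Podman version supports it, you can pick the rootless backend per container and benchmark the difference yourself (the image name is a placeholder; pasta is the default in newer releases):

podman run --network=pasta -p 8080:80 docker.io/library/nginx
podman run --network=slirp4netns -p 8080:80 docker.io/library/nginx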
abstracted EKS layer on top of that.
But EKS/K8S isn't additionally virtualized; it's just orchestration. The container has no additional virtualization overhead compared to running podman or a plain Linux process on the VM host. It will obviously have some I/O overhead for some options, especially when using overlay networking, but volume I/O should be as fast as it is on the VM without K8S/Podman/Docker.

We are finally seeing customers realize they can run K8S on bare metal without paying the VMWare tax, but it's a long journey.
 
But EKS isn't virtualized, it's just orchestration.

You absolutely should not use any virtualization when doing containers. That much we agree upon (because why stack competing technologies on top of each other?). However, by the same metric, it doesn't provide any benefits, only drawbacks.

The only time there's a benefit is when you are truly gargantuan. Talking planet-scale: Google, Microsoft, Amazon, etc. When you're that big, you do need that much flexibility and distribution. 99.99% of people who use containers right now will never be that big. Ever. And foolishly planning as if you will is only going to cause problems.
 