The Linux Thread - The Autist's OS of Choice

Also because their headquarters aren't too far so I can shit in their mailbox if their product sucks.
i actually live near one of RedHat's offices funnily enough, i might actually do that if I'm ever drunk enough and courageous enough to shit in a mailbox in the city.
eat dust you furnigger.
you also got exposed for liking lolicon and you have a wank over children's feet.
just kill yourself
kawaii colorjakie-chan.


oh by the way, your name sounds like a rejected MLP name.
 
Said yolo and upgraded Fedora Silverblue to 41. No problems except for initially no sound which was fixed by deleting
~/.local/state/wireplumber
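
For anyone who hits the same no-sound issue, clearing that state directory and restarting the service is the whole fix; a minimal sketch, assuming WirePlumber runs as a user service (it does on stock Fedora):
Bash:
# Clear WirePlumber's cached state, then restart it (or just log out and back in).
rm -rf ~/.local/state/wireplumber
systemctl --user restart wireplumber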
 
Upgrading is the biggest crapshoot in the world. It's always better to back your shit up and just do a clean install.
Indeed. When I upgraded Mint versions once, it made my system unbootable. I forget what the problem was, but it required me to boot with a live USB to repair the system. In the end I got it to work, but if you don't like dealing with upgrading/reinstalling fixed releases then a rolling release distro is better for you.
 
Indeed. When I upgraded Mint versions once, it made my system unbootable. I forget what the problem was, but it required me to boot with a live USB to repair the system. In the end I got it to work, but if you don't like dealing with upgrading/reinstalling fixed releases then a rolling release distro is better for you.
for such a user friendly distro, you'd think they'd test it to make sure that upgrading works.
hell, it should at least have a recovery mode so the system can try and fix itself.
I've always had an issue with that: you can't expect the user to know how something works. having troubleshooting and repair functions in an OS will at least give the user a friendly way of trying to fix it.
maybe one day mint will add it, but I'm not holding my breath.
 
Indeed. When I upgraded Mint versions once, it made my system unbootable. I forget what the problem was, but it required me to boot with a live USB to repair the system. In the end I got it to work, but if you don't like dealing with upgrading/reinstalling fixed releases then a rolling release distro is better for you.
I always do a fresh install for every major LTS version. It helps keep stuff organized too.
 
for such a user friendly distro, you'd think they'd test it to make sure that upgrading works.
hell, it should at least have a recovery mode so the system can try and fix itself.
I've always had an issue with that: you can't expect the user to know how something works. having troubleshooting and repair functions in an OS will at least give the user a friendly way of trying to fix it.
maybe one day mint will add it, but I'm not holding my breath.
It was actually due to something I had changed myself. I forget what it was, but most people shouldn't have the problem. My point was more that upgrade tools like that sometimes don't take all the edge cases into account.
 
It was actually due to something I had changed myself. I forget what it was, but most people shouldn't have the problem. My point was more that upgrade tools like that sometimes don't take all the edge cases into account.
Which is why I like simpler distros and package managers like the system Arch uses. They usually don't mess with major configurations because they barely configure things for you to begin with. But that comes with its own problems: I like to micromanage my setup, so that works for me, but it's understandably a little hard to keep things predictable with a distro for beginners.

Rolling release systems like Tumbleweed and Gentoo have the additional advantage that anything that breaks will be relatively small compared to the changes made in an LTS release, which may fuck the whole system and warrant reinstalling if you're really unlucky.

These aren't "better" but rather I prefer them for this reason. LTS and release schedules make a lot of sense for short-term stability and are more reliable in the long term too if your upgrade path is done correctly. I wouldn't trust Arch with my servers or a work laptop for example, I would pick Fedora for something like that. But for my main system where I'm messing with bootloaders and there's more fragile stuff like sound and graphics drivers and HIDs, I use a system that I continuously take responsibility for maintaining as I use it because it's easier to keep up with, instead of "I installed one update and now I have to nuke everything or I don't have a PC for a week".
 
Was just dicking around in Python today:
Python:
import re
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

with open('invalid_users') as f:
    lines = f.readlines()

# One session with retries, set up once and reused for every lookup
# instead of being rebuilt on each iteration.
session = requests.Session()
retry = Retry(connect=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

for line in lines:
    m = re.search(r'Invalid user (?P<username>\S+) from (?P<ip_address>\S+)', line)
    if m is None:
        # Skip malformed lines (e.g. blank usernames) instead of crashing.
        continue
    ip_address = m.group('ip_address')

    response = session.get(f'https://geolocation-db.com/json/{ip_address}&position=true').json()
    print(response.get('country_name'))
invalid_users is a file that I made on my Digital Ocean Droplet from concatenating all the /var/log/auth* files and selecting first only the lines that contain "sshd" and then in a second round that go "Invalid user blahblahblah". You can easily achieve the same outcome with cat and grep but may have to alter or get rid of a few lines where the username is blank or contains whitespace. You might ask why I didn't use the username capture and didn't just get the IP addresses. I might analyze those in the future. Right now I'm just concerned with where these requests are coming from. There are a lot of the usual suspects (e.g. India, Huezil) but I'm even seeing a few surprises like some nigger from Angola that just went by on the terminal.
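
For anyone who wants to reproduce the file, here's a rough Python equivalent of that cat/grep pass; it assumes the rotated logs are still plain text (the gzipped rotations would need gzip.open instead):
Python:
import glob

# Two-stage filter: keep sshd lines, then only the "Invalid user" ones.
with open('invalid_users', 'w') as out:
    for path in sorted(glob.glob('/var/log/auth*')):
        if path.endswith('.gz'):
            continue  # rotated gzip logs would need gzip.open instead
        with open(path, errors='replace') as f:
            for line in f:
                if 'sshd' in line and 'Invalid user ' in line:
                    out.write(line)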

In a period spanning a little over three weeks there were over 53,000 such failed attempts.
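
Since the script makes one API call per log line, a cheap local tally first would cut 53,000 requests down to a handful; a sketch using the same regex:
Python:
import re
from collections import Counter

pattern = re.compile(r'Invalid user \S+ from (?P<ip_address>\S+)')
with open('invalid_users') as f:
    counts = Counter(
        m.group('ip_address') for line in f if (m := pattern.search(line))
    )

# Only geolocate the ten worst offenders instead of every single line.
for ip_address, hits in counts.most_common(10):
    print(f'{ip_address}: {hits}')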
 
Was just dicking around in Python today: [...] In a period spanning a little over three weeks there were over 53,000 such failed attempts.
These are just computing devices that weren't secure (i.e. 95% of consumer routers). That Angolan IP is probably the one that 10,000 niggers running Windows XP are on beyond some kind of shitty carrier-grade NAT. As @AmpleApricots says, if you don't want to see the fuzz, change the port to something non-default, at least 10,000+, better yet 49,000+, and install fail2ban if you want to see it cleaned up automatically.

If I ever get rich enough to waste that much money a month on a 'stresser' with an API (rather than a webpage with a captcha), I would set up fail2ban to call that as well as just adding an iptables block. Although if that were common practice, it'd be rather open to abuse.
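
For anyone following along, the boring version of that advice is only a few lines of config; a minimal sketch assuming stock paths, with an arbitrary high port (pick your own):
Code:
# /etc/ssh/sshd_config — move sshd off 22; the number here is arbitrary
Port 49222

# /etc/fail2ban/jail.local — ban repeat offenders automatically
[sshd]
enabled  = true
port     = 49222
maxretry = 3
bantime  = 1h

Restart sshd and fail2ban afterwards; fail2ban-client status sshd shows the current ban list.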
 
Which is why I like simpler distros and package managers like the system Arch uses. They usually don't mess with major configurations because they barely configure things for you to begin with. But that comes with its own problems, I like to micromanage my setup so that works for me but it's understandably a little hard to keep things predictable with a distro for beginners.
Doesn't Arch effectively force you to either manually compile your apps or to solely use their repository and only their repository? It seems like upgrading your OS is the one thing it won't have issues with.
 
Doesn't Arch effectively force you to either manually compile your apps or to solely use their repository and only their repository?
You could use extra repositories in theory, but with Arch it becomes impractical because it's a rolling release distro, which is unstable by definition. The packages are continuously updated, so it becomes risky to try using packages from other repositories.
 
You could use extra repositories in theory, but with Arch it becomes impractical because it's a rolling release distro, which is unstable by definition. The packages are continuously updated, so it becomes risky to try using packages from other repositories.
I've never used Arch or derivatives, but your post reminded me of experimenting a bit with FreeBSD and its Ports system. I am not passing judgment on FreeBSD in general, but I came away with the opinion that Ports blows, and it appears others much better acquainted with said OS often have similar attitudes.
 
Doesn't Arch effectively force you to either manually compile your apps or to solely use their repository and only their repository? It seems like upgrading your OS is the one thing it won't have issues with.
Yes/no.
The AUR provides PKGBUILDs for you to make your own package. It's heavily built around building from source so that's what most developers do, but obviously for proprietary software that's not possible, so there's nothing stopping the package maintainer from simply including the binary or a download script in the PKGBUILD instead. You can also add any repo you want to pacman.conf, so if the developer provides an HTTP server with the binaries on it, you can just point it there and skip compiling from AUR entirely just like you would on Ubuntu.

There are also third party repos such as chaotic-aur that routinely precompile some of the most popular AUR packages for you, as well as AUR helpers which can automate the process of cloning repos and building packages should you find it annoying. Yay is by a long shot the most common AUR helper, because it just works and it's very familiar to anyone who already knows how to use pacman; in fact it can even act as a frontend to pacman entirely. You can tell yay to install packages from core or extra and it will do it.

Recompiling packages from source and manually updating them is possible, and it's the intended way to do things out of the box if you just have git and none of the other tools. But the whole 9 yards is almost always automated in one way or another by most regular users and there are plenty of existing solutions to make it easy.
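
To make the pacman.conf route concrete, a third-party binary repo is just an entry like this; the repo name and URL are made up for the sketch:
Code:
# /etc/pacman.conf — repos are consulted in the order they appear;
# [example-repo] and the Server URL below are placeholders.
[example-repo]
SigLevel = Required DatabaseOptional
Server = https://repo.example.com/archlinux/$arch

After the next pacman -Syu, packages from it install like anything else, and yay passes repo packages straight through to pacman.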

You could use extra repositories in theory, but with Arch it becomes impractical because it's a rolling release distro, which is unstable by definition. The packages are continuously updated, so it becomes risky to try using packages from other repositories.
This is true for many system components, but in most cases the binary incompatibility is not going to be that bad. I can use the same build of Trinity desktop from 2+ months ago with the latest kernel and icu and it will run just fine.

It only really gets bad when, say, chromium and steam are out of sync, or perhaps nvidia and ffmpeg. If you're using external repos to download binaries for programs like davinci-resolve or multimc without a massive laundry list of sensitive and specific dependencies, then it will probably go a long while before having major problems.
 
This is true for many system components, but in most cases the binary incompatibility is not going to be that bad.
Either way, if a program needs specific library versions, one can use patchelf to set the appropriate interpreter and then bundle the appropriate shared libraries alongside it. Mind you, this is mostly for closed-source programs that expect to find a very specific environment, like Ubuntu 22.04 for example.
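
Something like this, as a rough sketch (the binary and loader paths are placeholders):
Bash:
# Point a prebuilt binary at a bundled dynamic loader and library directory;
# /opt/someapp and some-binary are made-up names for the example.
patchelf --set-interpreter /opt/someapp/lib/ld-linux-x86-64.so.2 /opt/someapp/some-binary
patchelf --set-rpath '$ORIGIN/lib' /opt/someapp/some-binary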
 
Doesn't Arch effectively force you to either manually compile your apps or to solely use their repository and only their repository? It seems like upgrading your OS is the one thing it won't have issues with.
It's literally no different from any other Linux distro. It has a big repo with a bunch of stuff you can use, or you can use Flatpaks, Snaps, or AppImages if you want.

But it also has the AUR, which was mentioned, where a lot of stuff is source-based; it has things that are a pain to get working on other distros or that don't exist on other distros at all, plus the Chaotic-AUR, which was already explained.

Really, the main difference between Arch and anything Debian/Ubuntu-based is that Arch's packages aren't years out of date. Compiling your packages really doesn't have anything to do with it.
 