Diseased Open Source Software Community - it's about ethics in Code of Conducts

That's a nontrivial amount of space, probably 20-30% of your log file without compression.
Sane people just routinely delete logs with a crontab entry anyway, so it's 20-30% of nothing. And if for some reason you're intensely concerned with keeping logs forever, it's about as easy just to have a crontab entry to zip them and move them somewhere.
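For illustration, a sketch of the kind of crontab entries meant here, written as plain cron config (the paths and retention periods are made-up placeholders, not a recommendation):

    # Hypothetical: every day at 03:00, delete application logs older than 30 days.
    0 3 * * * find /var/log/myapp -type f -name '*.log' -mtime +30 -delete
    # Or, if you want to keep them: weekly, gzip week-old logs and move the archives elsewhere.
    0 3 * * 0 find /var/log/myapp -name '*.log' -mtime +7 -exec gzip {} \; && mv -f /var/log/myapp/*.log.gz /srv/log-archive/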
People piling on someone who wants to know what the problem is with binary logs truly is this generation's version of boomers complaining about the inept next generation without actually teaching them anything.
Ok zoomer.
 
I basically never read most of the logs that are generated anyway. I only ever look into them if there's some problem. I've transferred logs from old systems when I copied my Linux install over to a new one, and until rather recently the oldest of these logs still had references to ISA and floppy drive controllers. I could still read them after all these years and figure this out because they were in text, not a binary format that might've changed a dozen times since then. And as you can guess from the age of these logs, text files do not take up a lot of space.

By the time I read a log, there's a real possibility that the problem arose through hardware failing, and that the logs themselves were impacted somehow. That's why logging systems probably should not be delicate to begin with. If you really want to know what happened to a system that failed, it might make sense not to log to that system in the first place (as it might become inaccessible) but to send the logs to a different system as they are generated.

This is easy to do with text; all of *nix userspace is geared towards dealing with text streams. With a text stream, I also don't need to make any assumptions about the tooling of the receiving system, like I said earlier. Even 40-year-old computers can handle (ASCII-encoded) text, and even in that highly hypothetical scenario, a conversion from UTF-8 would be trivial and compatible in most cases. Hell, nowadays some microcontrollers are more than capable of handling flash storage in all its forms and all kinds of network and wireless stacks. You could log text to an ESP32 via Bluetooth serial, onto its internal flash or an external flash chip it's connected to.
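As a sketch of how little machinery that takes: a minimal Rust program that forwards newline-delimited text from stdin to a TCP socket. The collector address is a made-up placeholder, and anything that accepts a TCP text stream (even nc -l 5140 on the far end) would do.

    use std::io::{self, BufRead, Write};
    use std::net::TcpStream;

    fn main() -> io::Result<()> {
        // Hypothetical collector address; the receiving side needs no special tooling.
        let mut sink = TcpStream::connect("loghost.example:5140")?;
        let stdin = io::stdin();
        for line in stdin.lock().lines() {
            // Forward each log line verbatim. The only "format" both ends must
            // agree on is newline-delimited text, which is the whole point.
            let line = line?;
            sink.write_all(line.as_bytes())?;
            sink.write_all(b"\n")?;
        }
        Ok(())
    }

Then pipe anything into it, e.g. tail -f /var/log/messages | ./logship.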

Text is just really versatile. There's little point in complicating this. I can only see disadvantages, and the advantages don't exactly jump out at me. The reason systemd does this is obviously vendor lock-in. Why should I go along with that? Stop being a corporate stooge and instead support paradigms that are actually open and not beholden to a single project/developer.
 
People piling on someone who wants to know what the problem is with binary logs truly is this generation's version of boomers complaining about the inept next generation without actually teaching them anything.
Dude he opened with the most gay and retarded opinion possible, not a question, and got a variety of explanations even in spite of that. I don't know what you feel would have been ideal if not that, seems to me its better to call a nigger a nigger instead of perpetuating an insane delusion.

2024 join dates with anime avatars are the worst combination.
Seriously, it's actual brain rot.
 
Why are linux spergs having a dick sucking contest RN?
"Poetering is a nigger and should burn in hell, he isn't FOSS friendly and made GNOME and systemd BTW o algo also something something redhat"
"I can run commands on my machine to read binary files, you can't so you're a cringe noob lazy windowtoddler who's scared of the terminal o algo"

Bitch STFU unironically. Bring some more funny to the thread plz kthx.
 
https://daniel.haxx.se/blog/2024/12/21/dropping-hyper/ - Daniel Stenberg, the benevolent dictator of Curl has DROPPED their HTTP/1 rust backend using Hyper. The reason? Rust-trannies couldn't get into C to utilize libcurl. Even when he "co-operated intensely" with Sean McArthur (The maintainer+lead dev of Hyper) there just wasn't any interest from his own devs to learn Rust nor the wherewithal from Rust-trannies to get to grips with C. https://archive.ph/JQebv

95% of the work is the easy part

I mean that we took it perhaps 95% of the way and almost the entire test suite ran identically independently of which backend we built curl to use. The final few percent would however turn out to be friction enough to now eventually make us admit defeat, give up and instead yank it all out again.

There simply were no users asking for it and there were almost no developers interested or knowledgeable enough to work on it. libcurl is written in C, hyper is written in rust and there is a C binding glue layer in between. It takes someone who is interested and good at both languages to dig in, understand the architectures, the challenges and the protocols to drive this all the way through.

But with no user demand, why do it?

It seems quite clear that rust users use hyper but few of them want to work on making it work for a C project like curl, and among existing curl users there is virtually no interest in hyper. The overlap in the Venn diagram of the two universes is not big enough.

With no expectation of seeing this work completed in the short to medium length term, the cost of keeping the hyper code is simply deemed too high. We gain code agility and reduce complexity by trimming this off.
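For anyone wondering what a "C binding glue layer" actually involves, here's a minimal illustrative sketch, and only that: it is not hyper's or curl's real API, just a made-up function showing the shape of the boundary, a Rust function exported with the C ABI so C code can call it through an ordinary header declaration.

    use std::ffi::CStr;
    use std::os::raw::{c_char, c_int};

    // Hypothetical function: the C side sees `int handle_request(const char *path);`.
    #[no_mangle]
    pub extern "C" fn handle_request(path: *const c_char) -> c_int {
        // Every crossing of the boundary needs this kind of pointer and
        // encoding validation, which is exactly the friction described above.
        if path.is_null() {
            return -1;
        }
        let path = unsafe { CStr::from_ptr(path) };
        match path.to_str() {
            Ok(p) if p.starts_with('/') => 0, // pretend success
            _ => -1,                          // reject invalid input
        }
    }

Someone has to be comfortable on both sides of that boundary to maintain it, which is the Venn-diagram problem the post describes.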
 
https://daniel.haxx.se/blog/2024/12/21/dropping-hyper/ - Daniel Stenberg, the benevolent dictator of Curl has DROPPED their HTTP/1 rust backend using Hyper. The reason? Rust-trannies couldn't get into C to utilize libcurl. Even when he "co-operated intensely" with Sean McArthur (The maintainer+lead dev of Hyper) there just wasn't any interest from his own devs to learn Rust nor the wherewithal from Rust-trannies to get to grips with C. https://archive.ph/JQebv
Now that you mention it, I never really see programmer troons use c or c++
 
Damn, this thread moved too quickly. Excuse me for responding to some older posts.

So, all of this IPv6 discussion reminded me of an article I read recently, and this thread seemed like the most appropriate place to mention it:
https://blog.infected.systems/posts/2024-12-01-no-nat-november/ (archive)

It's No NAT November. Get it, guys? It's just like No Nut November. I'll admit that's a clever title, and also very appropriate, since disabling IPv4 is about as meaningful and productive as not masturbating for a month for no reason at all. Surprise, surprise: it doesn't work. By the second day, he started using NAT64, just hosted by someone else. IPv6 is a joke.
One of the cornerstones of the original Internet Protocol design was to have a flat, global address space.
In order to do that, you need global, reachable* addresses for every device connected to the Internet.
I'm reminded of something David Clark wrote in Designing an Internet. While that was one of the original design assumptions, the creation of NAT proved it unnecessary. The Internet is a complex system that assumes very little, and that's one assumption that was later discarded. Any future Internet must take these kinds of hard lessons into account. It's a good book; I recommend it. NAT isn't solely a bad thing. I believe he calls it coerced delivery when the design of the network segment forces traffic through a certain point, even when the sender would prefer otherwise. I don't have my copy of the book at my side right now, but topological delivery is a similar concept. He creates a whole three-dimensional model to describe this kind of thing.
What is actually wrong with this?
There's nothing wrong with non-textual logs; I refuse to call them binary logs. UNIX in particular does absolutely nothing to distinguish text from any other kind of data. Now, UNIX programmers are supposed to love when programs shit into each other's mouths, forming an inhuman centipede, but for some reason this isn't good enough when it comes to logs. Notice the cries of bit-flips and other excuses that start cropping up when they need one. It's common sense for data to have a form suited to machines plus the ability to convert that data to a form suited to humans, which is necessarily easier than the other way around, but UNIX programmers lack common sense.
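To make that last point concrete, a quick Rust sketch of the "machine form plus a converter" idea; the record layout here is entirely made up and has nothing to do with journald's actual format:

    use std::fmt;

    // Hypothetical machine-suited record: fixed fields, easy for programs to filter and sort.
    struct LogRecord {
        timestamp: u64, // seconds since the epoch
        severity: u8,   // 0 = info, 1 = warn, anything else = error
        message: String,
    }

    // The human-suited form is just a rendering of the machine form.
    // Going this direction is easy; parsing free-form text back out is the hard one.
    impl fmt::Display for LogRecord {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            let sev = match self.severity {
                0 => "INFO",
                1 => "WARN",
                _ => "ERROR",
            };
            write!(f, "[{}] {}: {}", self.timestamp, sev, self.message)
        }
    }

    fn main() {
        let rec = LogRecord { timestamp: 1_700_000_000, severity: 1, message: "disk nearly full".into() };
        println!("{rec}"); // prints: [1700000000] WARN: disk nearly full
    }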
 
IPv6 is a joke.
I read the whole thing and found it very interesting. I might implement 464XLAT and see how it goes. My network is already dual-stack, but I don't use any of the transitional technologies.

As for your comment, there's nothing in there that's a fundamental issue. It's vendors and services not giving a fuck about IPv6. Though mainly he gets felted by Windows being shit, Linux having half-baked implementations (shocking) and chinky network hardware not giving a shit about IPv6 at all.

The compromise of keeping IPv4 around and letting compatible devices elect to go v6-only through a DHCPv4 option (RFC 8925's "IPv6-only preferred" option) is cool, and I had no idea that was a thing.

It's a joke that network admins don't take the time to learn how to at least get functioning IPv6 on public-facing networks. It's literally their job to do this, and they simply can't be fucked unless someone orders them to figure it out.
 
As for your comment, there's nothing in there that's a fundamental issue. It's vendors and services not giving a fuck about IPv6.
Yeah, but the Internet is the kind of technology where all of the details matter a hell of a lot less than the fact that everyone agrees on them, which makes apathy a crippling issue.
It's a joke that network admins don't take the time to learn how to at least get functioning IPv6 on public-facing networks. It's literally their job to do this, and they simply can't be fucked unless someone orders them to figure it out.
I agree, but Shame as a Service (archive) fixes nothing. As it stands, I'm halfheartedly interested in making my networking libraries work with IPv6. I think it would be neat to represent the addresses as eight hextets rather than sixteen octets, since that's how they're written, but it's work for no gain. I had a problem with IPv6 fixed by disabling it, and this is a really common story.
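For what it's worth, Rust's standard library already exposes both views, so a sketch of the hextet-versus-octet distinction costs nothing (the address is just a documentation-range example):

    use std::net::Ipv6Addr;

    fn main() {
        let addr: Ipv6Addr = "2001:db8::8a2e:370:7334".parse().unwrap();
        // Sixteen octets: how the address travels on the wire.
        let octets: [u8; 16] = addr.octets();
        // Eight hextets: how the address is written and read by people.
        let hextets: [u16; 8] = addr.segments();
        println!("{octets:?}");
        println!("{hextets:x?}");
        println!("{addr}"); // Display already uses the hextet form
    }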
 
Now that you mention it, I never really see programmer troons use c or c++
When being contracted for security audits on codebases, I have found it to be completely the opposite, that the C and C++ codebases have far more troons working on them than rust codebases, it seems for every 5 to 10 C++ codebases I have to interact with one, and most companies that do any kind of "embedded" anything will have one that I have to interact with.
I have audited way more C++ and C codebases than I have rust, so that might be due to sampling bias, but for rust codebases, it has almost always been Chinese expats that I have interacted with, and not the square-headed consent accident kind.

In my opinion, the "rust tranny" meme is almost entirely an online thing, and it appears to be more of a cultural filter than anything else (to filter you out, chud). It clearly works since lots of people hate it purely because of that and not for any real technical reason, or where technical reasons are raised, they are usually done so after either directly complaining about online troonism, or indirectly at "woke" "authoritarians", or in other words, after they have decided they don't like it, post-hoc reasoning. There are still genuine technical criticisms to be had, but they are obscured by the sperging about troons. The online tranny phenomenon is very loud and exists across the entirety of the "open source" software world, it is not unique to rust at all. I'd say there are way more "infosec" troons out there than there are rust troons from what I can see being in both the software and "infosec" (I hate that term) industries.
 