Diseased Open Source Software Community - it's about ethics in Codes of Conduct

Two words: binary logs. Burn in Hell Poettering.
What is actually wrong with this? As long as you can export them as text, binary logs allow you to save a lot of space and easily filter/color them during display. I use them in my own software for that reason.
Is there a reason
Code:
journalctl [options] > file
doesn't work for you?
 
What is actually wrong with this? As long as you can export them as text, binary logs allow you to save a lot of space and easily filter/color them during display. I use them in my own software for that reason.
Is there a reason
Code:
journalctl [options] > file
doesn't work for you?
Yeah, because that is mind-bendingly retarded.

You expect me to believe in a world where games waste tens and hundreds of gigabytes that I give half of a rip about binary log size vs text log size? What are the savings, 10%? Text is already quite condensed, plus it mostly stores text anyways. The best he is doing is avoiding a formatted date string. What if his rattling jury-rigged pile of crap corrupts the log and nothing else can read it?
 
You expect me to believe in a world where games waste tens and hundreds of gigabytes that I give half of a rip about binary log size vs text log size?
Yes? Why should the logs be bloated? "Because shitty video games are bloated" is a bad argument.
What are the savings, 10%? Text is already quite condensed, plus it mostly stores text anyways.
Given that it's also compressed (never checked for systemd, but I assume it is), the savings are probably >80%, with the actual size varying by log content. Most log lines repeat themselves, which lets compression do a lot.
The best he is doing is avoiding a formatted date string
That's a nontrivial amount of space, probably 20-30% of your log file without compression. With compression that number is probably not more than a couple percent. Given the dates are also encoded as something other than a string, probably a very small fraction of the log.
What if his rattling jury-rigged pile of crap corrupts the log and nothing else can read it?
If compressed, there's no difference. Ideally your filesystem doesn't corrupt it.

Personally I've never seen the journal break, nor had any issue reading out information from it. Has this actually happened to you in a case that wasn't a shitty hard drive with no backup failing?
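The compression claim is easy to sanity-check with standard tools. A rough sketch, assuming a made-up but typical repetitive syslog-style line and an arbitrary 10,000-line sample; real ratios depend on your actual log content:

```shell
# Generate 10,000 near-identical syslog-style lines (hypothetical sample data).
for i in $(seq 1 10000); do
  echo "Jan 01 12:00:0$((i % 10)) host sshd[1234]: Accepted publickey for user from 192.0.2.1"
done > sample.log

# Compress a copy and compare the sizes side by side.
gzip -c sample.log > sample.log.gz
ls -l sample.log sample.log.gz   # the .gz is typically a tiny fraction of the original
rm -f sample.log sample.log.gz
```

On highly repetitive logs like this, gzip routinely shrinks the file by well over 90%; messier logs compress less, but usually still far more than the 10% guessed above.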
 
Given that its also compressed
Well, zipping is completely standardized and quite common in this situation, so that's a bit of a disingenuous argument

That's a nontrivial amount of space, probably 20-30%
You think date strings would be 20-30% of the logs?

Has this actually happened to you?
"Has it broken for you personally? No?? THEN THAT MEANS ITS GOOD AND YOU HAVE TO USE IT"

Severely anime brained
 
Well, zipping is completely standardized and quite common in this situation, so that's a bit of a disingenuous argument
And in this case your filesystem corruption argument doesn't work either.
You think date strings would be 20-30% of the logs?
Of raw text logs formatted the same as the journalctl default? Most likely. Of compressed logs, far less, as I mentioned.
"Has it broken for you personally? No?? THEN THAT MEANS ITS GOOD AND YOU HAVE TO USE IT"
I never said that. I'm only asking if you have actually had bad experiences with it, or if you have any reason to dislike the journal other than "systemd le bad".
Severely anime brained
Not an argument.
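A quick back-of-the-envelope check on that 20-30% figure, using a made-up but typical classic-syslog-format line (the hostname, process, and message are invented):

```shell
# Hypothetical example line in the classic syslog text format.
line='Jan 01 12:00:00 myhost sshd[1234]: Accepted publickey for user from 192.0.2.1'
stamp='Jan 01 12:00:00 '
# Percentage of the line occupied by the formatted timestamp (integer math).
echo "$(( ${#stamp} * 100 / ${#line} ))%"   # roughly 20% for this line
```

Shorter messages push the timestamp's share higher, longer ones push it lower, so 20-30% of raw text is plausible for terse logs.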
 
And in this case your filesystem corruption argument doesn't work either.
This is both disingenuous and retarded. What I originally said plainly reads as the tool corrupting the log and not being able to decode it (due to it being based on Poettering's crappy homemade binarization), not the log magically becoming corrupted on its own.
Not an argument.
It's more of an observation that you are probably mentally crippled
 
See, this is both disingenuous and retarded, because what I originally said plainly reads as the tool corrupting the log and not being able to decode it (due to it being based on Poettering's crappy homemade binarization), not the log magically becoming corrupted on its own.
So your only argument is "maybe the tool that has been proven to work doesn't work" because you don't like poettering? Disliking him is fine but this is a dumb argument.
It's more of an observation that you are probably mentally crippled.
I'm observing you lack a good argument.
 
So your only argument is "maybe the tool that has been proven to work doesn't work" because you don't like poettering? Disliking him is fine but this is a dumb argument.
Your failure to parse my statements has entered the territory of dishonesty, rendering further conversation pointless, so I shall leave you with a soyjak

you:
[attached image: angry crying soyjak with American flag, glasses, and microphone]

"YEAH WELL... MY MADE UP VERSION OF WHAT YOU SAID IS STUPID! NO, DONT GO! IM RIGHT!!!"
 
binary logs are bad because they lock you in. If a program uses binary anything, you need tools from that program suite to process it. Because we are talking about systemd and Red Hat's/IBM's lock-in fetishism, that is not surprising. A text log can be interpreted by anything with no additional tools. vim, emacs, Turbotext on my Amiga 600, it doesn't matter. No additional tool required. No systemd installation required. No custom-written program of my own, built against a spec that might not be the same next month, required. Text is also more resilient to corruption and bit flips. If a few bits in a text file are flipped, you can still read and understand the text. If it happens to a binary file, the program interpreting it might not be able to interpret it correctly anymore. And now stop bickering like a bunch of 4channers.
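The resilience point can be demonstrated with ordinary tools. A sketch, with gzip standing in for "a binary format" and truncation standing in for corruption; the filenames and contents are made up:

```shell
printf 'line one\nline two\nline three\n' > plain.log
gzip -c plain.log > packed.log.gz

# Flip one byte of the text log: the rest stays readable.
printf 'X' | dd of=plain.log bs=1 seek=5 conv=notrunc 2>/dev/null
grep 'line two' plain.log          # still finds the untouched lines

# Chop the tail off the binary log: the integrity check now fails.
head -c 20 packed.log.gz > corrupt.log.gz
gunzip -t corrupt.log.gz || echo 'binary log unreadable'
```

The damaged text file loses one word; the damaged binary file fails wholesale, which is the trade-off being argued about.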
 
Your failure to parse my statements has entered the territory of dishonesty, rendering further conversation pointless, so I shall leave you with a soyjak

you:
[attachment: soyjak image]
"YEAH WELL... MY MADE UP VERSION OF WHAT YOU SAID IS STUPID! NO, DONT GO! IM RIGHT!!!"
Continue coping without an argument.
binary logs are bad because they lock you in. If a program uses binary anything, you need tools from that program to process the logs. Because we are talking about systemd and Red Hat's/IBM's lock-in fetishism, that is not surprising. A text log can be interpreted by anything with no additional tools. vim, emacs, Turbotext on my Amiga 600, it doesn't matter. No additional tool required. No systemd installation required. No custom-written program of my own, built against a spec that might not be the same next month, required. Text is also more resilient to corruption and bit flips. If a few bits in a text file are flipped, you can still read and understand the text. If it happens to a binary file, the program interpreting it might not be able to interpret it correctly anymore. And now stop bickering like a bunch of 4channers.
Look! A reasonable argument!
- I want to read logs on a system without systemd
- Its logging spec isn't standardized and I want to parse it with some other tool
Those are reasonable enough. While not issues for me, they may be for you, though in most cases I would just read them out with journalctl > textfile.

As for bit flips, as far as I am aware, any form of compression will be less resistant to bitflips than raw text. Fair enough when comparing plain text logs to binary ones though.
 
Binary logs wouldn't be an issue if Unix had a sensible format to pass data around instead of text files; in fact, if that had been the case, they would have been preferable, as everything would have been consistent. However, since that's not the case, you really don't gain much from them at the expense of needing to wrangle the logs in a special manner. Things like compression and faster lookup could have been solved through a VFS that compresses data mounted on /var/log, and some indexing service run from cron, respectively, while keeping the log files transparent.
 
journalctl > textfile.
Yes, and then you use grep to identify the software and hope nothing else's output looks like what you're searching for, or you journalctl each specific service into a separate text file, and oh no, you have just reinvented regular rsyslog or whatever else logs to files.

Or, you know, just use rsyslog directly and save yourself the hassle?

This is why people use docker

Separation of concerns is great because if it's not just one monolithic project there's a higher chance actually useful and logical features will make it in. Journald should just have normal text-file logging that's on by default, and make its retarded binary logging an opt-in feature.
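For what it's worth, journald can already be told to hand everything to a classic syslog daemon and keep nothing persistent itself. A journald.conf fragment along these lines (both are documented journald settings, though the exact policy here is just an example):

```ini
# /etc/systemd/journald.conf (excerpt)
[Journal]
ForwardToSyslog=yes   # pass every message on to rsyslog/syslog-ng
Storage=volatile      # keep the binary journal in RAM only, nothing on disk
```

That gets you plain text files in /var/log via rsyslog while the binary journal stays ephemeral.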
 
If you need to browse your log so often that a text file is not enough, then you should be logging to a database. If your text logs take up too much space, delete them, or if you really need to keep them, compress them. Plain text compresses pretty well by default (~6 bits per byte of entropy) and you can use any compressor to do it, for example gzip or xz. Gzip is a good choice because there are already a lot of standard tools for processing gzipped text, like gzcat, gzgrep and so on, and if you don't have a gz-version of your tool you can just pipe the gunzip output to a regular tool. And a .gz file is a standard file; the compressor is so ubiquitous you can open it on almost anything.
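The gzipped-text workflow described above, as a quick sketch. The log content and filenames are invented; zgrep/zcat are the usual GNU spellings of the tools (gzgrep/gzcat on some BSDs):

```shell
# Invented sample log, then compress it in place (leaves app.log.gz).
printf 'Jan 01 sshd[1]: error: disk full\nJan 01 cron[2]: job ok\n' > app.log
gzip app.log

zgrep 'error' app.log.gz             # grep straight through the compressed file
zcat app.log.gz | awk '{print $3}'   # or pipe the decompressed text into any ordinary tool
```

No special reader required: anything that accepts text on stdin works on the zcat output.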

I dislike the logging of SystemD already, and I do not wish to deal with a custom format on top of it, which is specific only to SystemD and which might break later because they seem to make up development decisions based on the moon phase.

And I already have half of the shit in journalctl and the other half in /var/log, because journalctl has the systemd logs, while /var/log, with /var/log/syslog in the first place, has the logs of the software that I am actually interested in.

Most logs are text files (I am not counting Windows systems here); there is really no reason to process them as something else.
 