Diseased Open Source Software Community - it's about ethics in Code of Conducts

For something like the FSF to have any chance of relevance, I think it needs both a ridiculous ideologue to evangelize, and a charismatic figurehead to get checks signed. Why they don't just make Stallman the High Techno Priest for conferences and talks, and get someone shiny to interface with the business world, is beyond me. Either that board truly loves and values RMS, which they may, or he has some dirt. Or lint.
95 percent of FSF software is either outdated junk or stuff that works well enough. Most of it is clunky as fuck and ancient in design but it works for most people and is still used mostly due to momentum.

GCC, both the compiler and the larger ecosystem, was the FSF's primary claim to real-world relevance. Stallman designed GCC as a GNU/FSF lock-in for the open source world. Once Clang/LLVM matured, that lock-in was nullified and the importance of the FSF to the open source world was greatly diminished. Clang/LLVM didn't mean that FSF software was no longer needed - it meant that GCC was no longer irreplaceable.

An almost completely FSF/GNU-free Linux system is fairly easy to run today. You can already run a BSD-type userland along with the complete Clang/LLVM toolchain, and most people wouldn't know the difference apart from various GNU-specific command line options or the details of how binaries are built.
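
If you want to see one of those GNU-specific differences for yourself, here's a trivial illustration (my own sketch, not from either toolchain's docs): nested functions are a GNU C extension, so gcc compiles this while clang rejects it.

/* nested.c - nested functions are a GNU C extension.
   gcc nested.c    -> builds, prints 2
   clang nested.c  -> error: function definition is not allowed here */
#include <stdio.h>

int main(void) {
    int counter = 0;
    void bump(void) { counter++; }  /* nested function: GNU C only */
    bump();
    bump();
    printf("%d\n", counter);
    return 0;
}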

No one in the open source world looks to the FSF for anything other than ideological leadership these days. Whether Stallman retains control or not is not going to have any practical effect on the open source world, outside of bragging rights for the GNU votaries or the nutcase SJWs.
 
Plus it's the land of Ilhan Omar and that oddly large Somali community. What's going on in Minnesota!? Thanks for the additional context on the school.
Minnesota is known as the preferred destination for the original snow niggers so the Somalians must have gotten confused on their way to Sweden, they're not a very bright people.
 
The Linux kernel is one of the largest software projects in modern history, with a gigantic 28 million lines of code.


Contributors from all over the world and from different fields submit a large number of patches to the Linux kernel maintainers each day, so that they can be reviewed before being merged into the official Linux kernel tree.


These patches could help fix a bug or a minor issue in the kernel, or introduce a new feature.


However, today the Linux kernel maintainers caught some contributors trying to stealthily submit patches containing security vulnerabilities to the kernel.


Researchers from the US University of Minnesota were working on a research paper about the ability to submit patches containing hidden security vulnerabilities to open source projects, in order to scientifically measure the probability of such patches being accepted and merged, which could make those projects vulnerable to various attacks.


They used the Linux kernel as one of their main experiments, due to its well-known reputation and adoption around the world.


These researchers submitted patches which didn’t seem to completely fix the related issues in the kernel, but also didn’t appear at first glance to introduce a security vulnerability. A number of the patches they submitted were indeed successfully merged into the Linux kernel tree.
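
To make the pattern concrete, here's a hedged userspace sketch of what such a patch can look like; the names and the scenario are invented for illustration, not taken from the actual UMN submissions:

/* Hypothetical userspace analogue of a "fix" that plants a bug
   (all names invented for illustration). The error-path free()
   looks like a leak fix, but the pointer is left dangling and
   the caller's teardown frees it a second time. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct device { char *buf; };

static int dev_setup(struct device *dev, const char *cfg) {
    dev->buf = malloc(64);
    if (!dev->buf)
        return -1;
    if (strlen(cfg) >= 64) {
        free(dev->buf);   /* the "fix": don't leak on the error path...  */
        return -1;        /* ...but dev->buf = NULL is missing, so the   */
    }                     /* caller's cleanup below double-frees it      */
    strcpy(dev->buf, cfg);
    return 0;
}

static void dev_teardown(struct device *dev) {
    free(dev->buf);       /* double free if dev_setup() took the error path */
}

int main(void) {
    struct device dev = { 0 };
    if (dev_setup(&dev, "deliberately longer than sixty-four characters to force the error path") != 0)
        fprintf(stderr, "setup failed\n");
    dev_teardown(&dev);   /* undefined behavior on the failure path */
    return 0;
}

The "fix" really does plug the leak, which is why it looks plausible in review; the missing dev->buf = NULL; is the whole vulnerability.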


However, today, they were caught by Linux kernel maintainers, and were publicly humiliated. In an email by Greg Kroah-Hartman, one of the major Linux kernel maintainers, their approach was disclosed and their so-called “newbie patches” were thrown under the bus:

You, and your group, have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work.
Now you submit a new series of obviously-incorrect patches again, so what am I supposed to think of such a thing?

Apparently, Greg and a number of other maintainers were not happy about this, as these experiments consume their time and effort and encourage people to engage in bad faith in Linux kernel development:

Our community does not appreciate being experimented on, and being “tested” by submitting known patches that are either do nothing on purpose, or introduce bugs on purpose. If you wish to do work like this, I suggest you find a different community to run your experiments on, you are not welcome here.

Finally, Greg announced that the Linux kernel will ban all future contributions from the University of Minnesota, and that all the patches they previously submitted are going to be removed from the kernel:

Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems.

The research paper they worked on was published back in February 2021, around two months ago. In the paper, they disclose the approach and methods they used to get the vulnerabilities inserted into the Linux kernel and other open source projects.


They also claim that the majority of the vulnerabilities they secretly tried to introduce to various open source projects were successfully inserted, at an average rate of around 60%:

It is still unclear at this moment what other open source projects they tried to hijack, and how many vulnerabilities they actually succeeded in inserting into them.


Greg has sent another email in which he reverts most of the University of Minnesota patches from the Linux kernel, and puts some of them on hold.
Alright, give me all of your puzzle pieces, because after reading the paper, I can only conclude that this article was either written by a fucking moron who failed to understand the paper, FAKE NEWS, or pure damage control. In all likelihood, it's all three.

Let's talk about the first issue that really sticks out even if you just skim the paper.
They also claim that the majority of the vulnerabilities they secretly tried to introduce to various open source projects were successfully inserted, at an average rate of around 60%:
[screenshot from the paper]

This is not successful insertion. This is the catch rate. In other words, this is the share of planted vulnerabilities that maintainers caught, as a percentage of the total. This is clearly explained by the paper:
[screenshot from the paper]
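
To put numbers on the distinction (hypothetical figures, purely for illustration): if 10 booby-trapped patches are submitted and maintainers catch 6 of them, then

    catch rate = 6 / 10 = 60%
    successful insertion rate = 1 - 0.60 = 40%

so a 60% catch rate means only 40% slipped through - the opposite of what the article claims.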

Nowhere do they claim that they successfully inserted vulnerabilities 60% of the time. In fact, they do not claim that they inserted any of the vulnerabilities in this data set. They do not state where they obtained the merged and blocked vulnerabilities in this portion of the paper:
[screenshot from the paper]

However, it's clear that they are not the source of the dataset because they state that they could only identify a limited number of "indirect-call" methods, and they couldn't be assed to spend the time finding more because of how fucking complicated it is. If they carefully crafted these exploits they should know how many of the fucking things exist.
[screenshots from the paper]
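
For anyone wondering why the "indirect-call" cases are such a pain to enumerate, here's a hedged sketch with invented names: the dangerous free is only reachable through a function pointer, so grepping for direct calls, or a simple static analysis that only follows direct calls, never sees the path.

/* indirect.c - illustration only, not from the paper.
   risky_free() is never called directly; it is reached through
   ops.cleanup, so a tool must resolve the function pointer to
   discover the dangling use below. */
#include <stdio.h>
#include <stdlib.h>

struct ops { void (*cleanup)(void *); };

static void risky_free(void *p) { free(p); }  /* leaves the caller's pointer dangling */

int main(void) {
    struct ops ops = { .cleanup = risky_free };
    char *buf = malloc(16);
    if (!buf) return 1;
    buf[0] = 'x';
    ops.cleanup(buf);          /* indirect call hides the free from grep */
    printf("%c\n", buf[0]);    /* use-after-free: reads freed memory */
    return 0;
}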

Earlier in the paper they discuss the types of authors who contribute patches and vulnerabilities, as well as the types of vulnerabilities and their breakdown:
[screenshots from the paper]

I do not know where the authors obtained this data, and that is the greatest fucking sin of this fucking paper. As an outsider, I don't know if Linux maintains a database of merged and blocked patches, but as an outsider I should be able to figure out where this fucking data came from. Likewise, I'm not entirely sure how the baseline values are calculated:
[screenshot from the paper]

Based on this paragraph, I think they calculated it from what the researchers were able to personally find through manual review and tool-based analysis alone, while the other categories required fuzzing to identify; but I'm trying to read this as an outsider and I may be wrong.

Now for the second issue, which really reeks of fucking damage control:
These researchers submitted patches which didn’t seem to completely fix the related issues in the kernel, but also didn’t appear at first glance to introduce a security vulnerability. A number of the patches they submitted were indeed successfully merged into the Linux kernel tree.


However, today, they were caught by Linux kernel maintainers, and were publicly humiliated. In an email by Greg Kroah-Hartman, one of the major Linux kernel maintainers, their approach was disclosed and their so-called “newbie patches” were thrown under the bus:

You, and your group, have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work.
Now you submit a new series of obviously-incorrect patches again, so what am I supposed to think of such a thing?

Apparently, Greg and a number of other maintainers were not happy about this, as these experiments consume their time and effort and encourage people to engage in bad faith in Linux kernel development:


Our community does not appreciate being experimented on, and being “tested” by submitting known patches that are either do nothing on purpose, or introduce bugs on purpose. If you wish to do work like this, I suggest you find a different community to run your experiments on, you are not welcome here.

Finally, Greg announced that the Linux kernel will ban all future contributions from the University of Minnesota, and that all the patches they previously submitted are going to be removed from the kernel:


Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems.
First, removing every patch submitted from a University of Minnesota email address sounds like a PR stunt to try to control the damage, mainly because the three patches the authors submitted were sent from Gmail addresses.
[screenshot of the patch submissions]


And none of the three patches were merged into the Linux kernel tree; they were changed after the maintainer confirmed them:
[screenshot of the patch review thread]

The entire phrasing just reeks of damage control. Maybe the UMN patches are shit, but in my opinion the big takeaways from the paper are troubling. There are a number of known potential exploits that could become full exploits as a result of a third party accidentally or maliciously inserting code that's missed by the Linux maintainers. And instead of accepting preventive patches that could fix them before they become full-blown exploits, the maintainers refuse to act until it becomes an actual problem. It's a very exploitable situation, and if someone hasn't exploited it already, it's because there is no fucking money in Linux exploits because it's only used by a subset of greasy neckbeards.
 

I do not know where the authors obtained this data, and that is the greatest fucking sin of this fucking paper. As an outsider, I don't know if Linux maintains a database of merged and blocked patches, but as an outsider I should be able to figure out where this fucking data came from.
Good catch. Others probably pointed out the lack of data references to the lead author too, as he's since published a full disclosure for the case study, at least (why he didn't just include everything as supplementary material from the start is baffling and bad practice).
 
For desktops, sure. For servers, mobile devices, embedded systems, and literally fucking everything else? There's Linux in them thar hills.
But do they get these kernel updates? I would imagine that the servers do, but what about the embedded systems?

Edit: Now, if only the SmartBulb bitcoin mine were real.
 
Someone totally not affiliated with the Russian government wants to stick Yandex and Google analytics into Audacity

Basic telemetry for the Audacity
This request provides the basic telemetry for Audacity.

To implement the network layer, libcurl is used to avoid issues with the built-in networking of wxWidgets.

Universal Google Analytics is used to track the following events:

  • Session start and end
  • Errors, including errors from the sqlite3 engine, as we need to debug corruption issues reported on the Audacity forum
  • Usage of effects, sound generators, analysis tools, so we can prioritize future improvements.
  • Usage of file formats for import and export
  • OS and Audacity versions
To identify sessions we use a UUID, which is generated and stored on the client machine.

We use Yandex Metrica to be able to estimate the daily active users correctly. We have to use a second service as Google Analytics is known to have some really tight quotas.

Both services also record the IP the request is coming from.

Telemetry collection is optional and configurable at any time. If data sharing is disabled, all calls to the telemetry Report* functions are no-ops.

Additionally, this pull request comes with a set of libraries to help future efforts on Audacity.
[screenshot of the pull request]
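
For the curious, "libcurl for the network layer" plus "Report* functions are no-ops when sharing is off" boils down to something like the following sketch. To be clear, this is my guess at the shape of it - the endpoint format, the ReportEvent helper, and the placeholder IDs are all assumptions, not the actual PR code:

/* telemetry_sketch.c - NOT the actual PR code, just a guess at the shape.
   Build: cc telemetry_sketch.c -lcurl */
#include <stdbool.h>
#include <stdio.h>
#include <curl/curl.h>

static bool g_sharing_enabled = false;   /* opt-in flag from user settings */

/* Report* functions are no-ops unless the user opted in. */
static void ReportEvent(const char *category, const char *action) {
    if (!g_sharing_enabled)
        return;                          /* no-op when data sharing is off */

    CURL *curl = curl_easy_init();
    if (!curl)
        return;

    /* Universal Analytics "Measurement Protocol" style payload;
       the tid and cid values here are placeholders. */
    char payload[256];
    snprintf(payload, sizeof payload,
             "v=1&tid=UA-XXXXXXX-1&cid=00000000-0000-0000-0000-000000000000"
             "&t=event&ec=%s&ea=%s", category, action);

    curl_easy_setopt(curl, CURLOPT_URL, "https://www.google-analytics.com/collect");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload);
    curl_easy_perform(curl);             /* fire-and-forget POST */
    curl_easy_cleanup(curl);
}

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    ReportEvent("session", "start");     /* does nothing: sharing disabled */
    g_sharing_enabled = true;            /* pretend the user opted in */
    ReportEvent("session", "start");     /* now actually sends the event */
    curl_global_cleanup();
    return 0;
}

Which is why the mechanism itself isn't the scandal; the objection is that the events get handed to Google and Yandex instead of something self-hosted.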
 
That's a pull request, not an announcement that they're implementing it.

If these reaction stickers are any indication, I think you can safely stick with Audacity for the time being.
Yeah. Audacity is a widely-enough used tool, and this is an unpopular-enough decision, that I'm willing to bet that if this PR got merged in, there'd be a widely-supported fork of Audacity without that merge within a day or two.

Here's a separate thread about this controversy.

Edit: Just to throw in my two cents, I usually don't have a big problem with telemetry, and as a developer I can definitely see the value in getting a good understanding of how people are using the program, since end users often use a program quite differently from how its developers do (which probably explains why Audacity's UI is like one of those hidden image posters). And from what I've seen, collecting the data is completely opt-in, so if an end user didn't want the program to do this, they could just click "no" and it wouldn't happen. I suspect the problem most people have is that they took shortcuts by relying on Google and Yandex services to collect the data; if they had gone with a self-hosted, self-developed or FOSS system instead, there would have been at least slightly less push-back.
 
Yeah but there's a lot more money in exploits for personal computers than there is for servers and embedded systems.
No, there's LOADS of money in exploits for servers. Hack someone's PC, and you potentially have one person's personal info to sell on the black market. Hack a server, and you potentially have thousands. You just don't hear about it as much, both because of the differences in OS and because companies tend to do their best to keep everything locked down, but your average normie's approach to security is a lot less rigorous, if they think of it at all.
 
Yeah but there's a lot more money in exploits for personal computers than there is for servers and embedded systems.
No, there's LOADS of money in exploits for servers. Hack someone's PC, and you potentially have one person's personal info to sell on the black market. Hack a server, and you potentially have thousands. You just don't hear about it as much, both because of the differences in OS and because companies tend to do their best to keep everything locked down, but your average normie's approach to security is a lot less rigorous, if they think of it at all.
You're both right/wrong. PCs are the low-hanging fruit because every user has to be their own security guy, which means that security on most PCs is equivalent to a bunch of broken windows and an ajar front door with a hanging "please knock!" sign. And most people knock because they're fundamentally good and they've yet to learn that not knocking is a thing you can do, and besides, the couch probably isn't worth the effort to steal anyway. Servers have actual security because they have real assets, liability, and above all enough concern to hire someone to do security. At the very least, even the smallest biz will pay the extra $5/mo to their host to keep nginx or whatever up to date for them. So it's a lot more difficult to compromise enterprise servers, but it's potentially a lot more lucrative. As the attacker, depending on your risk appetite and competence, either one could have the "more money".
 
You're both right/wrong. PCs are the low-hanging fruit because every user has to be their own security guy, which means that security on most PCs is equivalent to a bunch of broken windows and an ajar front door with a hanging "please knock!" sign. And most people knock because they're fundamentally good and they've yet to learn that not knocking is a thing you can do, and besides, the couch probably isn't worth the effort to steal anyway. Servers have actual security because they have real assets, liability, and above all enough concern to hire someone to do security. At the very least, even the smallest biz will pay the extra $5/mo to their host to keep nginx or whatever up to date for them. So it's a lot more difficult to compromise enterprise servers, but it's potentially a lot more lucrative. As the attacker, depending on your risk appetite and competence, either one could have the "more money".
So basically, it's roughly equivalent to the difference between a quick-and-dirty smash-and-grab, and a full-on organized bank robbery.
 
That's a pull request, not an announcement that they're implementing it.

If these reaction stickers are any indication, I think you can safely stick with Audacity for the time being.
Audacity got bought out; the users don't have any say. Best/worst case, they'll close the PR after too much pressure and implement it anyway behind a closed-source license in a couple of months.
 