Open Source Software Community - it's about ethics in Codes of Conduct

Oh yeah, you support "Open Source" and "Freedom"? :smug: I can CLEARLY see you using *checks notes*, ImageMagick...and Linux, and...curl.
The web browser section is fucking hilarious. "Well you see, now you can't use any modern web browser except GNOME Web because AI." I don't even think GNOME Web can play YouTube videos correctly.

But yeah, this is a good list. AI being used in big projects while it's in this legal gray area is dumb and is only going to lead to disaster if the law ever catches up.
 
Total copyright death
The issue is not copyright, it's the exact opposite. Companies take open source projects, whose licenses let anyone use and build on them for free as long as they contribute changes back, prompt Claude to "please make it again", and resell the result under their own licenses, most likely proprietary, completely undermining the community's work while contributing nothing back to the project.
 
The issue is not copyright, it's the exact opposite. Companies take open source projects, whose licenses let anyone use and build on them for free as long as they contribute changes back, prompt Claude to "please make it again", and resell the result under their own licenses, most likely proprietary, completely undermining the community's work while contributing nothing back to the project.
Or you run Windows 7 through Ghidra and feed the output into an AI to generate your own compatible knockoff.
 

Oh yeah, you support "Open Source" and "Freedom"? :smug: I can CLEARLY see you using *checks notes*, ImageMagick...and Linux, and...curl.
Wow the first fucking line we're off to a great start


This document reeks of sperging. The only kind of good reason I see on the list is the risk of bad code quality. If you checked it and you know it works then who gives a shit? Also their criteria for each thing is so vague that sooner or later everything will be on here.

This is basically like the guy in the tiktok comments freaking the fuck out because um ackshually that bottle of water you're drinking is supporting genocide in Bajookieland so you're a bad person for drinking it.
 
And then Microsoft can't AI-wash code anymore because of the precedent they've just set.
These are not remotely equal. "Either you've got to let everyone take from the cookie jar, or you've got to keep to your own cookie jar." If they were Red Hat, this might be a tough decision, but Microsoft has the biggest cookie jar in the world.
 
The issue is not copyright, it's the exact opposite. Companies take open source projects, whose licenses let anyone use and build on them for free as long as they contribute changes back, prompt Claude to "please make it again", and resell the result under their own licenses, most likely proprietary, completely undermining the community's work while contributing nothing back to the project.
They already could get away with it if they did it with half a brain.
 
Rare troonoid W, AI use should absolutely be publicly known so people can version pin and code review. Not that 99% of people care, but for those of us that do, big ups.
You've been infected by the techtranny mind virus if you actually believe that using AI in a project means that internal quality control and testing no longer exists, I'm afraid. "Generate, commit and push to production in one prompt" is one jeet-preferred approach, not the only approach that exists. You can ask AI to generate code atomically, review every change manually, and deny changes that aren't up to the project standard. As for the tranny's claim that you can "check for agent files and contributors", it's trivially easy to keep AGENTS.md or CLAUDE.md out of the public repository (just leave them untracked) and to not have an "AI Agent" account on the team by using a non-integrated agent interface.

Besides, if your project is that serious, you should already be version pinning and code reviewing dependencies regardless of whether the dependency uses AI or not.
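The version-pinning-plus-review idea above can be sketched as a hash check. Everything here is invented for illustration: the package name, version, and hash table are hypothetical, not any real registry's API.

```python
import hashlib

# Hypothetical pin table: package name -> (reviewed version, sha256 of the
# exact artifact you code-reviewed). Names and bytes are made up.
PINS = {
    "examplelib": ("1.2.3", hashlib.sha256(b"examplelib-1.2.3 contents").hexdigest()),
}

def verify_artifact(name: str, version: str, artifact_bytes: bytes) -> bool:
    """Accept a dependency only if it is byte-identical to what was reviewed."""
    pinned = PINS.get(name)
    if pinned is None or pinned[0] != version:
        return False  # unknown package, or a version nobody reviewed
    return hashlib.sha256(artifact_bytes).hexdigest() == pinned[1]
```

Whether the upstream maintainer used AI or not becomes irrelevant: an unreviewed update simply fails the check until someone looks at it.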
 
You've been infected by the techtranny mind virus if you actually believe that using AI in a project means that internal quality control and testing no longer exists, I'm afraid. "Generate, commit and push to production in one prompt" is one jeet-preferred approach, not the only approach that exists. You can ask AI to generate code atomically, review every change manually, and deny changes that aren't up to the project standard. As for the tranny's claim that you can "check for agent files and contributors", it's trivially easy to keep AGENTS.md or CLAUDE.md out of the public repository (just leave them untracked) and to not have an "AI Agent" account on the team by using a non-integrated agent interface.

Besides, if your project is that serious, you should already be version pinning and code reviewing dependencies regardless of whether the dependency uses AI or not.
Its more of a knee-jerk reaction on my part given the sheer quality of slop that gets produced nowadays. Even "serious" maintainers a la Microslop can let their projects languish if they get too comfortable using AI. Despite using it myself and having seen that it really can produce good, functional code, I'm still very distrustful of any project that directly references AI use. Call it a bad gut feeling. Like seeing jeets out on the street. They're there and there's not much I can do about it, but I'll still turn my nose up and try to hold my breath. And yes, I do version pin most stuff until it stops working, which is usually never (sans soydev programs like Wayland and company).
 
Arguing about tools is silly. Retards with AI will do retarded shit, good devs with AI will do good shit.

Basically every company at this point has devs using AI in one way or another, so don't use anything anymore, ever. Build your own software.

>AI is in PRs/commits/REVIEWS

Doesn't matter. Is there anything actually BAD in there that the devs let through? Was anything affected? Right now it's all just doomsday theorizing about issues that might appear.

And I semi-agree, but looking at a list that has fucking trans flags and rsync on it and thinking "yup, good that someone made this, FINALLY I can stay away from rsync" is silly.

Look at Torvalds for example: he said he doesn't code much nowadays and mainly reviews code. If he does that job well reviewing humans, why can't he review AI code just as well?
 
How DARE those nazis use HER open source software!? Don't they know that valid TRANSFOLX like herself can dictate who uses their open source software and how at any given time?
This is why trannies love nigger cuck licenses like MIT btw. They hate the GPL and in general hate what FOSS stands for. They want to be able to pull the plug at any time if people use their opensource projects in a way they don't like. That's the extent of their "passion" for open source; the miniscule power in being able to take something away from someone.

Worth noting that Libreboot is GPL-3.0 (a decision I assume this author regrets) so all of this retarded flailing is for nothing. Omarchy doesn't have to do shit.
 
Look at Torvalds for example: he said he doesn't code much nowadays and mainly reviews code. If he does that job well reviewing humans, why can't he review AI code just as well?
One of the central conceits of open source is that random anonymous Internet people can just chuck contributions at maintainers and this will in some way be more efficient than the maintainer writing it himself. In my experience, this mostly only holds for simple or small changes. For a complicated change, it may be a decent starting point, and may include useful documentation about the problem and its solution, but any sort of thorough review is still going to require some percentage of the amount of time it took to write it. That is, no matter how many submitters you have, throughput still scales linearly with the throughput of the reviewers.

Reviewers inherently have to be trusted, competent, and vigilant, so they are naturally a very limited resource. In practice, the efficiency of a project like the Linux kernel comes from a developed network of relationships between maintainers and reputable contributors. This may be one reason (the other being copyright concerns) that Linux does not allow anonymous contributions.

All that to say that, hard as the maintainers may try, there is no way they can keep up with the existing contribution volume while inspecting every contribution with intense scrutiny.

Accepting contributions written with LLMs burns the candle at both ends: increased contribution volume with decreased trustworthiness. Perhaps they can get pickier about which contributions they look at and accept a longer backlog, but that will likely be a learning process in and of itself.
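The linear-scaling point above can be made concrete with a toy model. All the numbers are invented; the only claim is the shape of the bottleneck.

```python
def merged_per_week(reviewers: int, reviews_each: int, submissions: int) -> int:
    """Merges per week are capped by reviewer capacity, not submission volume."""
    return min(submissions, reviewers * reviews_each)

# With 10 reviewers each handling 5 thorough reviews a week, the project merges
# at most 50 contributions, whether 60 or 6000 patches arrive. Flooding the
# queue with LLM-assisted drive-by patches raises the backlog, not the output.
```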

It's also worth noting that writing and reading are fundamentally different processes that involve subtly different ways of thinking (think of the difference between reading a textbook and completing the exercises at the end of the chapter). From personal experience, some errors are much easier to catch while writing than while reading. If an LLM is doing the writing, then all we're getting is two instances of the "reading" thought process, and none of the writing thought process.

There's also the issue that the computer madness machine can fail in novel and exciting ways. I don't think art teachers in 2010 were on the lookout for the subtly three-handed or six-fingered people that image generators produce. A great example of this in software is "slopsquatting", in which an LLM generates code that imports packages that may or may not exist, and that may or may not be controlled by a malicious entity. One could argue that, from a zero-trust perspective, this learning experience is actually a net positive, but I cannot help but be apprehensive at the thought that my kernel's developers may soon be discussing their "learning experience" in ensuring software quality.
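The slopsquatting failure mode can at least be screened for mechanically. A minimal sketch, assuming you maintain an allowlist of vetted package names (the allowlist below is made up; real projects would derive it from a lock file):

```python
import ast

# Toy allowlist of packages someone has actually vetted. Purely illustrative.
KNOWN_PACKAGES = {"os", "sys", "json", "requests"}

def suspicious_imports(source: str) -> set:
    """Return top-level imported names not on the vetted allowlist --
    candidates for hallucinated ("slopsquatted") dependencies."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            found.add(node.module.split(".")[0])
    return found - KNOWN_PACKAGES
```

This doesn't prove an unknown import is malicious, only that nobody on the team has looked at it, which is exactly the question an LLM-generated patch forces you to ask.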

Not to worry though, the year of Hurd is fast approaching.
 
One of the central conceits of open source is that random anonymous Internet people can just chuck contributions at maintainers and this will in some way be more efficient than the maintainer writing it himself. In my experience, this mostly only holds for simple or small changes.
That's the trick: if your project is significant, don't allow random anonymous Internet people to chuck contributions at maintainers. SQLite doesn't, the Linux kernel doesn't. You need existing maintainers to vouch for you and manual approval from Linus himself to contribute to the Linux kernel. After you're in, you can use all the AI you want, because you have proven you're capable of submitting non-slop code. Good luck getting in, though!
 
That's the trick: if your project is significant, don't allow random anonymous Internet people to chuck contributions at maintainers. SQLite doesn't, the Linux kernel doesn't. You need existing maintainers to vouch for you and manual approval from Linus himself to contribute to the Linux kernel. After you're in, you can use all the AI you want, because you have proven you're capable of submitting non-slop code. Good luck getting in, though!
Yeah, pretty much all of those concerns are solved by only letting trusted people contribute. Smaller projects would love any contribution they can get, since they barely get any in the first place, but if you're a large project with enough contributors to populate a small country, just triage them.

You want to contribute? Do some tedious shit to prove you really mean it and aren't just trying to lob in some easy AI-written code. I don't know, do support on Discord or something. Or require being an established contributor to a certain number of other projects first, or just hand-pick whoever seems to know what they're doing.
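That triage rule can be written down as a trivial filter. The record fields and thresholds here are assumptions for illustration, not any real project's policy:

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    name: str
    merged_prs: int   # prior accepted contributions elsewhere
    vouched_by: int   # existing maintainers willing to vouch for them

def may_submit(c: Contributor, min_prs: int = 5, min_vouches: int = 1) -> bool:
    """Accept direct submissions only from people with a track record
    or a maintainer's vouch; everyone else goes to the slow lane."""
    return c.merged_prs >= min_prs or c.vouched_by >= min_vouches
```

The point isn't the thresholds; it's that once the gate exists, AI-assisted spam from unknown accounts never reaches a reviewer's queue in the first place.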
 
That's the trick: if your project is significant, don't allow random anonymous Internet people to chuck contributions at maintainers. SQLite doesn't, the Linux kernel doesn't. You need existing maintainers to vouch for you and manual approval from Linus himself to contribute to the Linux kernel. After you're in, you can use all the AI you want, because you have proven you're capable of submitting non-slop code. Good luck getting in, though!
I bet you could train an LLM to spam low quality shit commits because even if it sucked at coding, it could social engineer autists into approving its "contributions."
 
Speaking of AI, and backtracking to the "rebuilding technology after Ragnarok" side quest from a few pages ago, I see Dave Plummer has shoved AI onto a PDP-11.
No word yet on if it can run Crysis.
 
Speaking of AI, and backtracking to the "rebuilding technology after Ragnarok" side quest from a few pages ago, I see Dave Plummer has shoved AI onto a PDP-11.
No word yet on if it can run Crysis.
I assume its run speed can be measured in minutes per token instead of tokens per minute?
 