Diseased Open Source Software Community - it's about ethics in Code of Conducts

As for sudo-rs, apparently it has 139 unsafe code blocks and is untested
I got curious because it seemed weird they'd have no tests at all. "Untested" is a very odd phrase to hear around software, since anything of semi-consequence is usually tested in SOME way.
Well, it looks like there are at least compliance tests to make sure it behaves like sudo:

As for function-level tests or any other kind of test automation, as far as I can see it doesn't have any. I guess they could be secretly fuzzing it behind the scenes but there's no evidence of that.

Unsurprisingly, sudo does seem to have some tests: https://github.com/sudo-project/sud...573f24e304cd/lib/util/regress/glob/globtest.c

I don't really have a horse in this race, if Ubuntu wants to switch to something dangerous they can dig their own grave.

I have mixed feelings on this one. You can't battle-test new software without using it in the field. Whoever at Canonical decides what goes in and out is apparently happy with this one. I don't think I'm so afraid of sudo that I'd run away screaming, because it seems to work fine. It's had a few CVEs over the years but not more than any other random system utility that involves elevation, access control and all that other fun stuff.

I would guess sudo-rs is just landing because whoever is in charge thinks that no matter how many hours you use a C program, or how many billions of times it's done its job, it can't be trusted because it's "unsafe". From their perspective, having 139 unsafe blocks is less than the "100%" unsafety of C. My guess is one or more Rust zealots over at Canonical are pushing this in just due to its Rusty nature.

I think for me it wouldn't be enough of a risk/reward thing to swap out sudo. I'm not trying to stamp out all the C programs in my life. Since Canonical doesn't really care about the opinions of their userbase this is just the latest of many odd moves they have made and will continue to make.
 
Neither does the new Rust implementation, by the way, and it's marked as a wontfix.
Soulless garbage.
Honestly? For over a year now this claim has actually become a bigger red flag to me in hiring than vibe coding stuff. It's nearly always a signifier for insecurity, and people who will waste company money doing things the hard way / need to constantly be reminded that time equals money and economic factors matter.
Not referring to you specifically, and I understand the crimes junior devs commit using AI are disgusting, but there are some real productivity boosts to be gained.
I got into a verbal squabble with a colleague on Friday over this "To LLM, or not to LLM" argument. Basically I'm the person you've described, an insecure PowerShell perfectionist who will sit there buffing his scripts to a mirror finish on the company's dime, formatting everything prim and proper. No unacceptable verbs, efficiency above convenience, and adding things which seem superfluous on the surface to lesser minds. (Like WhatIf support)

My colleagues are not like this at all. None of them have any real desire to learn how to script or program nor derive pleasure from it. They just figure things out by necessity and usually by taking prior work and bruteforcing it into doing what they want. Previously they were limited to what they could dig up online or internally, but Copilot has massively broadened their horizons given it can conjure scripts out of thin air.

What set off the fight is a big change to our org structure which is going to need a lot of Active Directory changes. Conveniently I wrote a script last year that does a lot of the needed work (management loves pointless restructures). I gave the script to the chieftain of the Copilot tribe and he took one look, said he doesn't understand it and asked Copilot to produce a replacement.

He provided me the Copilot replacement and it's disgusting. Everything about it makes me cringe and it's downright unreadable, though he claims the opposite, likely because it's legible to Copilot, which means he can ask it to make alterations. It perturbed me, but I'm not going to stop them executing slop so long as they take responsibility when it fucks up and keep my name out of it.

The fight occurred when management wanted me and the Copilot whisperer to come together and sing Kumbaya. I offered to simplify the logic of the old script, but he downright refuses to use something not LLM produced and wanted me to cosign whatever Copilot happened to spit out to appease management. Needless to say that didn't go down well and the call ended on a sour note.

Circling back to your post, I don't care that Copilot can make seemingly functionally equivalent code in 2 seconds where it takes me 2 hours. I don't care that to a middle manager like you who traded his soul for an MBA, it "wastes company money" when I take pride in my work. It's not an assembly line, programming is a form of creativity, and while LLMs do an okay job chewing through common Active Directory tasks, for anything slightly more complicated they shit their pants and hallucinate.

An example of this involves interacting with APIs for vendor products: Copilot literally just makes up REST endpoints that sound like what you want since it has very shallow knowledge. Given the Copilot tribe doesn't understand how this magic box works, they assume it can work with anything and get led around for sometimes days at a time until they finally come begging for help.

This isn't to say LLMs don't have a place, they're sometimes good for bouncing ideas or asking for suggestions and alternatives, but they're confident liars and you have to double check everything. If you're hiring people who have outsourced their intellect to LLMs, you're hiring morons and those productivity gains will go up in smoke in due course.

The last year I've wasted so many hours of my life correcting ridiculous mistakes, false assumptions, and straight up lies that I eventually find out originated from people having blind faith in LLMs and not double checking anything. That initial productivity gain of having an immediate answer ends up potentially causing big issues later and further demoralizing the few people who give a shit anymore.
 
Your lengthy post is the kind of insecure whining I was talking about. Retards are using LLMs to shoot themselves in the foot more efficiently? Let them have at it, this will work itself out soon enough.
But also: If you need to write a wall of text on how you're superior to everyone else, you're probably not.

In your story, the problem is not the LLM, but something about you and your company.
You made this amazing tool that solved a big problem for your team, it's served to them on a silver platter... and the call "ended on a sour note"? Nobody is running around looking for extra work, not even with ChatGPT.
There's something you're not telling us here, and that thing is definitely not some variant of "I'm just so much smarter than all my coworkers". Work on your social skills, homeboy.
 
Basically I'm the person you've described, an insecure PowerShell perfectionist who will sit there buffing his scripts to a mirror finish on the company's dime, formatting everything prim and proper.
The other side of the coin here is your coworkers are busily delivering scripts that work while you pointlessly refine scripts where the quality isn't that important.
he downright refuses to use something not LLM produced
How did this fellow do any work prior to a few years ago? Refusing to use anything not produced by an LLM is a weird attitude. I can't imagine saying "I'll only use programs whose author used autocomplete"... This is weird.
Circling back to your post, I don't care that Copilot can make seemingly functionally equivalent code in 2 seconds where it takes me 2 hours. I don't care that to a middle manager like you who traded his soul for an MBA, it "wastes company money" when I take pride in my work.
You might want to consider whether you are in the right kind of job. You're making it sound like they just need some fools to shuffle around things in Active Directory. You working for 8 hours on The Perfect Script may very well be a waste of the company money. If good enough is good enough, perfect is a waste.
If you're hiring people who have outsourced their intellect to LLMs, you're hiring morons and those productivity gains will go up in smoke in due course.
If you truly do outproduce the other fellow on the team it stands to reason over time you'll win and he will lose. However, if he delivers an adequate script in 2 hours and you deliver a "perfect" one after 8, your manager may not prefer your approach. Your mileage may vary!
 
Your lengthy post is the kind of insecure whining I was talking about.
>I don't think LLMs are useful for programming
>No, dude, they're super useful *proceeds to not read docs and generate 10,000 lines of unreadable Indian-tier code*
>Yeah, see, you're going to look productive to people, but you actually don't understand what you're doing and your code doesn't work.
>I'm not insecure! Y-you're insecure!!!


Many such cases.
 
>I don't think LLMs are useful for programming
>No, dude, they're super useful *proceeds to not read docs and generate 10,000 lines of unreadable Indian-tier code*
>Yeah, see, you're going to look productive to people, but you actually don't understand what you're doing and your code doesn't work.
>I'm not insecure! Y-you're insecure!!!


Many such cases.
Reddit tier discussion.
> "there can be valid uses for LLMs"
> "you want to replace every single dev with pajeets copypasting chatgpt slop"


Please let me amend: whiny, insecure, and also intellectually dishonest.
 
You are terribly naive.
No, I'm just lucky enough to be in a results-oriented environment at the moment. I've been in other environments that optimize for things other than results and it is not my favorite. A few years ago I'd say if you hate your job, leave it. Now, with things as they are, people are lucky to even have a job in the field at all.
 
But also: If you need to write a wall of text on how you're superior to everyone else, you're probably not.
Post was already long enough so I chose not to mention the other scripting guy we used to have. He was an absolute God and it was because you could lock him in a room for a day and he'd come back with a hot serving of an elegant solution for any problem. That guy's work ethic was unreal and I could never compare, his leaving was an immense loss. Among retards anybody can look good.
You made this amazing tool that solved a big problem for your team, it's served to them on a silver platter... and the call "ended on a sour note"? Nobody is running around looking for extra work, not even with ChatGPT.
It's mostly the solution but not the entire solution. They would rather have a wholly new construct written for them than tweak something existing. I should note Copilot in Microsoft 365 logs your interactions and (like many orgs) we have a policy that you can't put proprietary information into an LLM. Copilot-written scripts are in an odd gray area, whereas internal scripts are viewed as proprietary.

They're more willing to operate in the gray area than modify a few lines of PowerShell. It ended on a sour note because I will not lie to management and cosign shit I didn't write nor review since there's nobody to ask "why are you doing xyz on line 39?". Don't underestimate how unwilling to learn some people can be.
There's something you're not telling us here
I can't share every detail as then you'd know where I work. I will say there's weird pay disparities, lacking technical leadership due to many good people leaving, and no reprimand for poor performance. It's not all about LLMs but it's really the thing that's driving a wedge between me and people I did once respect.
The other side of the coin here is your coworkers are busily delivering scripts that work while you pointlessly refine scripts where the quality isn't that important.
For sure. So long as the problems remain simple enough, the productivity looks amazing.
How did this fellow do any work prior to a few years ago?
By hand if he couldn't find something on Google that would do exactly what was needed, or a copy and paste recipe left behind by previous admins.
You working for 8 hours on The Perfect Script may very well be a waste of the company money.
On the surface it is a total waste of their money. Like why use nested functions for the org chart mapping script? Because it was interesting and I wanted to do it. Why shave a few hundred milliseconds off monitoring script runtimes by precompiling blobs of C# embedded in them? Because I wanted to experiment with that concept.

So you may ask why do they keep me around? Because of the skills I acquired while wasting company time.
If you truly do outproduce the other fellow on the team it stands to reason over time you'll win and he will lose. However, if he delivers an adequate script in 2 hours and you deliver a "perfect" one after 8, your manager may not prefer your approach. Your mileage may vary!
As complexity grows the time to deliverable can suddenly go from 2 hours for the adequate option to never because that person doesn't have sufficient skills to produce it and the LLM has no idea how to satisfy their request. If management comes to rely on these adequate solutions they better have a fallback plan for when the LLM fails to perform.
 
Your lengthy post is the kind of insecure whining I was talking about. Retards are using LLMs to shoot themselves in the foot more efficiently? Let them have at it, this will work itself out soon enough.
But also: If you need to write a wall of text on how you're superior to everyone else, you're probably not.

In your story, the problem is not the LLM, but something about you and your company.
You made this amazing tool that solved a big problem for your team, it's served to them on a silver platter... and the call "ended on a sour note"? Nobody is running around looking for extra work, not even with ChatGPT.
There's something you're not telling us here, and that thing is definitely not some variant of "I'm just so much smarter than all my coworkers". Work on your social skills, homeboy.
How can you have a green joindate while writing like such a fucking Predditor? You're getting assmad at someone for going into the necessary level of detail to discuss a complex subject.
 
You might want to consider whether you are in the right kind of job. You're making it sound like they just need some fools to shuffle around things in Active Directory. You working for 8 hours on The Perfect Script may very well be a waste of the company money.
He already HAD a script that did what was needed. Mr. LLM decided that wheel must be reinvented with the Magic of LLM. I would think reinventing the wheel was the waste of time, myself, but you seem to feel differently. Design by committee is enough of a pain in the ass, let's add in an LLM! That will most certainly make the process smoother!

I simply can't imagine why this makes me think of the Golgafrinchans...
 
If management comes to rely on these adequate solutions they better have a fallback plan for when the LLM fails to perform.
It's me, I'm the fallback plan. Oddly, LLM failure is the same as Pajeet failure. Give it to Dave the day before the customer demo and make him fix it. And it's never my part of the project that I end up fixing but something I've never touched or even looked at.
 
It's me, I'm the fallback plan. Oddly, LLM failure is the same as Pajeet failure. Give it to Dave the day before the customer demo and make him fix it. And it's never my part of the project that I end up fixing but something I've never touched or even looked at.
The person who can maintain and fix shit when the wifi goes out is much more valuable to the company than the person who needs to consult The Machine Overlords before he does anything.

We're counting on you and your kind, @DavidS877, and will continue to do so when the competency crisis starts cascading...

... And that's when you give your demands to the management like a supervillain holding the world ransom and mogging the U.N. on the Telescreen.

Savor that moment, if and when it comes. You'll have earned it.
 
How can you have a green joindate while writing like such a fucking Predditor? You're getting assmad at someone for going into the necessary level of detail to discuss a complex subject.
I'm not mad, the story is just silly. All these underappreciated supergeniuses in here, working on such a high level of productivity, nobody else in their company can even understand their work...

Even if you removed all mentions of AI from that story, it would still not be a good look: Senior dev has an allegedly perfect solution, but manager insists on having someone else rewrite it, costing time and money.
Said senior dev is either suffering from an utter lack of awareness, or desperately clinging to a horrible job at a horrible company.
 
Even if you removed all mentions of AI from that story, it would still not be a good look: Senior dev has an allegedly perfect solution, but manager insists on having someone else rewrite it, costing time and money.
I don't get why this puzzles you; stuff like this happens all the time, especially in big corpos. Rejecting a working solution in favor of developing a new one that does the same thing is nothing compared to the myriad of wasteful bullshit that goes on in some companies.
Said senior dev is either suffering from an utter lack of awareness, or desperately clinging to a horrible job at a horrible company.
These things can change over time; a job that is passable can become horrible with something as simple as someone drinking the kool aid (which in this case seems to be AI) or getting pressured by upper management to do something dumb. Getting a new job takes time if you have standards and/or want to be paid well. The market currently isn't that great either.
 
It's me, I'm the fallback plan. Oddly, LLM failure is the same as Pajeet failure. Give it to Dave the day before the customer demo and make him fix it. And it's never my part of the project that I end up fixing but something I've never touched or even looked at.
A.I. = Actually Indians, the bottom line behind every bit of tech enshittification that has happened over the last ~5 years. Thankfully there are companies that have already begun to curtail the use of AI for product development or in general. Even as an intern, one of the first things I was told is that I am not allowed to use AI for anything, complete blacklist. Both Ollama and pretty much every online AI are prohibited on company workstations. You get a warning first, then an automatic incident report if you try to somehow circumvent it. IIRC only upper management are allowed to use some locally hosted LLM. Maybe it is a little extreme, but given the current trend of incompetent jeets getting into tech positions by pumping out AI slop or editing readmes on Jeethub, I'm all for it.
 
Since we are talking about LLMs again, I asked GPT-4o my usual test: to write me a mat4 (4x4) definition as a union with anonymous structs and arrays, plus a mul function and an inversion function for it, in C.

Can you see the mistakes?
C:
typedef float f32;
#include <stdio.h>

// Matrix type with union access
typedef union {
    struct {
        f32 e[4][4];
    };
    struct {
        union {
            struct { f32 x0, x1, x2, x3; } v4;
            f32 v[4];
        } a, b, c, d;
    };
} mat4;

// Multiply 4x4 matrices without loops
mat4 mat4_mul(mat4 m1, mat4 m2) {
    mat4 r;

    r.e[0][0] = m1.e[0][0]*m2.e[0][0] + m1.e[0][1]*m2.e[1][0] + m1.e[0][2]*m2.e[2][0] + m1.e[0][3]*m2.e[3][0];
    r.e[0][1] = m1.e[0][0]*m2.e[0][1] + m1.e[0][1]*m2.e[1][1] + m1.e[0][2]*m2.e[2][1] + m1.e[0][3]*m2.e[3][1];
    r.e[0][2] = m1.e[0][0]*m2.e[0][2] + m1.e[0][1]*m2.e[1][2] + m1.e[0][2]*m2.e[2][2] + m1.e[0][3]*m2.e[3][2];
    r.e[0][3] = m1.e[0][0]*m2.e[0][3] + m1.e[0][1]*m2.e[1][3] + m1.e[0][2]*m2.e[2][3] + m1.e[0][3]*m2.e[3][3];

    r.e[1][0] = m1.e[1][0]*m2.e[0][0] + m1.e[1][1]*m2.e[1][0] + m1.e[1][2]*m2.e[2][0] + m1.e[1][3]*m2.e[3][0];
    r.e[1][1] = m1.e[1][0]*m2.e[0][1] + m1.e[1][1]*m2.e[1][1] + m1.e[1][2]*m2.e[2][1] + m1.e[1][3]*m2.e[3][1];
    r.e[1][2] = m1.e[1][0]*m2.e[0][2] + m1.e[1][1]*m2.e[1][2] + m1.e[1][2]*m2.e[2][2] + m1.e[1][3]*m2.e[3][2];
    r.e[1][3] = m1.e[1][0]*m2.e[0][3] + m1.e[1][1]*m2.e[1][3] + m1.e[1][2]*m2.e[2][3] + m1.e[1][3]*m2.e[3][3];

    r.e[2][0] = m1.e[2][0]*m2.e[0][0] + m1.e[2][1]*m2.e[1][0] + m1.e[2][2]*m2.e[2][0] + m1.e[2][3]*m2.e[3][0];
    r.e[2][1] = m1.e[2][0]*m2.e[0][1] + m1.e[2][1]*m2.e[1][1] + m1.e[2][2]*m2.e[2][1] + m1.e[2][3]*m2.e[3][1];
    r.e[2][2] = m1.e[2][0]*m2.e[0][2] + m1.e[2][1]*m2.e[1][2] + m1.e[2][2]*m2.e[2][2] + m1.e[2][3]*m2.e[3][2];
    r.e[2][3] = m1.e[2][0]*m2.e[0][3] + m1.e[2][1]*m2.e[1][3] + m1.e[2][2]*m2.e[2][3] + m1.e[2][3]*m2.e[3][3];

    r.e[3][0] = m1.e[3][0]*m2.e[0][0] + m1.e[3][1]*m2.e[1][0] + m1.e[3][2]*m2.e[2][0] + m1.e[3][3]*m2.e[3][0];
    r.e[3][1] = m1.e[3][0]*m2.e[0][1] + m1.e[3][1]*m2.e[1][1] + m1.e[3][2]*m2.e[2][1] + m1.e[3][3]*m2.e[3][1];
    r.e[3][2] = m1.e[3][0]*m2.e[0][2] + m1.e[3][1]*m2.e[1][2] + m1.e[3][2]*m2.e[2][2] + m1.e[3][3]*m2.e[3][2];
    r.e[3][3] = m1.e[3][0]*m2.e[0][3] + m1.e[3][1]*m2.e[1][3] + m1.e[3][2]*m2.e[2][3] + m1.e[3][3]*m2.e[3][3];

    return r;
}

// Invert 4x4 matrix manually
mat4 mat4_inv(mat4 m) {
    mat4 r;
    f32 *a = &m.e[0][0];
    f32 *o = &r.e[0][0];

    f32 s0 = a[0]*a[5] - a[1]*a[4];
    f32 s1 = a[0]*a[6] - a[2]*a[4];
    f32 s2 = a[0]*a[7] - a[3]*a[4];
    f32 s3 = a[1]*a[6] - a[2]*a[5];
    f32 s4 = a[1]*a[7] - a[3]*a[5];
    f32 s5 = a[2]*a[7] - a[3]*a[6];

    f32 c5 = a[10]*a[15] - a[11]*a[14];
    f32 c4 = a[9]*a[15] - a[11]*a[13];
    f32 c3 = a[9]*a[14] - a[10]*a[13];
    f32 c2 = a[8]*a[15] - a[11]*a[12];
    f32 c1 = a[8]*a[14] - a[10]*a[12];
    f32 c0 = a[8]*a[13] - a[9]*a[12];

    f32 det = s0*c5 - s1*c4 + s2*c3 + s3*c2 - s4*c1 + s5*c0;
    f32 inv_det = 1.0f / det;

    o[0]  = ( a[5]*c5 - a[6]*c4 + a[7]*c3) * inv_det;
    o[1]  = (-a[1]*c5 + a[2]*c4 - a[3]*c3) * inv_det;
    o[2]  = ( a[13]*s5 - a[14]*s4 + a[15]*s3) * inv_det;
    o[3]  = (-a[9]*s5 + a[10]*s4 - a[11]*s3) * inv_det;

    o[4]  = (-a[4]*c5 + a[6]*c2 - a[7]*c1) * inv_det;
    o[5]  = ( a[0]*c5 - a[2]*c2 + a[3]*c1) * inv_det;
    o[6]  = (-a[12]*s5 + a[14]*s2 - a[15]*s1) * inv_det;
    o[7]  = ( a[8]*s5 - a[10]*s2 + a[11]*s1) * inv_det;

    o[8]  = ( a[4]*c4 - a[5]*c2 + a[7]*c0) * inv_det;
    o[9]  = (-a[0]*c4 + a[1]*c2 - a[3]*c0) * inv_det;
    o[10] = ( a[12]*s4 - a[13]*s2 + a[15]*s0) * inv_det;
    o[11] = (-a[8]*s4 + a[9]*s2 - a[11]*s0) * inv_det;

    o[12] = (-a[4]*c3 + a[5]*c1 - a[6]*c0) * inv_det;
    o[13] = ( a[0]*c3 - a[1]*c1 + a[2]*c0) * inv_det;
    o[14] = (-a[12]*s3 + a[13]*s1 - a[14]*s0) * inv_det;
    o[15] = ( a[8]*s3 - a[9]*s1 + a[10]*s0) * inv_det;

    return r;
}

// Main with test
int main() {
    mat4 a = { .e = {
        {1, 2, 3, 4},
        {5, 6, 7, 8},
        {9, 10, 11, 12},
        {13, 14, 15, 16}
    }};

    mat4 b = mat4_inv(a);

    printf("Matrix A:\n");
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            printf("%8.3f ", a.e[i][j]);
        }
        printf("\n");
    }

    printf("\nInverse of A (Matrix B):\n");
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            printf("%8.3f ", b.e[i][j]);
        }
        printf("\n");
    }

    return 0;
}
 
Senior dev has an allegedly perfect solution, but manager insists on having someone else rewrite it, costing time and money.
Not sure why I'm bothering as you haven't read the post, just glossed over it and added your own assumptions into it. Never said I'm a senior dev (I'm a sysadmin), never said it was a perfect solution and it wasn't management insisting on the LLM rewrite. The guy tried to slide the LLM rewrite past management and get me to sign off on it as though it was a collaborative effort.
 
Can you see the mistakes?
It doesn't check whether the matrix is invertible (and will divide by zero like a retard if it isn't). I can't actually be arsed to walk through it and see if it's using the proper indices, but I also would not be surprised if it mixed up row- and column-major across different functions.

Ironically, I used ChatGPT to double-check whether matrices are invertible iff the determinant is nonzero, because I wasn't sure.
 