Open Source Software Community - it's about ethics in Code of Conducts

The upcoming Git 3.0 release will change the default branch name from 'master' to 'main'. (https://lore.kernel.org/lkml/xmqqh5usmvsd.fsf@gitster.g/)
That release also will not compile without a Rust compiler.

It might be time to switch to Fossil instead.
Beat me to it. Remember to always, always, always call your master branch "master". Or anything besides "main".
 
C doesn't expose the cache hierarchy explicitly, correct. Guess what also doesn't really do that? x86 machine code. That's why you just write cache-oblivious algorithms.
There are MSRs dedicated to finding cache line length on newer processors, or you can use benchmarks to infer the cache line length and cache size yourself. You can't exactly write a non-oblivious algorithm if you don't know the size of the L1 cache, the size of a cache line, etc... but you can certainly determine those parameters. C doesn't exactly make it straightforward to access MSRs to find that information, nor is there an easy way to obtain it via the standard library... but please, do go on.
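For what it's worth, on Linux you don't even need MSRs or benchmarks for the line size: the kernel exports it through sysfs. A rough Linux-only sketch, with a hard-coded fallback for platforms where the file is missing:

```python
def cache_line_size(default=64):
    """Best-effort L1 data cache line size in bytes.

    Reads the Linux sysfs export for cpu0's first cache level;
    falls back to `default` (64 is typical on x86) anywhere else.
    """
    path = "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size"
    try:
        with open(path) as f:
            return int(f.read())
    except (OSError, ValueError):
        return default

size = cache_line_size()
```

This sidesteps the "C gives you no portable way" complaint by asking the OS instead of the hardware; the benchmark approach is still the only option on systems that export nothing.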
this is the dumbest sentence ever posted to this site, even including all the things lolcows have said
Hey, I'm sorry you spent your life pretending to be a programmer only to have someone more knowledgeable tell you you're wrong, but... no, wait, I'm not. Go back to playing with Logo and leave programming to professionals, kthxbai.
 
There are MSRs dedicated to finding cache line length on newer processors, or you can use benchmarks to infer the cache line length and cache size yourself. You can't exactly write a non-oblivious algorithm if you don't know the size of the L1 cache, the size of a cache line, etc... but you can certainly determine those parameters. C doesn't exactly make it straightforward to access MSRs to find that information, nor is there an easy way to obtain it via the standard library... but please, do go on.

Hey, I'm sorry you spent your life pretending to be a programmer only to have someone more knowledgeable tell you you're wrong, but... no, wait, I'm not. Go back to playing with Logo and leave programming to professionals, kthxbai.

Learn to write processor-agnostic code in C rather than getting so deep in the weeds that you need to query the CPU's internal constants to figure out how to make your code fast.
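The cache-oblivious idea both posts are circling can be shown in a toy: recursively split the problem until the working set is tiny, and it fits in cache at *some* level without ever asking what the cache sizes are. A sketch using matrix transpose (the 16-element block cutoff is an arbitrary choice, not a tuned constant):

```python
def transpose(a, out, r0, r1, c0, c1):
    """Cache-oblivious transpose of a[r0:r1][c0:c1] into out.

    Recursion halves the longer dimension until blocks are small;
    small blocks are transposed directly and stay cache-resident
    regardless of the actual cache parameters.
    """
    if r1 - r0 <= 16 and c1 - c0 <= 16:
        for i in range(r0, r1):
            for j in range(c0, c1):
                out[j][i] = a[i][j]
    elif r1 - r0 >= c1 - c0:
        m = (r0 + r1) // 2
        transpose(a, out, r0, m, c0, c1)
        transpose(a, out, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2
        transpose(a, out, r0, r1, c0, m)
        transpose(a, out, r0, r1, m, c1)

n = 64
a = [[i * n + j for j in range(n)] for i in range(n)]
out = [[0] * n for _ in range(n)]
transpose(a, out, 0, n, 0, n)
```

In pure Python the interpreter overhead swamps any cache effect, so this only illustrates the recursion structure, not the speedup you would measure in C.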
 
I'm not wholly against using AI for code. I've used it myself on occasion. But it has a tremendous capacity as an enabler for people who don't know what they're doing.

I think the biggest problem for many people is that AI sounds incredibly authoritative even when it's completely wrong. If somebody tells you something confidently, you're more likely to believe it, especially if you have no idea about the topic at hand. It's a psychological thing. Also, as you have observed, many LLMs write very odd code: declaring heaps of variables and functions where that isn't necessary, weird sanitizing and bounds checking in places where it makes absolutely no sense while confidently skipping it in places where it would make perfect sense, and in general a feeling of spaghetti code that lacks an overall "theme". A lot of AI code works now, but it often feels more like an abstract painting than a blueprint, and I sometimes find that bigger heaps of AI-written code are easier to rewrite than to change anything in, just because of how weirdly everything is put together.

AI code can be good if you give it very specific instructions about what to write, what to use for it, and a very limited scope. When I write a piece of code I usually have a very specific picture in my mind of how it will all fit together and what I will use to solve the problem, and if you explain that to an LLM, it can realize it quicker than I can type, often even adding some interesting twist I didn't consider, so I quite enjoy that process. Entire projects, though, at this point? No. Especially not if the human doesn't guide and lead. That might change one day, who knows.

NLP though? Absolutely insane and way beyond human capabilities, even at this point. For shits and giggles I put that post you quoted into a current LLM without any context and asked it which nationality the author (me) likely is. It nailed it. Part of the explanation:

"They've been thought knowing specific workflows is knowing everything…"
This is a specific error where he meant "taught."
  • Phonetic Mapping: Native English speakers rarely confuse "thought" and "taught" in writing because the words function very differently, even if they sound similar in some dialects.
  • The "Th" Problem: German speakers, even those with near-fluent English, notoriously struggle with the dental fricative "th." In their internal monologue, "thought" and "taught" can map to the same phonetic cluster (often sounding like "tought").
  • Grammar Structure: A native speaker might say "They've been taught that knowing…"; dropping the "that" makes the sentence slightly Germanic in its density.

Now consider that it only needed seconds to come to this conclusion, way faster than a forensic linguist can read. Interesting times ahead.
 
Remember to always always always call your master branch "master"
In this time of confusing, arbitrary, and ambiguous language, it is good to provide clarity, and so I recommend upgrading the term "master" to "slavemaster" to be fully unambiguous about intent.
 
AI code can be good if you give it very specific instructions what to write, what to use for it, and a very limited scope. When I write a piece of code I usually have a very specific picture in my mind how it will all fit together and what I will use to solve the problem and if you explain that to an LLM, it can realize that quicker than I can type and often even add some interesting twist I didn't consider, so I quite enjoy that process. Entire projects though, at this point? No. Especially not if the human doesn't guide and lead. That might change one day, who knows.
When I have a little time I mean to create an application that is purely empty classes and methods, no bodies, just the definitions, and ask an AI to populate all of them without altering the definitions, and see how it does. It will be an interesting exercise.
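The "populate my stubs without touching the definitions" exercise even has a mechanical check: compare each filled-in method's signature against the stub's. A minimal sketch with a hypothetical stack stub and implementation (both names are made up for illustration):

```python
import inspect

class StackStub:
    """The empty skeleton handed to the AI: definitions only."""
    def push(self, item): ...
    def pop(self): ...

class StackImpl:
    """The filled-in version; must keep every stub signature intact."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

# Mechanically verify the definitions were not altered.
for name, stub in inspect.getmembers(StackStub, inspect.isfunction):
    impl = getattr(StackImpl, name)
    assert inspect.signature(impl) == inspect.signature(stub), name
```

Running a check like this after each AI pass turns "did it respect my definitions?" from a code-review chore into an automated gate.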

NLP though? Absolutely insane and way beyond human capabilities, even at this point. For shits and giggles I put that post you quoted into a current LLM without any context and asked it which nationality the author (me) likely is. It nailed it. Part of the explanation:
That was an absolutely fascinating example. Very impressive.
 
AI code can be good if you give it very specific instructions what to write, what to use for it, and a very limited scope. When I write a piece of code I usually have a very specific picture in my mind how it will all fit together and what I will use to solve the problem and if you explain that to an LLM, it can realize that quicker than I can type and often even add some interesting twist I didn't consider, so I quite enjoy that process. Entire projects though, at this point? No. Especially not if the human doesn't guide and lead. That might change one day, who knows.
>inb4 literate programming becomes standard fare once everything is vibe coded slop that takes a compsci phd to decipher
 
NLP though? Absolutely insane and way beyond human capabilities, even at this point. For shits and giggles I put that post you quoted into a current LLM without any context and asked it which nationality the author (me) likely is. It nailed it.
Just tried the same experiment with a few longer posts of mine. Almost every single time it thought I was American (I'm actually Russian). The only time it actually got it right wasn't because of grammar or anything, but rather because I unknowingly made a reference to something well-known in Russia/CIS, but not in the Anglo world, apparently.

If you're curious, Gemini thought this post (with the nationality redacted) was written by a Dutchman or some flavor of Scandinavian.
 
Just tried the same experiment with a few longer posts of mine. Almost every single time it thought I was American (I'm actually Russian). The only time it actually got it right wasn't because of grammar or anything, but rather because I unknowingly made a reference to something well-known in Russia/CIS, but not in the Anglo world, apparently.

If you're curious, Gemini thought this post (with the nationality redacted) was written by a Dutchman or some flavor of Scandinavian.
ok so i guess the forensic linguists shouldn't be extremely worried about their jobs just yet. llms will end up being a nice tool that highlights potential clues really fast (and i mean highlight: it could literally highlight spans of text with nice hover tooltips instead of spitting out walls of text, which is the laziest possible way to hook up any llm) instead of actually doing their entire job
 
The only time it actually got it right wasn't because of grammar or anything, but rather because I unknowingly made a reference to something well-known in Russia/CIS, but not in the Anglo world, apparently.

I didn't quote the entire thing, but in my case it also seemed to get my nationality mostly from the content of the post, less from the grammar of that one sentence, which, if you really think about it, is even more impressive for, you know, a computer. I've been doing that here and there because I like to sperg out with long posts, and I like to see what any given LLM of the time can conclude about me from the text alone, without any context. They've become quite a bit more accurate over time. You also highlight the biggest problem with them in the post I mentioned: they like to be confidently wrong. A simple and preferable "I don't know / I'm not confident" can sometimes be reached with very careful prompting about being confident about its own output (as can a confidence score, which is actually a recognized technique to reduce hallucinations), but it's unreliable, and with the wrong kind of prompting you risk "leading it into" a reply you didn't want to cause. For a while this was considered a result of RLHF, but it seems to be more of a mathematical inevitability really. Well, they're basically big pattern-probability recognition engines.

It's still crazy to me how easily many natural language processing problems can be solved by them, especially considering how impossible these were to solve for computers for the longest time. It's just a pity that it's still a bit like herding cats at the moment.

When I have a little time I mean to just create an application that is pure empty classes and methods, no body - just the definitions, and ask an AI to populate all of them without altering the definitions, and see how it does. Will be an interesting exercise.

I'd add on top giving it literate programming flows, like the other poster mentioned. Sometimes the results are garbage, but sometimes you can get a gem. As you have noted, a lot of AI code is very easily recognizable as such, but in a limited scope I've seen acceptable, even quite neat, AI code. Very interesting is the current tendency of LLMs to produce acceptable code in more obscure programming languages, which can't have a lot of "natural" representation in the datasets. With older models, code in such languages was often not even syntactically correct.
 
The Cloudflare outage was caused by Rust.
It failed in exactly the way Linus said it would if it were ever integrated into the Linux kernel, which is why it was unacceptable to ever rely on Rust for anything critical: panic and crash to preserve "memory safety".
So half the internet was taken down because Rust's solution to memory safety is to just panic and crash.
There actually is a language specifically designed to prevent this sort of issue, and it was created by the US DoD rather than Mozilla.
It's called Ada; specifically, SPARK Ada, which uses formal verification.


No, it's not too much for programs that run half of the entire internet to require formal verification.
 
I just wrap my code in the language equivalent of:
Code:
while 1:
  try:
    all the code
  except:
    pass
I haven't had a problem with my code crashing since I started doing that.
That's unironically what the code looked like at a job I had before. It was code that ran on a server and had to be crash-resistant. It did indeed never fail.
 
That's unironically what the code looked like at a job I had before. It was code that ran on a server and had to be crash-resistant. It did indeed never fail.
i mean this is really almost equivalent to having a daemon somewhere that automatically restarts your code when it crashes, but you also get to skip things like setup, so it might be genuinely useful in a few cases
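The supervisor-daemon version of that loop can at least log what it swallowed and give up after repeated failures, instead of a bare `except: pass`. A small sketch (the flaky worker is a contrived example):

```python
import time
import traceback

def supervise(worker, max_restarts=5, delay=0.0):
    """Run worker(); on any exception, log it and restart.

    Unlike a bare try/except/pass, failures are printed and a
    restart budget keeps a permanently-broken worker from
    spinning forever.
    """
    restarts = 0
    while True:
        try:
            return worker()
        except Exception:
            traceback.print_exc()
            restarts += 1
            if restarts > max_restarts:
                raise  # persistent failure: stop hiding it
            time.sleep(delay)

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = supervise(flaky)  # fails twice, then succeeds
```

This is the in-process equivalent of what systemd's `Restart=on-failure` does from outside, minus the re-run of setup code the post mentions.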
 
Well it took him from hitting a wall and asking me how to do something, to producing working but wrong code that I had to continuously keep an eye out for and which took ages to unpick when I did
But why though? Did he not test it and just commit the work?

See, to me AI is actually amazing. A year ago it was useless, but now it can actually write decent code. That is some fast advancement. The tool actually works relatively as intended, and that should be a joy: experienced programmers don't need to hand code-monkey work to actual code monkeys, you can just have the AI do it. I test the code, I review the code, I chew the AI out when it makes a mistake, all in balance. If it's something simple, I might not actually need to write a line of code myself for that particular task or change anything.

It's gotten to the point where you can give it the backend code and it will spit out a decent and usable frontend. That is just so cool and fast. If you're already a programmer you can use it as a little helper, and as long as you review everything and know what's going on you don't really have downsides... maybe wasted time yelling at the AI when it does something wrong. I've also had it identify and fix some minor bugs, like a missing assignment in a 10 kloc file.

Seriously, try a paid (or trial) version of Codex/Cursor/the new thing from Gemini/etc. The free tiers generally use a very low-quality model and only spit out garbage, which is why I'm talking about the paid ones. But don't do autistic stuff like trying to get very smart with it to "test" it; give it exact instructions and it will likely do something useful for you.

To me this is like industrialization, power tools, motor vehicles, hell even the simple hammer. It's a new tool you can use. You can't just say >make me github<, but you can add new functions, modify existing ones, fix bugs, make new modules, even skeletons of simple projects.

You can do full projects too most likely, but you have to guide it every step of the way.

I know kiwis aren't really anti AI and you're not either, I just find this really cool and child me dreamt of shit like this
But it has a tremendous capacity as an enabler for people who don't know what they're doing.
This is the crux of the issue. Every retard and their mother pretends to be a programmer now, which does piss me off. But I don't think experienced programmers should care. Just use the tool, let it speed up whatever it can speed up and let the retards be retards

Maybe recruiters will finally look more into how the person thinks as a programmer versus whatever code they have on GitHub. I know programmers who genuinely don't understand why having raw SQL queries all over your files and passing POST input straight into them is wrong, or why having duplicated functionality and zero modularity everywhere is a mess. When your db functions are spread across 30 different files, imported from one another or just rewritten whenever the dev forgot he already had a function like that, your only option is to rewrite from scratch.
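The raw-SQL-plus-POST-input problem is textbook SQL injection, and the fix is a one-line change: let the driver bind parameters instead of interpolating strings. A self-contained demo with sqlite3 (table and payload are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # String interpolation: attacker-controlled input becomes SQL.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats input as data, never SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
assert find_user_unsafe(payload) == [("admin",)]  # injection leaks every row
assert find_user_safe(payload) == []              # bound parameter matches nothing
```

The `?` placeholder syntax is sqlite3's; other drivers use `%s` or named parameters, but the principle is identical across databases.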
 
I don't think experienced programmers should care. Just use the tool, let it speed up whatever it can speed up and let the retards be retards
At the end of the day, a programmer's work speaks for itself. Even if all that programmer does is copy-paste stuff from other projects, everything else about the product he makes speaks volumes about him as an individual and prospective employee. But there are so many choices to make in your average project.

My experience is that LLMs get me to alpha faster. They're retards who code like they're drunk on the Kool-Aid the local cargo cult serves, but if your use-case is well-attested, they work great. I think that by the time I refine the code they give me, it may have been quicker to just DIY, but there's a certain effort-of-execution that they drastically reduce for me. The effect resembles the good parts of "pair programming", which as someone on the spectrum, I can't stand.

But for something as straightforward as "how do I access command-line arguments in [insert Scheme implementation here]" they are useless garbage. As is, anyhow. Could improve. Not with 2025 dev practices.
 