Diseased Open Source Software Community - it's about ethics in Code of Conducts

Especially with the freedesktop announcement about tighter coupling to systemd, alternate implementations for managing X are going to hit the big time with the anti-systemd crowd as well. This is probably why the freedesktop people went thermonuclear on xlibre: the systemd coupling was already being discussed, and a stable alternative implementation of X would interfere.

Is the xlibre guy the chosen one? I am suspicious. He'll need to put a team together. But maybe...
X11 is in a really funny spot where a lot of the audience is like "We know it has limitations, but everything we want works at this point." and the core team is saying "Yuck!" Usually it's the other way around.

I think a big problem with the X successors is they all tend to say "OK, we're ready for battle!" but then things that you expect from X11 like copy and paste, being able to use the windowing system remotely, etc. aren't supported and then people get annoyed.

It's a testament to the design and modularity of X11 that it has lasted this long. I am curious to see if this xlibre project is serious and will extend the project's lifetime even further. There's also the question of non-Linux OSes like the BSDs... I don't think they are seriously looking to get away from X11 at this point, so I suspect they may help maintain X11. From what I know, Wayland is at least supported on FreeBSD but I don't know how many users it has. Actually, I'm not even sure how many people use FreeBSD as a desktop at all anymore.
 
Just interjecting to say there is no such thing as a "good" AI - Grok, ChatGPT and Claude are all too dimwitted to follow basic instructions. I was trying Grok out and asked it for the pinout of a Pi Pico, which it was not able to fetch correctly, and after being told it was wrong, it still confidently insisted that pin 39 supplies 3.3V to its connected wire. In reality it gives about 5V, which can fry a lot of electronics that aren't meant to take that much voltage. ChatGPT also kept confusing pins; only Claude actually took the feedback as gospel and went with it. Be really fucking careful if you use AI for sensitive tasks.
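
For anyone about to wire one up: per the published Pico pinout (double-check the official datasheet, never a chatbot), the power pins are roughly as below. A quick reference sketch in Python, nothing authoritative:

    # Power pins on the Raspberry Pi Pico, per the published pinout.
    # Verify against the official datasheet before connecting anything.
    PICO_POWER_PINS = {
        36: "3V3(OUT) - regulated 3.3 V output",
        37: "3V3_EN   - drive low to disable the 3.3 V regulator",
        39: "VSYS     - system supply input, roughly 5 V on USB power",
        40: "VBUS     - the USB 5 V rail",
    }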
 
AI gives answers that are at best slightly above average. The average person however is clueless about most (if not all) things that surround him. When you're feeding it every single book, every single webpage, etc. it doesn't matter how much quality control you do. Bad, plainly wrong answers WILL seep in, and they will appear in weird and unexpected ways. Not to mention hallucinations and censorship. LLMs are not thinkers. They are probabilistic neural networks twisted by execs and marketing into a crude mockery of nature's perfection.

On a semi-related note, as the Internet gets filled with an ever growing amount of AI slop, will models that were trained before ${current_year} become more valuable than the newest and shiniest ones?
 
Be really fucking careful if you use AI for sensitive tasks.
Hallucination is a serious problem where the LLM will sometimes confabulate facts if it has low confidence.

I'd say any sensitive task shouldn't use AI at all. For it to be really accurate, you'd have to have something checking its work, and if you want to do that in a comprehensive way, it approaches the effort of just constructing the thing yourself in the first place.

There's also the effect of familiarity. If I write a program myself and it starts small and I gradually expand it, I'm there for the whole thing. I'm growing it and getting familiar with it at the same time. In a world where you ask the LLM for a semi-complex program, it's more or less just handed off to you. You don't know the thought process that went into it, because you didn't "grow it" yourself.

There are already a lot of people saying: "Wow, I can't code at all, but thanks to ChatGPT I can make an app now!" Everything works great at the beginning of the project. However, once it approaches a few thousand lines and starts getting more complex, the LLMs start to drop the ball. Compound this with the fact that a novice or outsider has no experience with larger projects: other than asking the LLM to "Add feature X to the program", they do not have the knowledge to instruct the LLM in how to grow and organize these larger projects.

I believe by the time AI gets to the point where someone that doesn't know how to code can release a complex app and/or platform, that same AI will be intelligent enough to do many other kinds of jobs too: attorney, accountant, etc. The role of LLMs in coding or other tasks currently seems to be most helpful as a "sidekick" where the human is firmly in charge and the AI is used to implement smaller sub-tasks on the overall project.

I'd love for someone to prove me wrong and show me something non-trivial that a total outsider built solely with an LLM, while knowing nothing about software development. The follow-up to that might be to quibble about what "non-trivial" means exactly.

My "anecdata" shows many people falling off or having trouble a few thousand lines into the project and either walking off or having to bring in a human developer. I'd be interested to see if there is anything out there where the LLM is driving instead of a person.
 
The role of LLMs in coding or other tasks currently seems to be most helpful as a "sidekick" where the human is firmly in charge and the AI is used to implement smaller sub-tasks on the overall project.
And honestly, that's pretty big. Until about ten years ago, I didn't think I'd see it in my lifetime. You can talk to computers now, and to a reasonable degree they give intelligible answers that fit what you actually said; isn't that cool? I'm surprised how dismissive people are about this. You young people have no sense of wonder, man...

But I feel with AI the goalposts will just move forever from this point on. "Sure, it did build a utopian society and cured all human diseases in a simulation, but look how terrible that futuristic city's universal care hospital reception waiting areas are. I mean pastel, seriously? Stupid robot."
 
But I feel with AI the goalposts will just move forever from this point on. "Sure, it did build a utopian society and cured all human diseases in a simulation, but look how terrible that futuristic city's universal care hospital reception waiting areas are. I mean pastel, seriously? Stupid robot."
100%. The human condition is reaching some new height, which then turns into a baseline, which then turns into an expectation.

People are already over the fact that there are semi-intelligent AI tools that can do low-intelligence tasks. Our brains are configured to always be hungry for the next thing, and once we get it, to set the bar even higher.
 
Just interjecting to say there is no such thing as a "good" AI - Grok, ChatGPT and Claude are all too dimwitted to follow basic instructions. I was trying Grok out and asked it for the pinout of a Pi Pico, which it was not able to fetch correctly, and after being told it was wrong, it still confidently insisted that pin 39 supplies 3.3V to its connected wire. In reality it gives about 5V, which can fry a lot of electronics that aren't meant to take that much voltage. ChatGPT also kept confusing pins; only Claude actually took the feedback as gospel and went with it. Be really fucking careful if you use AI for sensitive tasks.
AI gives answers that are at best slightly above average. The average person however is clueless about most (if not all) things that surround him. When you're feeding it every single book, every single webpage, etc. it doesn't matter how much quality control you do. Bad, plainly wrong answers WILL seep in, and they will appear in weird and unexpected ways. Not to mention hallucinations and censorship. LLMs are not thinkers. They are probabilistic neural networks twisted by execs and marketing into a crude mockery of nature's perfection.
Hallucination is a serious problem where the LLM will sometimes confabulate facts if it has low confidence.

I think the biggest problem with LLMs is the trust people have in them, and also how very few people (software engineers who should know better included) understand how they work. It's literally a data structure with weights between a bunch of words (really parts of words, i.e. tokens; the weights are the "parameters") and every single other part of a word. Your prompt just gets tokenized and run through a transformer. The key innovation, attention blocks, allows every part of that prompt to influence the weighting of the next possible word output.

An LLM isn't going to be able to tell you what pin on a Raspberry Pi outputs 3.3V, because that knowledge is literally not encoded anywhere as a fact. It's trained on a corpus of text, plus human-reinforced training, to produce text that looks like regular human writing. When it gets things right, it's literally always by accident. An LLM produces stuff that might look like legal text, but of course it's not going to cite real cases, because it's just predictively generating stuff that looks like legal text.
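
To make that concrete, here's a toy sketch of the general shape of the idea (a bigram counter standing in for billions of learned weights; no real model is anywhere near this simple): it strings together plausible-looking continuations with zero underlying knowledge:

    from collections import Counter, defaultdict

    # Toy "language model": bigram counts stand in for billions of learned weights.
    corpus = "the court held that the court may dismiss the case".split()
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def next_token(word):
        # Pick the statistically most plausible continuation.
        # Plausible-looking is the only criterion; nothing is fact-checked.
        return following[word].most_common(1)[0][0]

    token = "the"
    output = [token]
    for _ in range(5):
        token = next_token(token)
        output.append(token)
    print(" ".join(output))  # fluent-looking, but there is no "knowledge" behind it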

3Blue1Brown has the best basic explanation I've seen (~7 minutes) that you can hand to most normie friends:

and if you remember your basics of backpropagation from your AI class, his DL5/DL6 videos do an excellent job of going into the details of how transformers work.

Some companies are using RAG systems, which pair an LLM with an actual document store it can retrieve information from, which might give you somewhat more useful results. But the base LLM is massive and can't just be "reprogrammed." It's not programmed at all. It's trained by a combination of billions of pirated texts and 1000 people in a cube farm for a year clicking on whichever of a set of generated responses sounds the least stupid.
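
The shape of a RAG setup, boiled down to a sketch (the documents and the final model call are made up for illustration; real systems rank by vector-embedding similarity and send the prompt to an actual API):

    # Bare-bones RAG skeleton: fetch relevant text first, then have the model
    # answer from that text instead of from its frozen weights.
    documents = [
        "Pico pin 36 is 3V3(OUT); pin 39 is VSYS, about 5 V on USB power.",
        "doas is OpenBSD's minimal replacement for sudo.",
    ]

    def retrieve(query, k=1):
        # Crude keyword overlap keeps the sketch self-contained;
        # real retrievers use embeddings.
        def overlap(text):
            return len(set(query.lower().split()) & set(text.lower().split()))
        return sorted(documents, key=overlap, reverse=True)[:k]

    def build_prompt(query):
        context = "\n".join(retrieve(query))
        # This prompt would be sent to the LLM, which is told to stick to
        # the retrieved context rather than free-associate.
        return f"Answer using only this context:\n{context}\n\nQ: {query}"

    print(build_prompt("which pico pin is 3V3?"))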

Edit: And just to keep things on topic, there are no Open Source models. The stuff you get on Hugging Face is the trained parameters, but the companies who make them can't release the "source" or training material, because it's literally tons of copyrighted stuff. You're not going to find any George R.R. Martin text actually in the models, because that text just trained the weights. The whole fight over the OSI's definition of an open source LLM is what caused some of their recent dramas/shakeups.
 
On a semi-related note, as the Internet gets filled with an ever growing amount of AI slop, will models that were trained before ${current_year} become more valuable than the newest and shiniest ones?
Possibly. That is kind of already the case with older, more robust computer hardware vs. today's flimsy plastic notebook-style laptops, for instance. In fact, I'd go a step further and say that instead of massively overtrained corporate-owned AI, every user would benefit far more from having a locally "grown" AI of their own, which some people are already kind of doing. For instance, feeding a coding assistant your own code, or code for a project you're expected to contribute to, alongside some basic generic examples of syntax for the packages, libraries and languages you expect it to use, would likely outdo a more massive LLM, since it avoids the data poisoning that comes from feeding it massive amounts of generic scraped data. That way, it *should* have a much more granular understanding of the project and your coding style, which it would then be able to ape.

I managed to pull this off myself on a much smaller scale for mass-producing SEO content that very closely copied my style of writing, back when DeepSeek first got released. Not my proudest moment, but it was a good learning experience.

Anyhow, I am by no means a machine learning expert, so my technical knowledge on the topic is pretty limited, but isn't there some way to tweak the "confidence level" of an LLM to prevent it from hallucinating? Or is that just a quirk of its instructions to provide answers that the user will find satisfactory?
 
isn't there some way to tweak the "confidence level" of an LLM to prevent it from hallucinating?
Not really. You can lower the "temperature" (roughly: how much it's allowed to deviate from the most likely response), but this tends to make the output vapid, robotic and more useless in general. Again, keep in mind that it's basically a souped-up autocompleter with an unimaginable amount of data behind it. "Confidence", "hallucination", "thought process" are all just intentionally suggestive framing. Even the presentation of ChatGPT as a chatbot is suggestive framing. You're not talking to something; it's just predicting how the "conversation" would continue on its end.
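
For the curious, "temperature" is just a divisor applied to the raw scores before they become probabilities. A sketch with made-up numbers:

    import math

    # Made-up raw scores ("logits") for four candidate next tokens.
    logits = {"the": 4.0, "a": 3.0, "banana": 1.0, "zebra": 0.5}

    def softmax(scores, temperature):
        # Dividing by the temperature before exponentiating is the whole trick:
        # low values exaggerate the gap between tokens, high values flatten it.
        scaled = {tok: math.exp(v / temperature) for tok, v in scores.items()}
        total = sum(scaled.values())
        return {tok: v / total for tok, v in scaled.items()}

    for t in (0.2, 1.0, 2.0):
        probs = softmax(logits, t)
        print(t, {tok: round(p, 3) for tok, p in probs.items()})
    # At 0.2 "the" soaks up nearly all the probability (safe, bland output);
    # at 2.0 even "zebra" gets a realistic chance (creative, less reliable).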
 
Can someone please give me a run down on what's been happening with the sudo-rs shit? Over the past day or two, I've been hearing about C-niles and rust trannies fighting over this terminal change.
Another case of Rust zealots trying to force their rewrites into everything and Canonical happily going along with it. That said, this is probably the best case for it. Sudo is a bloated (a quarter of a million lines of code), occasionally buggy mess that there have already been moves to abandon - OpenBSD adopted its doas replacement about a decade ago now (although that's still in C) and that's percolated out through the BSD and Linux worlds; it's what I use where possible.
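
For a sense of the contrast with sudo's quarter-million lines, a usable doas.conf can be a single line (persist caches the password the way sudo does by default):

    # /etc/doas.conf - the entire configuration
    permit persist :wheel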

sudo-rs has just been the victim of being conflated with other Rust rewrites, which have had more questionable benefits and stronger ideological pushes behind them. A lot of the anger is because this has happened in the wake of Canonical switching from the GNU Coreutils to the uutils Rust rewrite, which offers minimal benefits and makes FSF-style autists apoplectic because it's MIT-licenced rather than GPL - sudo was always under a permissive licence, so that's not relevant here, but nuance starts to fade when the Cult of Rust acts identically regardless of context.
 
but nuance starts to fade when the Cult of Rust acts identically regardless of context.
You don't understand, Rust will make me a good programmer. I think the theory that Rust is being astroturfed as a way to sneakily change the culture is quite likely, since it allows them to push out the old guard in favor of the ideologically useful group that makes up most of the Rust dev demographic.
 
You don't understand, Rust will make me a good programmer. I think the theory that Rust is being astroturfed as a way to sneakily change the culture is quite likely, since it allows them to push out the old guard in favor of the ideologically useful group that makes up most of the Rust dev demographic.
I think what happens a lot of the time is that, vs. C, Rust makes some failure modes impossible (unless you are in "unsafe"). That single fact makes people want to rewrite the world in Rust. However, this trade-off isn't free of downsides... and Rust must also be weighed against C's extremely long lifetime. People are very, very, very used to dealing with C and not very used to dealing with Rust.

You also have kernel developers with decades of experience in C, with the Linux kernel as one of the artifacts. There have been a few attempts at a Rust OS, but none with the scope of Linux. Trying to get someone with 30 years of C experience to toss everything they know and love for another language is really hard. You also have the fact that when C++ appeared, it had roughly the hype level Rust has now, yet the whole world wasn't rewritten in C++ and it didn't save the world.

Potentially after years and years Rust will accumulate such a body of software that it will be well established that the tradeoffs it makes have better outcomes, but for now, a lot of people are choosing to sit on the sidelines and see who wins.

I'm not sure ANY language, whatever its advantages, could unseat C in the minds of Linux kernel / system hackers, but I guess we'll find out over time....
 
I'm not sure ANY language, whatever its advantages, could unseat C in the minds of Linux kernel / system hackers, but I guess we'll find out over time....
I feel like Zig would be better here, if only because its toolchain fully supports C, allowing you to use both in the same project without too much setup hassle. It also consistently (albeit slowly) introduces more language/toolchain features without jerking itself off at every turn. Alas, it also means it doesn't have a rabid fanbase that shoves it into every hole while screaming about how it's blazing fast and le memory safe.
 
Trying to get someone with 30 years of C experience to toss everything they know and love for another language is really hard.
TBH backhandedly insinuating the only reason for the lack of adoption is stubborn old people doesn't do much to undo the belief that Rust is the ideologues' language.

Potentially after years and years Rust will accumulate such a body of software that it will be well established that the tradeoffs it makes have better outcomes, but for now, a lot of people are choosing to sit on the sidelines and see who wins.
Honestly, this is never going to happen; it's very communist in its revolutionary nature and unwillingness to take responsibility for its failures.
 
There definitely are women coders out there but I think it's a shockingly high number of them that used to be male.
My experience has always been that (natal) "Women in Code" and women who know how to code are two pretty much entirely non-overlapping groups. I know many women who, by virtue of working with statistics, know how to get shit done with R or Python, or who, by virtue of working in an office, can write incredibly useful batch scripts and only marginally horrifying VBA. These women know how to code, but they aren't "Women in Code": they aren't salaried software developers, don't hold degrees, and as such don't count.

On the other side of things, there's the absolute state of new Comp Sci grads. I would hesitate to equate degree holders with actually capable programmers. Women in Computer Science do happen, but the handouts and concessions made for them basically guarantee they'll skew to the low end of an already skewed bell curve.
 
TBH backhandedly insinuating the only reason for the lack of adoption is stubborn old people doesn't do much to undo the belief that Rust is the ideologues' language.
Whoa whoa whoa.... if someone's been using C their whole career, loves it, and has no problems with it, that has nothing to do with "stubborn old people". If anything, that's a "problem of success". The people who have used C for that long know its strengths and weaknesses and have chosen to stay with it and avoid or deal with the weaknesses.

Typically, the people who find Rust the most attractive are the ones who haven't done a huge amount of work in C and/or are upset with some of its well-known issues. C is not a perfect language, because the idea of a perfect language doesn't exist. C makes certain concessions to fit a given footprint and set of resource constraints. Other languages make other tradeoffs.

You don't have to just call Rust the ideologues' language; you can criticize it for: slow compilation, difficult error messages, difficult-to-understand ownership semantics, lack of a stable ABI, etc. Someone who feels perfectly productive and happy in C has no incentive to swap.

If someone's happily been driving the same type of car for 30 years, do you think they'll just switch to a different brand all of a sudden for no reason? People tend to stay with what they like. That's not an indication of being stubborn, that's an indication change for no reason is pointless, or many times counter-productive.
 
You don't have to just call Rust the ideologues' language
That's its main defect though

slow compilation, difficult error messages, difficult-to-understand ownership semantics, lack of a stable ABI, etc
Also, the premise of enforced memory safety innately increases the difficulty of any problem and usually isn't worth it. The fact that what is basically a super-complex static analyzer is baked into the language, and that the language has no other real benefits, is what really cripples it. With C you can in fact statically analyze your code whichever way you want and otherwise still have pretty good syntax. Rust basically only has the one gimmick, and one that isn't appropriate to apply in most situations tbh.
 
That's its main defect though


Also, the premise of enforced memory safety innately increases the difficulty of any problem and usually isn't worth it. The fact that what is basically a super-complex static analyzer is baked into the language, and that the language has no other real benefits, is what really cripples it. With C you can in fact statically analyze your code whichever way you want and otherwise still have pretty good syntax. Rust basically only has the one gimmick, and one that isn't appropriate to apply in most situations tbh.
I might be the unpopular opinion here, but I think Rust has its place for stuff that will get scrutinized by malicious actors. The fact that it was originally made to power Mozilla's new browser engine (Servo) is a perfect example of that. The reason consoles like the Wii or the PS4 got hacked was bugs in Opera/WebKit. Rust will never make a shitty coder good; however, a very competent coder is still human and will make subtle mistakes. Rust will do its best to avert the potential disaster.
 