Open Source Software Community - it's about ethics in Codes of Conduct

I assume its run speed can be measured in minutes per token instead of tokens per minute?
350 in 3½ minutes. "On a machine born when disco was still a survivable condition"
I suspect having the code written in no-lube 70's assembly rather than vibe-coded rusty troonslop makes a bit of difference.
 
Last edited:
I bet you could train an LLM to spam low quality shit commits because even if it sucked at coding, it could social engineer autists into approving its "contributions."
Train a whole LLM just for that? A chatbot with pre-programmed answers like ELIZA could achieve this already (rough sketch after the list). Let me demonstrate:
  1. Commit: [add] Code of Conduct
  2. Commit: [add] Ethical Source License, Opinionated Queer License
  3. Commit: [fix] Remove transphobic and anti-Black language from project structure and documentation
  4. Create pull request
  5. Comment: Hey folks, for the safety of our diverse contributors in the MIM2S671488SLGBTQIAASN33DZ+ community, I'd like to suggest the appointment of a Community Manager. As a queer trans disabled anti-fascist womxn of color with 11 years of experience in tech community moderation, I'm pleased to announce that I'm able to take on this important role.
  6. Comment (in case of opposition): Our diverse community cannot stand silent in the face of discrimination and microaggressions. [Nazi bar story]. [Paradox of tolerance explanation]. In a world where queer trans immigrants are facing genocide, this is our call to act.
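For illustration only (this is my sketch, not a real bot, and the trigger words are made up): the "pre-programmed answers" approach above needs nothing smarter than a lookup table keyed on a few trigger words.
Code:
# Hypothetical sketch of an ELIZA-grade "contributor": no model, no training,
# just canned commits and canned replies keyed to trigger words.
import re

CANNED_COMMITS = [
    "[add] Code of Conduct",
    "[add] Ethical Source License",
    "[fix] Remove non-inclusive language from documentation",
]

CANNED_REPLIES = {
    "default": (
        "Hey folks, for the safety of our diverse contributors, I'd like to "
        "suggest the appointment of a Community Manager."
    ),
    "opposition": (
        "Our community cannot stand silent in the face of microaggressions. "
        "This is our call to act."
    ),
}

# Words that (hypothetically) signal pushback on the pull request.
OPPOSITION_WORDS = {"no", "reject", "close", "oppose", "spam"}

def reply(comment: str) -> str:
    """Pick a canned response from trigger words; zero intelligence required."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    if words & OPPOSITION_WORDS:
        return CANNED_REPLIES["opposition"]
    return CANNED_REPLIES["default"]

if __name__ == "__main__":
    for message in CANNED_COMMITS:
        print("commit:", message)
    print(reply("This seems off-topic, I'm going to close the PR."))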
 
350 in 3½ minutes. "On a machine born when disco was still a survivable condition"
I suspect having the code written in hardcore 70's assembly rather than vibe-coded rusty troonslop makes a bit of difference.
That's more than a little terrifying. Imagine if we had an AI built in assembly running on a modern AI server farm.
 
That's more than a little terrifying. Imagine if we had an AI built in assembly running on a modern AI server farm.
I'm not going into details, but I am thinking of getting back into programming.
Haven't touched that since tard-wrangling a COBOL system into y2k compliance in 1999.

Edit: But my sub-$200 laptop (thank you, Amazon returns) has at least TWICE the power of a PDP-11.
*lowers lighting, dons hoodie*
It's a Linux system, I know this...
 
Last edited:
Speaking of AI, and backtracking to the "rebuilding technology after Ragnarok" side-quest from a few pages ago, I see Dave Plummer has shoved AI onto a PDP-11.
No word yet on if it can run Crysis.
I watched the video. Everyone, watch the damn video. Especially if you're from my homeland (the AI Derangement thread). It's a great and beginner-friendly introduction to how "AI" works: it introduces a model training suite dedicated to the hardware, built from the ground up in Assembly, and trains a very stripped-down model specializing in one algorithmic task that can run on a 6MHz CPU and 32KB of memory.
That's more than a little terrifying. Imagine if we had an AI built in assembly running on a modern AI server farm.
Maybe the price of GPUs, RAM and water would go down.
The key message of the video, repeated with many colorful analogies, is that the math of training a transformer-based neural network is basic linear algebra that any computer, or (You) for that matter, can do. The Assembly engine is for compatibility with the hardware rather than a generalized optimization. For modern big chungus models, you might be able to strip some operating overhead with an Assembly engine, but the bulk of resource usage goes to loading the model itself, and the primary bottleneck is the hardware's data bandwidth and hardware-level operations per second. Everyone is trying to make models more efficient by changing up model architecture so you don't have to load as much data or do as much math to get the same result quality. Using a lower-level language for training and inference engines won't provide big leaps in performance.
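To make the "it's just linear algebra" point concrete, here's a toy sketch of my own (not the video's engine, and the sizes are made up): a single attention head is a handful of matrix multiplies plus a softmax, and a training step is a couple more.
Code:
# Toy illustration in NumPy: the core math of a transformer layer and of a
# gradient-descent step. Sizes are tiny and arbitrary; this is not the
# PDP-11 engine, just the same algebra any general-purpose CPU grinds through.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16

x = rng.standard_normal((seq_len, d_model))          # input token embeddings
W_q = rng.standard_normal((d_model, d_model)) * 0.1  # learned projections
W_k = rng.standard_normal((d_model, d_model)) * 0.1
W_v = rng.standard_normal((d_model, d_model)) * 0.1

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Single attention head, forward pass: three matmuls, a softmax, one more matmul.
Q, K, V = x @ W_q, x @ W_k, x @ W_v
attn = softmax(Q @ K.T / np.sqrt(d_model)) @ V

# "Training" in the same spirit: gradient descent on a linear map is another
# matmul or two per step. 350 steps here just to mirror the video's step count.
W = rng.standard_normal((d_model, d_model)) * 0.1
target = rng.standard_normal((seq_len, d_model))
for step in range(350):
    pred = attn @ W
    grad = attn.T @ (pred - target) / len(attn)   # (scaled) squared-error gradient
    W -= 0.2 * grad
print("loss after 350 steps:", float(((attn @ W - target) ** 2).mean()))
Nothing in there needs an FPU farm; scale, not exotic math, is what makes the modern models expensive.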

This is the model spec card. It's no ChatGPT.
[Attached image: model spec card]
There are more limitations of the training suite (like integer precision rather than floating point and no libraries), but these are again the fault of the 1970s hardware and its 16-bit registers.
[Attached image: training suite limitations]
350 in 3½ minutes. "On a machine born when disco was still a survivable condition"
I suspect having the code written in hardcore 70's assembly rather than vibe-coded rusty troonslop makes a bit of difference.
The model took 350 steps, meaning it processed 350 batches of data samples in 3½ minutes, to achieve a negligible loss (wrongness) value and complete training. Output tokens per second, which refers to inference, wasn't computed numerically, but you can see from the accuracy check at the end of training that it was fast. The CPU only has to process 8 input tokens, which is fast enough at 6MHz, and the final model size is 6KB, which fits nicely into memory.
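On the "integer precision rather than floating point" limitation mentioned above: on an FPU-less machine that math comes down to fixed-point arithmetic, which looks roughly like the sketch below. The Q8.8 format is my assumption for illustration; the video's actual number format isn't given in this thread.
Code:
# Rough sketch of fixed-point (integer-only) arithmetic of the kind an
# FPU-less 16-bit-era machine relies on. Q8.8 is assumed for illustration.
SHIFT = 8            # 8 fractional bits
ONE = 1 << SHIFT     # 1.0 in Q8.8 == 256

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def to_float(x: int) -> float:
    return x / ONE

def fx_mul(a: int, b: int) -> int:
    # Widening multiply, then shift back down: one multiply and one shift,
    # the sort of thing a short assembly routine handles without an FPU.
    return (a * b) >> SHIFT

def fx_dot(xs: list[int], ys: list[int]) -> int:
    # A dot product (the workhorse of the matmuls) is just repeated
    # multiply-accumulate in the same format.
    acc = 0
    for a, b in zip(xs, ys):
        acc += a * b
    return acc >> SHIFT

weights = [to_fixed(w) for w in (0.5, -0.25, 1.5)]
inputs = [to_fixed(v) for v in (2.0, 4.0, 0.5)]
print(to_float(fx_dot(weights, inputs)))   # ~0.75, within ~1/256 of the true value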
 
Last edited:
I watched the video. Everyone, watch the damn video.
Dave is the bloke who created the OG Task Manager and ported that damn W2K Space Cadet Pinball.
He has opinions about modern coding that he does not publicly state on YouTube.
 
He has opinions about modern coding that he does not publicly state on YouTube.
Against vibe coding, I would assume? I can't blame him; if I were an ex-Microsoft OG engineer and witnessed what the jeets have done to the successors of my projects, I'd go scorched earth on anything associated with them too. The diagram visuals appear AI-generated thoughbeit.

Dave has the professionalism to not sperg about stealius artistus/waterus deletus/AGI, unlike the fellows documented in the AIDS thread. I didn't notice any AIDS in the first 18 minutes explaining the hardware, model architecture and results (the remainder was advertiser-length padding reiterating the conclusion of the video, which I tuned out of). "It's just linear algebra guys" is TRVE and it's what engineers have been trying to say before they got drowned out by corporate hype. Knowing this doesn't downplay the usefulness of large models, and provides a good starting point to understand the bigger-brained math that builds on the basic concept.
 
Knowing this doesn't downplay the usefulness of large models, and provides a good starting point to understand the bigger-brained math that builds on the basic concept.
No, but as Dave says, "seeing how the sausage is made" and stuffing it into a PDP-11 should work wonders for anyone dazzled by AI. If they are smart enough.
 
Why does everything have to be infested with this shit?

I saw some discussion about the Gleam language, which is another in the Erlang/Elixir family (running on the BEAM), so I checked out their home page:

View attachment 8820336

A smiling star for a programming language. "Friendly." A bit irregular, but okay.

Scrolling down on their home page, you then see this:
View attachment 8820344

To answer the why question, I looked into the authorship. First, note that the following is in the HTML of the above paragraph:
Code:
<!-- Hello! If you make a PR changing this I will ban you. -->
The repo for the website is here:

The addition was made by Louis Pinfold, the creator of the language, whose pronouns are they/them or he/him:
View attachment 8820403
His website: https://lpil.uk/
Bluesky (of course): https://bsky.app/profile/lpil.uk

Pooner? Probably not. More likely some Drew DeVault type faggot.

ETA: Why the Lunacy negrates??
On the matter of the language itself, it is profoundly ass: they STILL have no proper OTP bindings (currently just supervisors and part of gen_server, lmao). They market themselves as a language that scales and has type safety, yet to build a scalable system you have to use Erlang libs, without that type safety.
And they don't support multiple function heads.
 
As much as I despise Node (who the hell used JavaScript on the client side, took a look at the infinitely wide range of decent to great options for writing server-side code, and went, "you know, instead of using any of those, I want to use whack-ass JavaScript on the server too")
It's so their web page and "client side" can be the same code, so they only have to write it once. Similar reason why everything UI is made for touch-screen mobile: a phone is the lowest common denominator. Yeah, buttons are the size of your hand on desktop, like some kind of Duplo kids' toy interface, but they only had to write it once.

So forevermore all applications will take 50GB of memory per tab and have a UI designed for 50IQ niggercattle on their Galaxy S80 they bought on credit (that you will pay for when they default). Enjoy!
 
You've been infected by the techtranny mind virus if you actually believe that using AI in a project means that internal quality control and testing no longer exist, I'm afraid. "Generate, commit and push to production in one prompt" is one jeet-preferred approach, not the only approach that exists. You can ask AI to generate code atomically, review every change manually, and deny changes that aren't up to the project standard. As for the tranny's claim that you can "check for agent files and contributors", it's trivially easy to keep AGENTS.md or CLAUDE.md out of the public repository (just leave them untracked) and to not have an "AI Agent" account on the team by using a non-integrated agent interface.

Besides, if your project is that serious, you should already be version pinning and code reviewing dependencies regardless of whether the dependency uses AI or not.
Well, thanks to that list I found out that the "jeet-preferred approach" has now spread to fucking vim. I was wondering how they managed to introduce several massive security issues into fucking vim so quickly. I didn't expect it to deteriorate this fast after Bram died, even with "modern developers" all over.

Fuck DeVault, but his vim-classic is sorely needed.
 
It's so their web page and "client side" can be the same code, so they only have to write it once. Similar reason why everything UI is made for touch-screen mobile: a phone is the lowest common denominator. Yeah, buttons are the size of your hand on desktop, like some kind of Duplo kids' toy interface, but they only had to write it once.

So forevermore all applications will take 50GB of memory per tab and have a UI designed for 50IQ niggercattle on their Galaxy S80 they bought on credit (that you will pay for when they default). Enjoy!
Hypothetically, if your server-side code and your client-side code are identical, could someone look at the client-side code they downloaded by browsing your website and spot vulnerabilities that could potentially be exploited in your server-side code?
 
Hypothetically, if your server-side code and your client-side code are identical, could someone look at the client-side code they downloaded by browsing your website and spot vulnerabilities that could potentially be exploited in your server-side code?
...they don't even need to be identical. Just looking at how the two work together can give you a ton of insight into the backend and potential attack vectors.
 
Hypothetically, if your server-side code and your client-side code are identical, could someone look at the client-side code they downloaded by browsing your website and spot vulnerabilities that could potentially be exploited in your server-side code?
Wasn't that semi-recent massive security issue in some absurdly common JS framework because of "magic" frontend/backend integration? You don't need to think about whether the code runs client- or server-side, and oopsie-doodles, now you're executing untrusted code deserialized from a malicious client.
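The post doesn't name the framework or the specific CVE, so treat this as a generic, Python-flavored sketch of that bug class: when "isomorphic" code blurs who produced the data, it's easy to feed client-controlled bytes into a deserializer that can do far more than parse data.
Code:
# Generic shape of the "deserialize untrusted client data" bug, illustrated in
# Python rather than the unnamed JS framework from the post above.
import json
import pickle

def handle_request_unsafe(raw: bytes):
    # DANGEROUS: pickle reconstructs arbitrary objects, and a crafted payload
    # can run code during unpickling. Tolerable for trusted server-side state,
    # catastrophic for anything a client can send.
    return pickle.loads(raw)

def handle_request_safer(raw: bytes):
    # Data-only format: the worst a malicious client gets is a parse error.
    return json.loads(raw)

if __name__ == "__main__":
    payload = json.dumps({"user": "guest"}).encode()
    print(handle_request_safer(payload))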
 
He has opinions about modern coding that he does not publicly state on YouTube.
Oh, but he does, and it's both based and level-headed, a bit :optimistic: at times even.


Rant: I was talking to an old friend, and at some point he blurted out the following: "I was thinking about getting one of those new Mac Minis with 64GB cause 32 isn't enough for me anymore, I run out of RAM". When pressed on wtf he needs so much for, he described that there are millions of Node modules that take up most of this shit. I had a Vietnam flashback moment of me getting by perfectly on 1MB of Chip and 8MB of Fast on my Amiga, but then I asked WTF he was doing just a couple of years ago when 16GB was quite enough. It kinda spiraled out of control from there, with him stating that "You just don't know modern requirements".

I just don't have the words anymore. 64GB to process strings upon strings and spit out some HTML over a socket. These people are fucking mental
 
Last edited:
64GB to process strings upon strings and spit out some HTML over a socket. These people are fucking mental
Oldfags who grew up with their Speccys will be vindicated when the LLM companies buying up all the chips, plus identity mandates for computer use, have us hustling on the grimy streets of Neo-London for three megabytes of hot RAM to jack into our unregulated consoles.
 
Last edited: