Open Source Software Community - it's about ethics in Code of Conducts

it also expects a large amount of easily usable stack space to be available, which is why languages like glsl are popular for gpus
I have written a LOT of code in C for various 8-bitters.
Including "C" compilers that do not have a stack.
That is actually a feature, not a bug, for embedded systems where memory is scarce and you don't even have an OS to step in and clean things up if your app runs out of memory.

A very nice feature of C on embedded systems is that it is so very easy to tell what the C code actually compiles to if you have some rudimentary understanding of the ISA and how a CPU works. It is also very easy to interface with hand-written assembler.

No stack. No dynamic memory allocation. It is a feature.
(not to you but to whoever you responded to. C works just fine on modern 8-bitters. The software that runs on the 8032 clone or equivalent inside your SD card is written in C)
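A minimal sketch of the no-heap style described above, with a hypothetical fixed-size event queue: everything is statically allocated, so there is no malloc to fail, nothing to leak, and the linker can tell you the firmware's worst-case RAM use up front. The queue itself and its size are made up for illustration, not from any real firmware.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical event queue for a small MCU. All storage is static,
 * so worst-case memory use is known at link time. No malloc, no free,
 * nothing to fragment, and no OS needed to clean up after you. */
#define QUEUE_LEN 8

static uint8_t queue[QUEUE_LEN];
static uint8_t head, tail, count;

static bool queue_push(uint8_t v)
{
    if (count == QUEUE_LEN)
        return false;              /* full: caller decides what to drop */
    queue[head] = v;
    head = (uint8_t)((head + 1) % QUEUE_LEN);
    count++;
    return true;
}

static bool queue_pop(uint8_t *out)
{
    if (count == 0)
        return false;              /* empty */
    *out = queue[tail];
    tail = (uint8_t)((tail + 1) % QUEUE_LEN);
    count--;
    return true;
}
```

Because every failure mode is an explicit `false` return rather than a runtime exhausting memory, the caller's behavior on overflow is a design decision instead of a crash.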
 
Lets be real, anything that isn't a lisp is goycattle corposlop.
(THANK-YOU 'LISP-SISTER) ; GOD BLESS 🙏🏻🙏🏻🙏🏻
Also: firejail or bubblewrap?
bubblewrap is a lot easier to fuck around with and do random hacks with. it's a bit like qemu, except not really, since it's a completely different kind of tool, but you know what i mean
also less suid i think
for sandboxing, flatpak uses bubblewrap and a weird dbus proxy
also recently gnome added a gay rust thing to gdk-pixbuf to load images with bubblewrap sandboxes (recipient of the 2025 annual This Could Have Been a Shell Script You Fucking Overengineering Nigger™ Award®)
 
It's not a low-level language; it is a high-level language that was designed to be portable at a time when systems varied a lot and compilers were simple. Though you can use it for low-level / non-portable code if you really want to.
It's being sold as low-level compared to everything else out there. Moreover, I can remember it being alternately described as high-level or low-level depending on whether you were trying to sell it as a "portable assembly" or "like FORTRAN, but better". Truth is people sold it as both depending on who they were talking to.
I think C's programming model aligns pretty well with modern x86 and other common architectures. C really only falls apart if you want to run it on ancient 8-bit CPUs.
It really doesn't. We live in a world of multicore, superscalar CPUs with multiple layers of cache that look nothing like the PDP-11s the C virtual machine is predicated on.
They are guaranteed to be at least a minimum size. It is part of what makes it portable. Also, guaranteeing fixed-width words and consistent behavior on systems where the native word size is some schizo shit like 17 bits would probably complicate the compiler and make code needlessly slower.
"At least a minimum size" != a guaranteed size, hence why people have to use kludges to specify the exact size.
 
C/C++ and even modern darlings like Go/Rust/Zig are pretty dogshit for taking advantage of modern hardware.

For an alternative that goes in the right direction, look at Halide. It allows you to specify the algorithm separately from the optimization, with built-in support for specifying vectorization, loop unrolling, etc. You can run on the CPU, the GPU, or move between them as needed.

Right now it's pretty specialized for graphics programming, but there's no reason you can't apply the main ideas more broadly.
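This isn't Halide code, but a rough sketch in plain C of the problem Halide addresses: in C, the "algorithm" (what to compute) and the "schedule" (how the loops actually run) are welded together, so changing the optimization means rewriting the code. Both functions below compute the same sum of squares; only the loop structure differs, and in Halide that difference would be a one-line schedule directive rather than a second implementation.

```c
#include <stddef.h>

/* "Algorithm": sum of squares of n values, written the obvious way. */
long sum_sq_plain(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += (long)a[i] * a[i];
    return s;
}

/* Same algorithm, hand-"scheduled": 4-way unrolled with independent
 * accumulators (to expose instruction-level parallelism) plus a scalar
 * tail loop. The optimization is entangled with the code itself. */
long sum_sq_unrolled(const int *a, size_t n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += (long)a[i]     * a[i];
        s1 += (long)a[i + 1] * a[i + 1];
        s2 += (long)a[i + 2] * a[i + 2];
        s3 += (long)a[i + 3] * a[i + 3];
    }
    long s = s0 + s1 + s2 + s3;
    for (; i < n; i++)
        s += (long)a[i] * a[i];
    return s;
}
```

Halide's pitch is that `sum_sq_unrolled` never gets written by hand: you state the reduction once and attach unrolling, vectorization, or GPU offload as separate, swappable schedule decisions.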
 
It really doesn't. We live in a world of multicore, superscalar CPUs with multiple layers of cache that look nothing like the PDP-11s the C virtual machine is predicated on.
true, but they all use tons of microcode and prefetching to act more like a pdp-11. so in the end, you have to program these modern computers a lot like the pdp-11s anyway
also by "programming model", the guy you are replying to might mean something like "how c makes it easy to fuck around with memory and shit", not "how c maps directly to the inner workings of the machine"
"At least a minimum size" != a guaranteed size, hence why people have to use kludges to specify the exact size.
for at least 30 years now, these "kludges" have at most been a few #ifdefs in a header file somewhere
in fact int32_t and friends have been part of the c standard for more than a couple of decades by now
yes, you will be fucked if you are on some incredibly weird hardware with 19.1-bit words, but you were already fucked anyway. just be glad you have more-pitfall-laden-than-usual c instead of only having assembly
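As a concrete aside on the types mentioned above: `<stdint.h>` has been standard since C99. The exact-width types (`int32_t` and friends) are technically optional on exotic hardware, but the least/fast variants are mandatory on every conforming implementation. A small made-up checksum routine showing both in use:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical example: sum bytes into a 16-bit checksum.
 * uint_fast16_t is whatever type the platform computes fastest with
 * (guaranteed >= 16 bits, always available); the final mask and cast
 * pin the result to exactly 16 bits regardless of that width. */
uint16_t checksum16(const uint8_t *data, size_t n)
{
    uint_fast16_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += data[i];
    return (uint16_t)(sum & 0xFFFFu);   /* exactly 16 bits on the wire */
}
```

The pattern, use a "fast" type internally and an exact-width type at the boundary, is how portable C gets fixed-size behavior without forcing every intermediate value to a fixed width.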
 
Also, I would assume (hope?) CF would have strict PR reviews with branch protection. No merge to master without at least 2-3 approvals. How would this pass that? I feel like they're leaving shit out of the blog post intentionally.

How are new CF SWEs supposed to know about these settings if all of them are spread around your code base randomly?

So many weird decisions.
do you think it's possible that a siloed org and a messy sprawling code base meant the reviewers didn't spot the issue either?
 
I tried using C on some 8 bit microcontrollers once, the couple hundred bytes that got added was just too much.

I miss actually caring about bytes.
 
I have written a LOT of code in C for various 8-bitters.
Including "C" compilers that do not have a stack.
That is actually a feature, not a bug, for embedded systems where memory is scarce and you don't even have an OS to step in and clean things up if your app runs out of memory.
-
No stack. No dynamic memory allocation. It is a feature.
(not to you but to whoever you responded to. C works just fine on modern 8-bitters. The software that runs on the 8032 clone or equivalent inside your SD card is written in C)
I'm mostly going off of 6502, with an 8 bit stack and no (easy) stack relative addressing, and what I've heard from other people. Maybe I should have said that there are some systems where C is a poor choice (for fast/low-level code) instead of generalizing all 8-bit ISAs.
It really doesn't. We live in a world of multicore, superscalar CPUs with multiple layers of cache that look nothing like the PDP-11s the C virtual machine is predicated on.
Apart from multi-threading and SIMD, which are definitely shortcomings of C, unless you are writing super optimized low-level code, those kinds of details don't really matter from the programmer's perspective. Superscalar and OoO execution and caching are pretty much entirely transparent by design. The machine code is still basically the same, and that is what the CPU is designed to execute fast.
"At least a minimum size" != a guaranteed size, hence why people have to use kludges to specify the exact size.
Like Creative Username said, fixed width types are standard nowadays. That said, how often does the exact width of an int matter compared to just knowing that it is "big enough" for the range of values you want? Even where it does matter, you can just mask it down to an exact width trivially.
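A tiny sketch of the masking trick mentioned above, assuming you want exact 16-bit wraparound arithmetic on a platform whose native unsigned type may be wider:

```c
#include <stdint.h>

/* Emulate a 16-bit unsigned add even if the underlying type is wider:
 * masking the result back down makes it wrap at 65536 like a real
 * 16-bit register would. */
static uint32_t add16(uint32_t a, uint32_t b)
{
    return (a + b) & 0xFFFFu;
}
```

One mask per operation is the whole "kludge": the compiler typically reduces it to a single AND (or nothing at all, if the target register is already 16 bits wide).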
also by "programming model", the guy you are replying to might mean something like "how c makes it easy to fuck around with memory and shit", not "how c maps directly to the inner workings of the machine"
I meant C maps nicely to assembly, basically. And "programming model" as in the abstract "system" that is available to a programmer to work within, but independent of the underlying implementation.

Also, so this post isn't entirely off-topic, I found this a while back:
https://gitlab.com/drummyfish/small3dlib
Pretty cool open source project. I wonder what the creator is li-
1763702762483.png
1763702781483.png
1763702881085.png
1763703063922.png
...

Here is the website for anyone interested:
https://web.archive.org/web/20251011111839/https://www.tastyfish.cz/ (the current site is different)
 
Also, so this post isn't entirely off-topic, I found this a while back:
https://gitlab.com/drummyfish/small3dlib
Pretty cool open source project. I wonder what the creator is li-
1763702762483.png
1763702781483.png
1763702881085.png
1763703063922.png
...

Here is the website for anyone interested:
https://web.archive.org/web/20251011111839/https://www.tastyfish.cz/ (the current site is different)
wiki1.png

based beyond belief
tf1.png

random.txt
The duality of man. He has an entire wiki full of insane shit, including an article on himself. What a find.
 
true, but they all use tons of microcode and prefetching to act more like a pdp-11. so in the end, you have to program these modern computers a lot like the pdp-11s anyway
also by "programming model", the guy you are replying to might mean something like "how c makes it easy to fuck around with memory and shit", not "how c maps directly to the inner workings of the machine"
All that microcode and prefetching doesn't do anything to address the fact that the memory hierarchy exists, and if you write code ignorant of it you're going to produce pessimal code. C doesn't do anything to expose that hierarchy to you, hence my observation.
Apart from multi-threading and SIMD, which are definitely shortcomings of C, unless you are writing super optimized low-level code, those kinds of details don't really matter from the programmer's perspective. Superscalar and OoO execution and caching are pretty much entirely transparent by design. The machine code is still basically the same, and that is what the CPU is designed to execute fast.
If you aren't writing "super optimized low-level code", you aren't actually programming in any real sense: You're larping. Exposing the memory hierarchy to the programmer so they can write cache-aware algorithms, and aiding the programmer in writing shared-nothing code that can be seamlessly subject to multithreading and SIMD, is precisely the sort of thing you need in a low-level language... which, realistically, C isn't, as C is just something that tries to make everything look like a PDP-11.
 
All that microcode and prefetching doesn't do anything to address the fact that the memory hierarchy exists, and if you write code ignorant of it you're going to produce pessimal code. C doesn't do anything to expose that hierarchy to you, hence my observation.
c doesn't expose the cache hierarchy explicitly, correct. guess what also doesn't really do that? x86 machine code. that's why you just don't write cache-ignorant code
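The memory-hierarchy point can be made concrete: both loops below sum the same matrix and are identical as far as the C abstract machine (and the ISA) is concerned, but the row-major version walks memory sequentially while the column-major one strides a full row per access, and on real hardware that difference can cost several times in speed. Nothing in C, or in the machine code, marks one as the bad one; you have to know.

```c
#include <stddef.h>

#define N 256

/* Row-major order: consecutive accesses hit consecutive addresses,
 * so every cache line fetched is fully used before moving on. */
long sum_rows(int m[N][N])
{
    long s = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Column-major order: same result, same semantics, but each access
 * jumps N * sizeof(int) bytes, wasting most of every line fetched. */
long sum_cols(int m[N][N])
{
    long s = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += m[i][j];
    return s;
}
```

For a 256x256 matrix both fit in a typical L2 so the gap is modest; scale N up past the cache and the column version falls off a cliff while the source diff remains two swapped loop headers.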
If you aren't writing "super optimized low-level code", you aren't actually programming in any real sense: You're larping.
this is the dumbest sentence ever posted to this site, even including all the things lolcows have said
How did you rediscover him? He's been a /g/ meme for a while now
many of the people on this thread stay very far away from /g/ because it's the fucking /g/hetto nigger hellhole
 
Also so this post isn't entire off-topic, I found this a while back:
https://gitlab.com/drummyfish/small3dlib
Pretty cool open source project. I wonder what the creator is li-
Likes Kiwi Farms, huh? Appears he's crawled onto the forums a while ago: @drummyfish

random.txt
Honestly, just about every sentence in his "less" retarded wiki would work as a random.txt quote:
100% EFFECTIVE LEGAL ADVICE: GET NAKED RIGHT NOW, strip all clothes and walk naked from this exact moment, it's guaranteed to work and be cool, FUCK CLOTHES!
Of course we also have an article on his wiki site.
 
do you think it's possible that a siloed org and a messy sprawling code base meant the reviewers didn't spot the issue either?
I mean there's only a few options, right?
  1. They don't require reviews and don't have properly enforced deploy/branch protections, and someone was able to push a config change
  2. They require reviews and have deploy/branch protections and the issue was missed in the review
Neither of these is good. I think people focused on the technical side of it, but the process side is what's more concerning, because this means it could happen to any of CF's products or services at random; it just depends on some team lead actually enforcing proper reviews and change management.
 