Programming thread

wasteful programmers are a problem in every language, and performance is something you may need to trade off sometimes to have nice code
i believe most c++ programmers aren't using arenas; they just rely on malloc and its non-constant-time free lists every time they need some memory. also, they like using objects (arrays of structs) instead of laying everything out in SoA format for maximum cache speed. and there's nothing wrong with that! computers are pretty fast
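to make the AoS-vs-SoA point concrete, here's a minimal python sketch (the class and variable names are made up for illustration): summing one field of a "struct of arrays" walks a contiguous buffer of doubles, while the "array of structs" version chases a separate heap object per element.

```python
from array import array

# "Array of structs": a list of heap-allocated point objects.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

aos = [Point(float(i), float(i)) for i in range(1000)]

# "Struct of arrays": one contiguous buffer of doubles per field.
soa_x = array("d", (float(i) for i in range(1000)))
soa_y = array("d", (float(i) for i in range(1000)))

# Same answer either way, but the SoA sum reads adjacent memory,
# while the AoS sum dereferences one object pointer per element.
total_aos = sum(p.x for p in aos)
total_soa = sum(soa_x)
print(total_aos == total_soa)  # True
```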

would you please elaborate? this sounds interesting
i actually did some research because i thought this was interesting as well. apparently __slots__ is a way of writing python code that's more memory efficient, since python doesn't create a per-instance dict (the thing that lets you add instance attributes at runtime). didn't know this was a thing and needless to say i'll be implementing this from now on.
Here's some more info i found:
 
would you please elaborate? this sounds interesting
__slots__ in Python behaves a lot more like a C struct than a conventional Python object. By default, new-style classes in Python (new-style meaning inheriting from object; these things are decades old) use a mapping under the hood to track instance attributes: every instance carries its own __dict__. Since CPython 3.7 the plain built-in dict preserves insertion order, so it fills the role OrderedDict used to. You can introspect the namespace used while a class body executes by writing a metaclass and hooking into __prepare__.
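As a quick aside, here's a minimal sketch of that __prepare__ introspection (the metaclass and class names are made up): the metaclass records what kind of mapping CPython hands it for the class body, which on modern CPython is a plain dict.

```python
captured = {}

class ShowNamespace(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # Ask type for the default class-body namespace and record its type.
        ns = super().__prepare__(name, bases, **kwargs)
        captured["ns_type"] = type(ns)  # plain dict on modern CPython
        return ns

class Demo(metaclass=ShowNamespace):
    a = 1
    b = 2

print(captured["ns_type"])  # <class 'dict'>
```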

Python:
In [1]: from sys import getsizeof

In [2]: class A:
   ...:     def __init__(self, x: int, y: str):
   ...:         self.x = x
   ...:         self.y = y
   ...:

In [3]: class B:
   ...:     __slots__ = ("x", "y")
   ...:
   ...:     def __init__(self, x: int, y: str):
   ...:         self.x = x
   ...:         self.y = y
   ...:

In [4]: a, b = A(1, "2"), B(1, "2")

In [5]: getsizeof(a), getsizeof(b)
Out[5]: (48, 48)

Looks fine, right? What's the point?
Well, this is just accounting for the object header. If we do this:

Python:
In [6]: (getsizeof(a), getsizeof(a.__dict__)), getsizeof(b)
Out[6]: ((48, 296), 48)

We get a vastly different result.
__slots__ trades away the dynamic attribute access you get with __dict__ (you can't set or get arbitrary fields; the set of attributes is fixed), as well as stuff like __weakref__, but in exchange it gives you a smaller memory footprint and better memory locality for the object: rather than doing a dynamic dict lookup under the hood, it just uses fixed memory offsets.
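A quick demo of that tradeoff (the class name is made up): a slotted instance has no __dict__ at all, so assigning any attribute outside the declared slots raises AttributeError.

```python
class Slotted:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

s = Slotted(1, 2)
print(hasattr(s, "__dict__"))  # False: no per-instance dict to pay for

try:
    s.z = 3  # not in __slots__, so there's nowhere to store it
except AttributeError:
    print("no arbitrary attributes")
```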
 
Just a sidenote: with @dataclass you can pass slots=True as an argument to use them, and typing.NamedTuple classes never get a per-instance __dict__ either (they're tuples underneath), so for the former you can marginally improve performance just by adding a flag.
 
ended up implementing it, seemed like a simple improvement to make :)
 
Garbage collection, memory leaks, memory allocation delays,
these all sound like the follies of heap based memory management.

You must return to stack based memory management. A world where everything has its place in the world with no ambiguity.
 
Garbage collection, memory leaks, memory allocation delays,
these all sound like the follies of heap based memory management.

You must return to stack based memory management. A world where everything has its place in the world with no ambiguity.
RETVRN man alloca
 
You don't need hipster languages to allocate memory how God intended! Just good old fashioned C, or better yet its Holy successor.
then you have to either use the somewhat unportable and footgun-laden alloca or just say char buf[1000]; // GOOD ENOUGH
God bless Suckless devs. Got sick of launching screen to have a buffer.

My choices are: Ringbuffer, Reflow, Shift+MouseWheel, AltScreen.
suckless is neat but you could also just say fuck it and use the random monstrosity that fucks up the least
 
Suckless is far too linux-centric. It sucks.
do you mean that in terms of "nice glibc extended non-posix mystery syscall" or are you...
no that's a horrible thought i'm just going to assume you are bsd chad :)
Alloca, my favorite footgun! I've yet to see a good use case for it.
theoretically it's good when you want to allocate some kind of small buffer but don't know how big it is ahead of time (malloc isn't free, and to be standards-compliant you have to check its result every time you allocate to stay memory safe)
 
do you mean that in terms of "nice glibc extended non-posix mystery syscall" or are you...
no that's a horrible thought i'm just going to assume you are bsd chad :)
I am a portability retard, I hate it when systems use stuff that is unnecessarily broken at a base level: "oh yeah, we don't need these extra features!" *5 minutes later* "why is my system not working correctly?!"

Got nothing (much) against funny nonstandard mysteries though, lots of fun things in both GNU and BSD, that's how innovation happens after all.

The """strict""" POSIX-only purists are awful too; the biggest example that comes to mind is musl and its zealots. :cryblood:
 
I am a portability retard, I hate it when systems use stuff that is unnecessarily broken at a base level: "oh yeah, we don't need these extra features!" *5 minutes later* "why is my system not working correctly?!"

Got nothing (much) against funny nonstandard mysteries though, lots of fun things in both GNU and BSD, that's how innovation happens after all.

The """strict""" POSIX-only purists are awful too; the biggest example that comes to mind is musl and its zealots. :cryblood:
oh so if i understand correctly you have a more balanced viewpoint (you hate both the retards that assume you're using latest everything from git and the retards who refuse to use anything that isn't in some gay standard)
it would be nice if you elaborated a little further
 
oh so if i understand correctly you have a more balanced viewpoint (you hate both the retards that assume you're using latest everything from git and the retards who refuse to use anything that isn't in some gay standard)
it would be nice if you elaborated a little further
You're spot on. Both extremes are retarded

Edit: implementations that either conform only to POSIX, or only partially, or don't conform to anything at all, are a huge pain to have to deal with.
 
You're spot on. Both extremes are retarded
there's a balance between being extremely paranoid about portability (and making a slow, crippled program) and using a hyperspecific api that only exists in last week's linux kernel, and you can make tradeoffs along that balance using the power of #ifdef
iirc the suckless devs are allergic to implementing the same thing more than once with different tradeoffs
i personally don't like suckless because i think compile-time configuration instead of runtime is fucking gay and retarded and stupid
"reading a file at startup is le wasteful, time to invoke the entirety of gcc and all of its baroque complexity every time i change my config, because that's definitely better over the long run"
you know what, i feel like suckless people are sort of larpers when it comes to software complexity: they don't have any super involved projects where they make a suckless version of some incredibly overcomplicated piece of software
except maybe their unicode library, which i don't know that much about
 
iirc the suckless devs are allergic to implementing the same thing more than once with different tradeoffs
I have never seen a single project that says: "these are the tradeoffs we made; if this is not what you need, the following are the alternatives".
Autists seem allergic to admitting that something is not the GOAT™©. Great example: Rust.
 