Programming thread

the best C23 by far is #embed
feature so good it will be implemented in C++26
i fucking love #embed
TOTAL. #EMBED. LOVE.
Embedding files into C has been possible since its inception. I guess some people use build systems that can't run xxd -i or kindly ask the linker to include a file.
 
I don't care about nuC. I am somewhat concerned that they'll eventually butcher the language at this rate. C is a well-established language; it doesn't need constant new introductions that most established projects aren't going to use anyway, because they all need to be compatible with legacy code or with niche compilers that don't support the new standards. Some popular C libraries are still C89. C99 hits the sweet spot for me

That's not to say there are no good new things. #embed is gemmy, _BitInt is genuinely useful for things like cryptography, and if declarations are admittedly nice

But introducing C++-style namespaces is plain awful. I don't personally like labelled break/continue; we have goto, and goto isn't bad. Only zealots think goto is some universally demonic thing and that writing it at all makes you literally Hitler

I think they're really starting to cross the line of forgetting C's core philosophy as a language and getting too jealous of C++. It's supposed to be a very barebones and explicit language; ranges like case 'A' ... 'Z' or int arr[] = { [0 ... 9] = 1, [10 ... 19] = 2 }; kind of shit on that, and forcing them into the standard instead of leaving them as compiler extensions (which they were) is misguided. Most of this stuff already exists as a compiler extension and should stay that way. I'm all for the standards adding new preprocessor directives or fancy integer types, but some of this feels like a step too far. Watch the next standard introduce GCC's cleanup attribute and rename the language to "C+"

But that's just, like, my opinion, man
 
at the beginning of february a new WG14 meeting will be held;
it will again discuss the removal of I
n7352 contains this gem
Principles
- Uphold the character of the language
- Keep the language small and simple
man i wish others would follow that
apparently the standard's code style is not consistent
another attempt at namespaces, this time it's more sane though
and a Result type that should work with languages that use the C FFI
 
personally i don't mind closures, they're already a feature in GCC for example, however i think it's also too much voodoo for C
They're also possible in ANSI C without fancy compiler features with some creative use of void pointers. I do this more than I would like to admit when writing parsers and compilers.
 
They added break/continue with labels for nested loops, and scoped declaration in if, all of which I do actually like. The former is semantically stronger than existing uses of goto, and the latter while niche does have advantages over existing scoping in some cases.

Edit: Forgot to mention the case syntax, I also love that, it's a lot cleaner to do case 'A'...'Z' than stacking 26 labels, and it helps the compiler generate better jump tables
I’m not a huge fan of those, apart from the scoped if definitions. Break-continue with labels is preferred by newer languages because it plays more naturally with lexical scoping, but C has goto, it has always had goto, and much to the chagrin of language theorists it will always have goto. Labeled breaks/continues can be perfectly emulated by gotos, and for some situations gotos will still be necessary.
The case ranges introduce a small piece of syntax that doesn’t get used anywhere else. I count it as a bit of inconsistency that C doesn’t need. If you could use the same syntax elsewhere to create ranges, for example in array declarations, that would be neat.

The scoped if declarations are awesome, and #embed is also awesome. Most of the new features are nice enough to use, but I worry the whole thing is one big case of boiling the frog. Imagine you wanted to write a completely new C compiler, would you rather target C99 or C23? One of these would be significantly more daunting than the other, and it isn’t C99. At some point, it would simply make more sense to just write a new language, which would come with the benefit of not having to deal with nigh on 50 years of backwards compatibility.

Contributing to the earlier discussion on macros, they really seem like the big bugbear of any serious attempt to modernize C. It seems to me that in the past, a lot of language features that C lacked, people implemented for themselves in the form of macros, so now that the standards committee wants to put those features in the language itself, they have to tiptoe around the huge screaming elephant in the room in order to maintain backwards compatibility. Some people forget this, but the preprocessor is actually a separate program that doesn’t have the ability to speak directly to the compiler, nor the compiler to it. This behaviour can’t be seriously modified either, because some people use custom preprocessors, so forcing integration between the standard preprocessor and the compiler would break their stuff.
another attempt at namespaces, this time its more sane though
Compilers could simply grep all include paths for each named include.
This part is braindead. Imagine requiring your compiler to parse through every header on your system to find which one has the right name, every time you compile anything. And what about name conflicts? The OS takes care of file name conflicts; how would the compiler deal with them?
The rest seems cool, though. Tying namespaces directly to headers is perfectly reasonable and very sensible. I’m kinda surprised no one thought of this before.
 
People don't like #embed because it makes embedding possible; they like it because it makes it simple.
The embed directive also works with the gcc makefile integration, so if you create a rule for a .bin file used in an embed, it'll get rebuilt by make
It helps with broader tooling in a way that I find appreciable
 
I found myself in possession of a surprise gift card. I want to purchase some programming books to help increase my skills in C++, or general programming concepts. The topics I would most like to learn about are project structuring and organization, design patterns, and other advanced topics. Basically, where I'm at now is that I can confidently write smaller, sub-500-ish line programs without issue. But when I work on larger projects, my lack of skill/experience in organizing my program effectively makes working on the project a giant slog, to the point that I usually write myself into a corner, or the project accumulates so many hacks that I restart/drop it. Basically I'm well past the point of needing the "Learn C++" books, and I know the syntax well, but translating that into effective working code is the hardest part, and I would like to work past that block this year.

My current shopping cart, please tell me if either of these are total wastes of time:
SICP (Javascript or Original Lisp version?)
C++ Software Design: Design Principles and Patterns for High-Quality Software (O'Reilly book)
 
I found myself in possession of a surprise gift card. I want to purchase some programming books to help increase my skills in C++, or general programming concepts. [...]
I personally swear by https://pragprog.com/titles/swdddf/domain-modeling-made-functional/
 
are there any heckin wholesome rustaceans here? I've started trying to learn rust over the last few days. I do know a bit of C. I at least know how to write things in it, but I don't know any C++, which rust seems to be much closer to. Right now I'm going through "the book" and some other recommended learning material to pick up the language itself. I'm mostly wondering if there is any advice on some learning material for after I'm done with the basics.

So far rust seems in some ways easier than C. It's definitely a more complex language than C, as in it has a lot more to it, but it seems like it does a lot more of the work for you. I have heard that once you get into async multithreaded rust it can get really complicated, but idk, I'll worry about that when the time comes.
 
are there any heckin wholesome rustaceans here? [...]
To me, rust seems closer to a dialect of ML than c++, mostly because of the type system.

As for learning afterwards: write something, anything with it.
 
How do you guys read documentation? I just started learning APIs and the docs are so fucking dense that I reread them over and over but still don't comprehend them.
A bit late to this one, but something I've found is that when I have that sensation of something "washing over me", it's often because I don't realize there's something I'm not actually comprehending about the syntax in the examples.

It might be easy at a glance to get the feeling you don't get something but can't figure out why, but a good way to try to drill in on it is to hunker down and really pick apart the example code, piece by piece, until you run up against something you don't clearly understand. If you can get to the bottom of that, you might be able to unpack what's going on.

I know what I'm saying is stupidly simple, like "yeah, no shit", but every time something like this happens to me, it's typically because I don't notice, in my skimming, that there's some function or aspect of the syntax I'm not properly comprehending until I take it step by step like that.

Of course, if your problem is a little more macro-level, like the ideas themselves being too esoteric/abstract, it helps to find examples and relevant problems to use as anchors to dive in with. Someone else got some scattered reacts for suggesting LLMs, but so long as you don't use them to do your thinking for you, they can be useful as disambiguators and pointers in the right direction. Just gotta keep them at the right arm's length.



Back to the thread: I've gotten back to it pretty well these last few weeks. I'm doing good learning, but I also have kind of a paranoid fear about what it is I'm going to forget without realizing it. Just gotta keep it up until it becomes intuitive, but on the upswing, I feel like I have a lot more reasons behind the structure of my code than I did a few months ago, and consequently everything seems to have a much better sense of place than it did before. Architecture is fun, but it's also easy to lose a lot of time on, and I can definitely see why people get the sense that OOP is kinda esoteric and up its own ass sometimes.

I think I should probably be a little more open in this thread in general, but I don't know what I don't know and I always worry that the super wizards in this thread will bite my head off for professing to some stupidity that I don't even realize is stupid. That's a me problem, though, I don't know why I'm so anxious about it because usually when I open up I get drowned in feels reacts. This has been a remarkably pleasant thread to participate in during my time in it.
 
also speaking about the proposals
they're trying to remove I again REEEEEEEE

nvm, it's actually a good proposal, since in c23 you can postfix a number literal with an i to get an imaginary value
This is some of the dumbest shit that I've read in a while. It's trivial to just use a different name (and you probably should). I can understand removing something like gets, but what actual benefit is there to get from removing I? If you really, really, need to use I then you can just #undef I and #define I _Complex_I when you're done.

(And who writes template <typename I>??? If you start at T and manage to get all the way to I you've gone wrong somewhere. Especially if you're writing C!)
 
I'm trying to build a GUI for a program I've been writing. I figured I'd use QT because it seems simple enough, it's free, I have an OK understanding of how the API works. Now, most of my business logic is actually written in C, mainly because it deals heavily with low-level C libraries, but it seems simple enough to call from C++. Right?

Well, no, as it turns out.

Code:
#include <unistd.h> /* sleep() */

int main(int argc, const char *argv[]) {
        handle_arguments(argc, argv);
        initialize_fork(argv[0]);
        sleep(1); // Or do_whatever(); doesn't matter
        terminate_fork();
}

This is a paraphrase of the minimal code I've been banging my head against for a while now. What initialize_fork() does is create a Unix socket pair, dupe the sockets to predefined FD numbers and close the originals, then fork + exec. Communication between the main process and a fork happens across these file descriptors. terminate_fork() simply sends a message instructing the fork to kill itself (and its own children, if it has any). If I compile this "normally" with GCC or G++, everything works as expected. But as soon as I link QT6, without even writing QT code, I get an error at terminate_fork()'s calling of sendmsg() to the child process; perror prints "Bad file descriptor." Strangely, I can communicate over the file descriptor fine if the communication happens before the program is about to end (say where sleep(1) is); it's just trying to communicate when the program is about to exit that runs into this problem.

My only guess is that QT is somehow closing the socket or marking it as unusable as soon as it detects the program is about to end, causing a race condition between its cleanup code and sendmsg(), which baffles my mind -- where is it getting a list of open file descriptors it had no part in creating, and why is it then deciding to close them? Would anyone know if there's a way to make QT keep its hands off random file descriptors it has no business knowing about? Or am I being dumb and something else is going on? Either way, I'd really rather avoid rewriting the C code/anything not in my main function as much as possible, which is why I'm not using QT's dedicated socket type and handlers in the first place. I'd appreciate any insight.

I suspect what I'll have to do is send a SIGTERM to the child and use a signal handler to clean up, but I'd at least like to figure out why the code I have isn't working, and if there's any way to wrangle QT into compliance.
 

I finally got around to watching this episode of the standup that came out about a month ago. I will usually watch them when I get bored. I thought this one was actually pretty good, mostly because it was Casey Muratori and Jonathan Blow talking about things. I'm glad they had Casey on for this one, because I don't think any of the web devs could have done the heavy lifting in the show (although I guess TJ works on neovim at least, so that's something outside of web development).
 

A Cornell TA (adjunct?) decided to get several coding agents to follow along in an honors freshman CS course for the Fall 2025 semester, and grade them as if they were actual students.

Final Grades -

Gemini: C+
Claude: C+
ChatGPT: B+

From the conclusion (emphasis mine): "The computer science major here requires a 2.5 grade average across both the OOP data structures course and the discrete math course before a student may affiliate. A C+ is a 2.3. As such, neither Gemini nor Claude would be able to affiliate as a CS major with this grade unless they also performed significantly better in the discrete mathematics course. [...] All that is to say, despite the fact that earning a B in a Cornell class is itself a very impressive feat for any real student, every single AI model scored below median in this class. If you haven't noticed yet, this is a really hard class. And for first semester students fresh out of high school, tackling their first ever university semester, even making it to the end of this marathon of a course is an accomplishment worth being proud of. But for these models with trillions of dollars worth of hype behind them, claiming earnestly to be PhD level experts, I cannot help but point out that they lost to most of the freshmen we taught."
 
Just listened to the latest MATI episode, where Null says he coded a custom webserver that will be replacing the site's reverse proxy. That's interesting, almost every coder I know writes a toy webserver as part of learning a new language, but that never gets near production. (I've got two sitting on my current machine.) The challenge is implementing 30 years worth of security lessons and edge cases before you give up and go do the productive thing you were originally studying to do.

Formally requesting @Null to give us some details on the project. I'm sure there's some secret sauce that can't be talked about for security reasons, but there have got to be some interesting stories from going down that path.
 
But as soon as I link QT6, without even writing QT code, I get an error at terminate_fork()'s calling of sendmsg() to the child process; perror prints "Bad file descriptor."
Note: I personally haven't worked with QT so perhaps my advice is worthless.

A wild guess is that you're using a function somewhere in your code that is named the same as some random QT function, so the QT function gets linked over some glibc function you thought was safe. There is an easy way to verify this but I can't quite recall it; it probably involves looking at the dynamic symbol table and seeing which functions have been linked to the QT library instead of glibc (or something along those lines). This seems like the most obvious reason why just linking the library would fuck up your program.

Alternatively QT probably does a lot of start up stuff before the program even hits main(), glibc style. GDB probably has some sort of feature that allows you to watch file descriptors somehow so that would probably come in handy.

Worst comes to worst, you could always strace the program and see what syscalls it's making. That'll tell you where exactly the fds are getting closed. Again, GDB probably has a feature that lets you do this but I don't know what command it'd be.

This is all just me spitballing, though. Just thought I'd chip in seeing as nobody has yet.
 

Maybe it's just me, but choosing react/typescript for a TUI seems like such a bad choice. I feel like the only proper way to do something like this is to use something like C, C++, Go, Rust, etc. Using those doesn't mean you will automatically have good performance, but all of the best-performing TUIs I use are written in a C-like language.

One of the few that people like that isn't is ranger, and ranger itself shows cracks because it's written in python. If you go into large directories with thousands of files, ranger can completely come to a halt as it tries to process everything. I've had ranger completely freeze plenty of times while using it. Meanwhile lf and yazi just don't suffer from that same issue.

Edit to avoid double posting.


A macro that outputs 47 MB of code. Damn, that would suck to deal with
 