Programming thread

What do you niggas think of Nim? I recently became more acquainted with it after trying to spin up my own nitter instance on a spare VPS of mine.

They say they want to be as performant as C without as much of the safety risk. I haven't looked into benchmarks to see how they stack up, but they have some interesting ideas in the language's design.

Edit: c2nim looks really cool too.
 
At roughly 7:43, he goes on about signing. While it's true that driver signing turnaround can take a while, this is where things like test-signed drivers come into play. You can easily pass a test-signed driver to QA; they run it in test mode and still follow the AGILE doctrine.
While that's being tested, the production driver could easily be signed simultaneously.


That way, supposing testing passes, you should have your production-signed driver ready to deploy in a release.
I'm fairly sure WHQL is useless. I think it's supposed to be there so you can boast about it in your marketing materials or whatever, but in reality no1curr.

I watched that video, there's a number of things wrong with it:
  • User mode is ring 3, not ring 1
  • glibc on Windows??
  • Syscalls cause exceptions, which get picked up by a kernel thread - this isn't how monolithic kernels work at all. I don't know what he was getting at here. It sounds a little like how microkernels operate (which Windows isn't).
 
I mean, we just had one of the largest outages of all time last week due to an error that should have been easily caught, so we can't assume pajeets will thoroughly test things. An extra layer of security isn't a bad thing, even if it's overstated. If Rust were run by competent engineers who just focused on the code instead of a bunch of insufferable freaks, it would probably do a solid job of addressing the problem it sets out to solve. It just gets associated with creepy sex pests, and is sold like a religion to be used in places it isn't needed.
But how does forcing in a language fundamentally not designed for the Windows kernel make things safer or more secure? I'd only agree if it were built from the ground up for it. The legacy API function prototypes and structs ultimately constrain what they can actually do, and those are fundamentally 'unsafe'.

There's plenty of things where I can imagine shit just fundamentally not working, not least of which are the support macros.


I'd much rather see Microsoft officially support C++ in the kernel, bar exceptions and the STL shit obviously. Right now I have to "hack in" C++ by providing a global override of new and delete, along with some linker directives to support global ctors.
 
What do you niggas think of Nim? I recently became more acquainted with it after trying to spin up my own nitter instance on a spare VPS of mine.
I honestly do not like its Python-inspired syntax (significant whitespace mainly), but it seems really powerful, and the macros are the most interesting feature, taking inspiration from how Common Lisp and Scheme handle this kind of stuff.
EDIT: The FFI is pretty easy to use and c2nim does facilitate translation of C headers.
 
I'll say learning React is kind of eh. I'd advise against learning React if you are doing it to try and make your resume look better.

This goes both for resume building and as general advice. If you have free time, try to build something you love, and if you can't, try to build something you think is kind of neat or cool.
I spent the past two days fucking around with React and building out a few small projects for practice. It's actually not too bad, so I'm glad I learned it. I talked with my client and they clarified more of what they actually wanted with their website, so I set up React because it's way more useful for changes to the DOM than vanilla JavaScript.

Also, I do make my own projects for fun and that is the best way to get better at coding imo.
 
- Python debugging misery -
There's always the option of sprinkling a ton of breakpoints along what (you think) the code's execution path is and going from there. The VS Code kiddies I find myself surrounded with these days think import pdb; pdb.set_trace() (or ipdb if you wanna be fancy, which you should) is literal magic.
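For anyone who hasn't tried it, a minimal sketch of the set_trace() pattern; the function and data here are made up purely for illustration:

```python
# Hypothetical buggy parser; names are invented for illustration.
def parse_record(line):
    fields = line.split(",")
    # When the output looks wrong, uncomment the next line to drop into pdb
    # right here and poke at `fields` interactively (`p fields`, `n`, `c`):
    # import pdb; pdb.set_trace()   # or just breakpoint() on Python 3.7+
    return {"name": fields[0], "score": int(fields[1])}

print(parse_record("alice,42"))
# → {'name': 'alice', 'score': 42}
```

The point is you put the trace call right where you suspect the state goes bad, instead of stepping from the entry point.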

Regarding type annotation crap: every Python project I've worked on that actually used them sort of saw them as a chore. And a lot of the time they end up having return types like Union[List[str], Dict[str, List[str]]], which is just retarded, especially if you've written more than like 50 lines of a static language and know how insane that is.

Yeah, that's the thing, I've used Python on and off for years. I've always liked it better than Javascript and thought it had potential, but it's the kind of language that you really need to have discipline to keep the code quality up. Because the language itself won't offer much help.

I pick it up for a project every few years and discover new things I like, new things I hate, and a bunch of new nonsense in the community.
Thing about the Python community is that it's gotten older, boring, and corporate, which is fine. Every JS-related conference and meetup I've been to made me feel old, even when I was like, 28. I know that old JS coders exist, but every old programmer I know defaults to like, Python or C or Clojure or something that isn't as stupidly fast-moving.

I do want to diversify so I'm not a pure Python autist, but it's not especially high priority to me since I can flex some semi-obscure Python knowledge and get a decent job. At some point in the next couple months I'll figure out an idea for a project that's sufficiently interesting to me to see it through and do it in C or something. I did that with Haskell a little while back and found it enjoyable. I would also like to do something in OCaml, since any Haskell project I've done that's more than a couple hundred lines runs into a problem that's trivial to solve with a for-loop, but really annoying when you can't just write one, and I've seen you praise it plenty here. Life is long and I'm not in a rush.
 
Try it, worst case scenario you spent some time gaining valuable insight on why rust is shit.
I wanted to once, but I hit a shitty roadblock before I could even open my text editor.
And a lot of the time they end up having return types like Union[List[str], Dict[str, List[str]]], which is just retarded, especially if you've written more than like 50 lines of a static language and know how insane that is.
this is why the white man's languages have a keyword called struct
 
Oh! And as I write this, I just noticed something. I've been testing this with $ curl -v http://localhost:8000/heknowsclydecash -o /dev/null. I just got rid of the -o, and turns out, a clue! Curl's now spitting out 'HttpResponseNotFound' object has no attribute 'data'.

At first when I saw that, I felt relief. I thought, as gay as this journey has been, at long last I have something I can grep for or google for or something.

And after trying to dig that up for like 15 more minutes or so... I have nothing. I tried a few other things. I tried a custom 404 handler and a custom 500 handler (no clue if I'm configuring any of these properly). I'm done with this.
This looks like an HTML response for a 404 being forced into DRF stuff where it doesn't belong.

HttpResponseNotFound is not an error; it's an HTML response class. You can use it in your code to return from a view. Django returns an object of that class when it catches a 404 error.

django.views.defaults.page_not_found
django.core.handlers.exception.response_for_exception
^ these things in django generate a 404 html page. Also, look for usages of HttpResponseNotFound in the project's code. But if you're requesting a nonexistent http://localhost:8000/heknowsclydecash with settings.DEBUG on, you're probably hitting django.views.debug.technical_404_response by way of response_for_exception when the url resolver tries all registered patterns and fails to find a matching view.

DRF responses (rest_framework.response.Response) have a .data attribute. HttpResponseNotFound is not a DRF response. Some custom niggerfaggotry, probably a middleware, in the code tries to access .data on the response object. (It's not DRF doing it out of the box, you can return a html response from a DRF view, nothing will break.)
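To illustrate the failure mode without pulling in Django or DRF at all, here's a toy model; these are stand-in classes, not the real implementations:

```python
# Stand-ins mimicking the relevant bits of the two response types;
# NOT the real Django/DRF classes.
class FakeHttpResponseNotFound:
    status_code = 404
    content = b"<h1>Not Found</h1>"   # plain HTML body, no .data attribute

class FakeDRFResponse:
    def __init__(self, data):
        self.status_code = 200
        self.data = data               # DRF-style responses carry parsed data

def naive_middleware(response):
    # The kind of custom code that breaks: it assumes every response is DRF.
    return response.data

naive_middleware(FakeDRFResponse({"ok": True}))   # fine
# naive_middleware(FakeHttpResponseNotFound())    # AttributeError: no 'data'
```

Which is exactly the `'HttpResponseNotFound' object has no attribute 'data'` shape of the error above.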

Middlewares are listed in settings (settings.MIDDLEWARE). Middlewares are a stack of classes/functions which process requests and responses: they call one another in order on the request and are exited in reverse order on the response. A django request is accepted, passes through the layers of middleware, is routed to a view (or not routed, in which case a 404 is raised), the view returns a response or raises an error (if it does, the handler catches the error and produces a response), and that response passes back out of middleware.
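The in-order/reverse-order flow above can be sketched without any Django at all; each layer just wraps the next `get_response` callable (a simplified model, not Django's actual loader):

```python
# Simplified model of Django's middleware "onion": each layer wraps the
# next get_response callable, so requests pass top-down and responses
# come back bottom-up.
order = []

def make_middleware(name, get_response):
    def middleware(request):
        order.append(f"{name} in")      # request phase
        response = get_response(request)
        order.append(f"{name} out")     # response phase
        return response
    return middleware

def view(request):
    order.append("view")
    return "200 OK"

# Build the onion inside-out, mirroring the settings.MIDDLEWARE list order.
handler = view
for name in reversed(["Security", "Session", "Common"]):
    handler = make_middleware(name, handler)

handler("GET /")
print(order)
# → ['Security in', 'Session in', 'Common in', 'view',
#    'Common out', 'Session out', 'Security out']
```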

Exception handlers catch errors raised by views inside the layers of middleware. If an error occurs in middleware, it will not be caught by the handler.
Exception handlers are not python loggers. Whatever produces your log record doesn't look like a python logger either.

If you can use a debugger with breakpoints, put a breakpoint in response_for_exception, look through the call stack and see where the fuckery happens. If not (why not?), look through the stack of middleware (settings.MIDDLEWARE). Each line is a namespaced class/function. Find them in the code and see if they try to access `.data` on the response. Pay special attention to a class/function that's not part of django (doesn't start with `django.`), those are the most likely culprits:
Python:
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
    "clydecash.middleware.DatabaseRequestCounterMiddleware",  # <--!!!!!!!
]


{"request": "GET /heknowsclydecash", "user_agent": "curl/8.8.0", "event": "request_started", "level": "info"}
This doesn't look like python's logger but some external stuff (sentry?). plz poast your django settings.LOGGING dict if any.

Don't run django's DEBUG=True on production.
(django's DEBUG setting has nothing to do with logging.DEBUG, unless a developer made settings.LOGGING depend on django's DEBUG, you can log DEBUG severity level messages in production just fine.)
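For illustration, the only way the two get entangled is if a developer writes the coupling themselves, e.g. something like this hypothetical settings fragment:

```python
DEBUG = False  # django's DEBUG flag (error pages, debug 404s, etc.)

# A developer-made coupling: log verbosity follows DEBUG. Without a line
# like the "level" one below, logging severity and django's DEBUG are
# completely independent of each other.
LOGGING = {
    "version": 1,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {"handlers": ["console"], "level": "DEBUG" if DEBUG else "INFO"},
}
```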
 
But how does forcing in a language fundamentally not designed for the Windows kernel make things safer or more secure? I'd only agree if it were built from the ground up for it. The legacy API function prototypes and structs ultimately constrain what they can actually do, and those are fundamentally 'unsafe'.
It doesn't make things safer or more secure when you are including shit like serde (provides serialization primitives, is quite big), all you need is one state sponsored chink to hide some bad shit in there and mao xinping gets privileged execution rights on every 'doze box.
 
It doesn't make things safer or more secure when you are including shit like serde (provides serialization primitives, is quite big), all you need is one state sponsored chink to hide some bad shit in there and mao xinping gets privileged execution rights on every 'doze box.
it's kind of amazing how this doesn't happen more often because a chink can compromise some shitdev who made a library to check if numbers are even and then fuck everything up instantly after submitting 1 update that gets auto included in all downstream builds

that's why i like monolithic system package managers that have maintainer-signed shit because while not perfect, they can help prevent supply-chain library fuckery from happening due to a single point of failure being compromised as there is more than exactly 1 person responsible for instantly including arbitrary code in every downstream application

we also need simpler software in general with less retarded dependencies so that this happens less (xz type shit can always happen)

maybe also a way to easily sandbox libraries somehow but that's :optimistic: because nobody will ever really get behind it
 
Regarding type annotation crap: every Python project I've worked on that actually used them sort of saw them as a chore. And a lot of the time they end up having return types like Union[List[str], Dict[str, List[str]]], which is just retarded, especially if you've written more than like 50 lines of a static language and know how insane that is.
You can name the types, it makes things slightly less retarded.

Union[List[str], Dict[str, List[str]]] smells regardless of the annotation; a function has no business returning either List[str] or Dict[str, List[str]]. In the extremely rare cases I do need to return one or the other from the same function (like one is the successful result and the other is errors), I write an error class and raise an instance of it with the data in an attribute.
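Roughly the error-class pattern described above; the names here are made up:

```python
# Hypothetical example of the pattern: one return type, errors carry the dict.
class ValidationFailed(Exception):
    def __init__(self, errors):
        super().__init__("validation failed")
        self.errors = errors              # Dict[str, List[str]]

def clean_names(names):
    bad = [n for n in names if not n.isalpha()]
    if bad:
        raise ValidationFailed({"not alphabetic": bad})
    return names                          # always List[str] now
```

Callers that care about the detail catch the exception and read `.errors`; everyone else gets a single, sane return type.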

Tuple[List[str], Dict[str, List[str]]], though? Yeah, that happens.

this is why the white man's languages have a keyword called struct
Python has "dataclasses" now. They're structs but gay.
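For reference, the struct-but-gay in question; field names are invented for illustration:

```python
from dataclasses import dataclass, field

# A dataclass as a poor man's struct. Field names are made up.
@dataclass
class GroupedLines:
    lines: list[str] = field(default_factory=list)
    by_key: dict[str, list[str]] = field(default_factory=dict)

g = GroupedLines(lines=["a", "b"], by_key={"vowels": ["a"]})
```

You get generated `__init__`, `__repr__`, and `__eq__` for free, which is most of what people want from a struct anyway.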
 
It boggles my mind that it caught on for machine learning people or that anyone thinks it's a suitable language to write a production system with.
Python is at a point where it's indistinguishable from pseudocode. If it looks like it should work, it almost certainly will in Python.

Speaking from experience, ML devs' talents don't typically lie in the programming side of things. Their shit is borderline unreadable at best. As for production systems, they often start as a Python thing, catch on and start making the creators a profit, then real men have to rewrite it in a more performant language like C.
 
python is perl 2, people use it to duct tape together massive shitballs of logic that they don't care about enough to (c)hisel into stone
only real difference from perl is that it's slower and looks like pseudocode* instead of random punctuation characters shit out everywhere

*the bad kind of pseudocode (the kind that doesn't look much like c)
 
Dave really makes me reconsider Total Boomer Death. What is it with pretty much every tech Gray Beard being lovable, humble, and generally appreciative of the opportunities they were given through the dotcom boom or the starting point of monolithic companies like Microsoft, completely the polar opposite of the greedy meritless boomer stereotypes? Is it just because you couldn't lie and BS your way to the top of that field?
They probably see the writing on the wall concerning AI and the probability that it will usurp a LOT of programming jobs. They may feel that they were fortunate to have lived and worked through the golden era of human-generated code.
 
They probably see the writing on the wall concerning AI and the probability that it will usurp a LOT of programming jobs. They may feel that they were fortunate to have lived and worked through the golden era of human-generated code.
AI sperg warning:

Unless you are truly a niggerlicious developer, AI will never reach the level to replace us. Ask it to write anything remotely complex, or anything it doesn't have enough data on in its training set, and it just falls over and/or lies to you.


Honestly I think AI will get worse as it develops as training data becomes hallucinated shit previous AI models have spit out.

The whole "AI will replace us" is basically the new "we will live in the metaverse"

I do agree however we truly have left the golden age of programming when new hires don't understand what a pointer is or suggest we replace our GDI apps with some gay webshit framework
 
in the 2026 news: "large company eats shit because chatgpt 4.5.993+ (high quality version with extra hallucination protection 3 & knuckles) elided an authentication check and inserted a comment that began: 'as a large language model, it is against my safety guidelines to create code that prevents certain people from accessing...' in the ensuing aftermath, billions of dollars worth of damages have been incurred"
 
I like that the only good way to use numpy is as a wrapper for C. It's a culmination of dumbing down coding and standards once enough processing power was reached and it was time to fire good engineers and replace them with the shittiest of Pajeets.
I would agree with this except for one thing. Numpy very much encourages a procedural style, which Pajeets tend to hate. Also, numpy tends to break operations into separate functions, which helps newbs learn to think about coding. Once I realized coding was about figuring out what needs to happen and then figuring out how to get there, I learned so much more.
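e.g. the one-named-function-per-operation style, which reads as a sequence of explicit steps (assumes numpy is installed):

```python
import numpy as np  # assumes numpy is available

# Each step is its own named function call, so the "what needs to happen"
# sequence is explicit: center the data, then reduce it.
scores = np.array([70.0, 80.0, 90.0])
shifted = np.subtract(scores, np.mean(scores))   # center around the mean
total = np.sum(np.abs(shifted))                  # explicit reduction step
print(total)
# → 20.0
```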
It's not a FAANG mindset. It's the reality of growing your business from startup to success.
I agree completely in regards to startups. But a lot of tech companies build themselves into major corporations and then spend ungodly amounts of money because what they built is a massive piece of shit they have to rework. Look at the massive problems Facebook had with PHP. Or the fact Netflix rewrote networking in C++ because they spend so much money on networking.
It boggles my mind that it caught on for machine learning people or that anyone thinks it's a suitable language to write a production system with.
Tbf, all of the actual slow bits of machine learning are the tensor operations, which are usually written in C and called from Python. And whatever code you have to write in order to set up the machine learning is going to be less than 0.1% of your runtime.
ML devs' talents don't typically lie in the programming side of things. Their shit is borderline unreadable at best. As for production systems, they often start as a Python thing, catch on and start making the creators a profit, then real men have to rewrite it in a more performant language like C.
Eh, unless you're being retarded and overfitting your data, you shouldn't be rewriting your algorithms. Machine learning is all about the data you feed into it, not about the model; once you find the right model, it is the right model and there isn't much to do, because again, overfitting is bad.
"Unsafe" is the only way to make Rust performant in the way it claims. And even then it's not actually performant enough. For example, Rust does array bounds checking whenever you fuck with an array.
Yeah this is something you can only understand if you look at the assembly code underneath which most people haven't done.
This is simply not true. There is nothing inherent about unsafe that increases perf, plus on most benchmarks rust is competitive with C++.
This is not true, just due to the fact that unsafe requires no extra checks and just runs. There are ways to make C code a bit safer that don't hurt performance. But Rust does not try to be a little bit "safer"; it massively enforces it.
But how does forcing in a language fundamentally not designed for the Windows kernel make things safer or more secure? I'd only agree if it were built from the ground up for it. The legacy API function prototypes and structs ultimately constrain what they can actually do, and those are fundamentally 'unsafe'.
This is the question I'd ask Rust people. If safety is so important, why wouldn't you just use Ada? It has existed for a while. The real actual answer is basically hype and marketing. Which is a real thing if you are a manager for a project and you can't get programmers to use Ada but you can get people to write Rust. But as a programmer, there isn't a meaningful reason why Rust is really that much better than Ada.
 
Tbf, all of the actual slow bits of machine learning are the tensor operations, which are usually written in C and called from Python. And whatever code you have to write in order to set up the machine learning is going to be less than 0.1% of your runtime.
I just hate that the platform for the Tensor operations is Python, because it means creating spaghetti around your ML. TensorFlow, Pytorch, and all the other DSLs for ML (they basically are DSLs) that have been put on top of Python are pretty nice. NumPy is also nice. It's just the Python part that sucks.
 
DRF responses (rest_framework.response.Response) have a .data attribute. HttpResponseNotFound is not a DRF response. Some custom niggerfaggotry, probably a middleware, in the code tries to access .data on the response object. (It's not DRF doing it out of the box, you can return a html response from a DRF view, nothing will break.)

Middlewares are listed in settings (settings.MIDDLEWARE). Middlewares are a stack of classes/functions which process requests and responses: they call one another in order on the request and are exited in reverse order on the response. A django request is accepted, passes through the layers of middleware, is routed to a view (or not routed, in which case a 404 is raised), the view returns a response or raises an error (if it does, the handler catches the error and produces a response), and that response passes back out of middleware.
Yeah, exactly this.

He was doing something fucky in one of his custom middlewares and thus it was bypassing the DEBUG_PROPAGATE_EXCEPTIONS thing.

Anyway, he had it fixed pretty quickly. Our boss had something else he wanted me to do in the meanwhile anyway. (Unfortunately my new task is node shit, so it's out of the pan and into the fire with stupid, pajeet code issues.)

I'll keep this in mind next time I'm debugging this.

And yeah, I'll need to start messing with python's debugger. I've never been a big IDE guy. For the majority of my work, I just use a text editor and the repl.
This doesn't look like python's logger but some external stuff (sentry?). plz poast your django settings.LOGGING dict if any.

Don't run django's DEBUG=True on production.
(django's DEBUG setting has nothing to do with logging.DEBUG, unless a developer made settings.LOGGING depend on django's DEBUG, you can log DEBUG severity level messages in production just fine.)
Oh I just tore out his initial LOGGING variable and put in some very simple defaults.

Gotcha re: DEBUG. I read some documentation somewhere that DEBUG should be treated carefully. I don't remember exactly what it does, but I was pretty much trying everything.
in the 2026 news: "large company eats shit because chatgpt 4.5.993+ (high quality version with extra hallucination protection 3 & knuckles) elided an authentication check and inserted a comment that began: 'as a large language model, it is against my safety guidelines to create code that prevents certain people from accessing...' in the ensuing aftermath, billions of dollars worth of damages have been incurred"
Exactly this.

One of these days we're going to see the programming equivalent of those lawyers that let chatgpt compose briefs and started citing nonexistent cases.

I post this article a lot, but this is a really good read about someone trying to use AI to code. Basically, the AI gets close to the solution. It seems to parse the request and cobbles together some results that sorta approach a solution, except it never quite gets there. And more disturbingly, the AI never gives up and says "welp, I can't answer your question". It just keeps throwing flawed solutions at the author.

So how's this going to play out when pajeets start trying to implement, as you note, authentication logic using AI nonsense, except they're not careful enough to inspect the output and check it for bugs? They wouldn't even pick up that the logic is wrong.

Or even just simple algorithms that have horrible inefficiencies baked in. The inefficiencies don't show up in testing at a small scale, but it blows up with exponential complexity in production.

You need human intelligence to examine the client's problem and map it intelligently to the computer's capabilities. That's not really able to be automated.
 