Programming thread

Here's a really great paper about compilers. It's about writing a native-code compiler for Scheme, in Scheme, by compiling a growing subset of the language down to assembly.

I read it years ago and I just rediscovered it the other day. It's a pleasure to read, and it's even better if you follow along implementing your own compiler as you go (including approaching things differently from how the paper does it).

It is great at demystifying compilers. You don't need super specialized knowledge: if you can read Scheme, the paper lays everything else out.
 


Does anyone know how to speed up compile/link times? I work on a 300-file project, and after introducing templates the compile times rose roughly 10x (a release build can take over 5 minutes). The code itself is divided into multiple module libraries that get linked into the manager module executable.
The slowdown is concentrated near the end of the build, when the executable is compiled and linked.
 

Fuck copilot
 
Does anyone know how to speed up compile/link times? I work on a 300-file project, and after introducing templates the compile times rose roughly 10x (a release build can take over 5 minutes). The code itself is divided into multiple module libraries that get linked into the manager module executable.
The slowdown is concentrated near the end of the build, when the executable is compiled and linked.

I'm trying to remember what it's called; I believe it's explicit template instantiation.
The idea is you define and implement the template class in a header file as usual. Then, in a source file, you explicitly instantiate the template for the particular types you use. Those instantiations land in that object file and can be linked against, instead of being re-compiled in every translation unit that includes the header.

Something like this, in a single source file:

template void example_function<double>(myarray<double> *in, myarray<long> *index, myarray<double> *out);

for some template <typename T> void example_function(myarray<T> *in, myarray<long> *index, myarray<T> *out); declared in the header file.

This explicitly instantiates the double version of the template function, and it may speed up compilation. Pairing it with a matching extern template declaration in the header (C++11) tells every other translation unit not to instantiate that version itself, which is where most of the compile-time savings come from.

The other thing you can do is write a script that only recompiles a source file if its retained object file is older than the source or any of the headers it includes. I have some python scripts that do this: check file modified times, track down the #includes, and check those mtimes too.
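
A minimal sketch of that kind of script (the g++ command and the flat directory layout are just assumptions for illustration, not the poster's actual scripts):

import os
import re
import subprocess

INCLUDE_RE = re.compile(r'#include\s+"([^"]+)"')  # project headers only, not <system> ones

def local_headers(path, seen=None):
    # Recursively collect the project headers a file pulls in via #include "..."
    seen = set() if seen is None else seen
    try:
        text = open(path).read()
    except OSError:
        return seen
    for name in INCLUDE_RE.findall(text):
        hdr = os.path.join(os.path.dirname(path) or ".", name)
        if os.path.exists(hdr) and hdr not in seen:
            seen.add(hdr)
            local_headers(hdr, seen)  # headers can include other headers
    return seen

def needs_rebuild(src, obj):
    # Rebuild if the object file is missing, or older than the source
    # or any header it (transitively) includes.
    if not os.path.exists(obj):
        return True
    obj_mtime = os.path.getmtime(obj)
    return any(os.path.getmtime(dep) > obj_mtime
               for dep in [src, *local_headers(src)])

for src in (f for f in os.listdir(".") if f.endswith(".cpp")):
    obj = src[:-4] + ".o"
    if needs_rebuild(src, obj):
        subprocess.run(["g++", "-c", src, "-o", obj], check=True)

(This is essentially what make with generated dependency files, or ccache, already does, so those are worth a look before rolling your own.)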
 
Any thoughts here about Elixir or Erlang for writing a DDoS-resistant backend?

I have never used either, but it seems like they might fit the bill. They are both a bit obscure, though. Has anyone here worked with them?
There's nothing inherent about Elixir or Erlang that makes them more resistant to DDoS. All your DDoS protection happens before traffic ever reaches the Elixir portion of your code.

I've worked with Elixir for a bit. It's basically a less mature, functional version of Rails. Some of the people who contributed to Rails wrote Phoenix.
 
will AI destroy this industry
Nah.

You can get Copilot to write a lot of code for you, but it doesn't do much good if you don't understand code.

It's really fucking helpful as a teacher. I mostly just ask AI to explain things to me, and it saves a lot of time reading docs.
 
There's nothing inherent about Elixir or Erlang that makes them more resistant to DDoS. All your DDoS protection happens before traffic ever reaches the Elixir portion of your code.
I believe it is because of the inherently decentralized nature of the Erlang VM: Erlang can have a process die and keep running fine. Blocking DDoS traffic before it ever becomes an issue is nice and is basically what you would want to do if you could. But sometimes some of the DDoS gets through, and because the VM is decentralized and just an extremely reliable thing to build on, it seems to hold up better.
will AI destroy this industry
No. This may be a controversial take, but I think using an AI to write code is an incredibly bad, if incrementally arrived at, idea. What do I mean by that? Well, AI seems like this hot new thing, but in a lot of ways it really isn't. Writing a good Javascript stack is really hard and, most importantly, expensive in man-hours. What was the solution? A bunch of frameworks and npm packages and other special stuff that adds complexity to the language. So most developers don't have to write the entirety of the stack; they basically borrow what someone else wrote and use it to fix their issues. It's a Faustian bargain though, because relying on an outside framework or an absurd number of npm packages means the developers in charge can't understand their own stack, because it is way too complex. The one sort of exception is people who have been doing web development for 15 years or so: they've seen things get added slowly enough that they can sometimes understand a bit of it.

How does this relate to AI and programming? AI for programming is the exact same type of thing, except no one can understand it, because it is orders of magnitude more complex than Javascript frameworks: so complex that no human can truly understand all of the AI, even the people who trained it.

To distill all of this into a sentence or two: modern software development tries to remove as much human labor from the process as possible, to the point of shooting itself in the foot with added complexity, and programming AI would be a further iteration of that type of decision. Modern software doesn't need massively more mangled code that barely works; it needs clear and focused attention from humans who can make clear and correct decisions.

I could also bring up the fact that Devin is something like 6% accurate, and that AI models have reached a plateau that more data cannot solve.
 
Blocking DDoS traffic before it ever becomes an issue is nice and is basically what you would want to do if you could. But sometimes some of the DDoS gets through, and because the VM is decentralized and just an extremely reliable thing to build on, it seems to hold up better.
If a component as currently written can handle 1 Gbps of requests (however you want to measure that; say a combination of the raw request data going over the wire, the parsing of it, any incidental memory allocations, plus outgoing database requests, etc.), maybe a more clever, highly parallelized multiprocessing scheme like Erlang's might give you a 10% boost, so your component can handle 1.1 Gbps of data.

That won't do shit if the DDoS is blasting you with 5 Gbps.

And that's the nature of basically every serious DDoS attack: you can't handle it on your own as an amateur, which is why services like Cloudflare and Azure's DDoS protection exist.
 
maybe a more clever, highly parallelized multiprocessing scheme like Erlang's might give you a 10% boost, so your component can handle 1.1 Gbps of data.
I can see this being the case. Decentralization is the classic way to defend against unknown security threats. It's why terrorists operate in cells and why the Roman Empire eventually collapsed into feudalism.

But the effect may not be of sufficient magnitude to actually be worth it. I'd love to build something like that to test it and see for myself but I'm not that serious of a programmer and I've been kind of busy.

Thanks for answering my original question. Maybe the best way to get an answer to a question sometimes is not to ask one, but to confidently say something that might be wrong. lel.
 
Woke up annoyed and need to sperg a bit about programming twitter, th3ogg and this modern autism of programming influencers.

I recently started looking into a programmer that appeared either here, in the Linux thread, or in the open source thread: theo gg. This dude built an npm package that sends a call to an endpoint, because he was angry with Postman. https://www.npmjs.com/package/webhookthing

He puts in the JSON that should be sent and the URL, and it sends it. Am I being autistic, or shouldn't this have just been a curl POST call, or the equivalent in whatever language? I get the point of using a 3rd-party library for something more complex, but every language in the world has a way to make a simple POST call. What is the point of this?
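
For comparison, the whole thing in plain Python (the URL and payload here are just placeholders):

import requests

# POST a JSON body to an endpoint; this is all the package does
resp = requests.post("https://example.com/endpoint", json={"hello": "world"})
print(resp.status_code, resp.text)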

When I was a little kid, programming was explained to me as basically algorithms and putting logic gates together. You'd have some memory to store things, some ways to process and change that data, some ways to have things appear on a screen, make sounds, etc., and you'd turn that into a video game. That fascinated me, and it felt like a whole new world to explore. It felt difficult as in complex; it wasn't just >make thing appear<.

Now Th3oGG (I might just have random hate for this guy, but the name is very, very gay) and other influ-grammers like him absolutely hate on everything. Everything is bad and slow, and they make it seem incredibly difficult to do anything of quality unless, of course, you use the tech he is sponsored by.

They don't actually seem to know how to code; it's all just buzzwords, acronyms, and lists of "technologies" upon "technologies", which in reality are mostly libraries of some form. I've seen job descriptions entirely based on libraries, though every programmer worth his salt should be able to learn how to use a new library. IDEs all have autocomplete anyway; if you use something for a month, you'll remember the syntax you actually need and use.

Hating on languages is pretty silly. Javascript's syntax can be a bit convoluted, but it's quite fast and not that hard to learn. If you know one language, you can learn any other. Sure, you won't be as experienced in all the small things and some ultra-optimization, but for most purposes it's fine. What we should really hate is the programmers. There must be something in the manuals, water, tutorials, or hair products of Javascript developers, because they all seem to be completely fucking retarded. All their twitter takes are just >this thing BAD< or >check out latest technology omg this is going to revolutionize the world<. They almost never post code with these cool technology updates they talk about. They use neovim but are clueless about hosting their own app; some weren't even aware you can use Javascript with PHP, or that non-Javascript backends even exist.

It's almost as if they're trying to make programming seem scary and gatekeep whatever positions they have, because deep down they know they're really button pushers. You never see the devs of super well-made shit like ffmpeg talk in acronyms and buzzwords; they usually tell you to learn assembly.

And what about "technologies" themselves? You go to the page of any buzzword, and it doesn't even explain what the fuck it does. It's all animations, big pictures, videos, testimonials, buzzwords, photos of other technologies and services. Let's take a random thing that appeared on twitter today, Prismatic. On their site it says

Low-code or code-native, we've got your back. The world's most versatile and dev-friendly embedded iPaaS gives every team in your B2B SaaS tools they'll love for building, supporting, and embedding integrations.
Their videos show nothing of what language this is even for, how you use it, or an example of what it can do; it's just "our product makes it easy to integrate all this shit that other products can't". Is this just a library with a lot of methods/classes/functions to use with <insert platform name here>? Maybe Prismatic is actually super cool and useful, but you'd have no idea from looking at their website.


Back to Theo. Here's the cream of the crop: Uploadthing. What is Uploadthing, you ask? Check this spoilered huge diagram.


Uploadthing is just a wrapper around S3. You don't call S3 directly; you call his server, then the server gives you the S3 URL that you POST to directly. Then he gets the metadata from S3 and just forwards it to you.
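
For context, that flow is just S3's standard presigned upload, which you can do yourself. A rough sketch with boto3 (the bucket and key are placeholders, and this is the generic S3 pattern, not Uploadthing's actual code):

import boto3
import requests

s3 = boto3.client("s3")

# "His server": hand the client a presigned POST so it can upload
# straight to S3 without holding any AWS credentials.
post = s3.generate_presigned_post(Bucket="my-bucket", Key="uploads/cat.png", ExpiresIn=3600)

# "You": POST the file directly to S3 using the returned URL and form fields.
with open("cat.png", "rb") as f:
    resp = requests.post(post["url"], data=post["fields"], files={"file": f})
print(resp.status_code)  # S3 returns 204 on success

# "Forwarding the metadata" is then just a HEAD on the uploaded object.
meta = s3.head_object(Bucket="my-bucket", Key="uploads/cat.png")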

So what's the point of his service? This is his main product; he has investors for this. Am I missing something here? Everything he's done seems to be useless.
 