Programming thread

I had to write a full Scheme compiler in C using the classical CPS+trampoline based approach.
Nice.

I'd like to see students tasked with a Cheney on the MTA approach nowadays.
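For anyone who hasn't seen the trick: each compiled function returns its tail call instead of making it, and a tiny driver loop keeps "bouncing," so the C stack never grows. A minimal sketch in C-flavored C++ (all names invented for illustration):
Code:
#include <cstdio>

struct Thunk;
using Fn = Thunk (*)(long long acc, long long n);

// A reified tail call: the next step for the driver to run, plus its
// arguments.
struct Thunk {
    Fn next;        // next step, or nullptr when finished
    long long acc;  // accumulator to pass along
    long long n;    // counter to pass along
};

// sum(0..n) in continuation-passing style: "calling" the next step
// means returning it.
Thunk sum_step(long long acc, long long n) {
    if (n == 0) return {nullptr, acc, 0}; // done: result is in acc
    return {sum_step, acc + n, n - 1};    // the tail call, as data
}

// The trampoline: bounce until a step reports it's finished.
long long trampoline(Fn f, long long acc, long long n) {
    Thunk t = {f, acc, n};
    while (t.next) t = t.next(t.acc, t.n);
    return t.acc;
}

int main() {
    // A million-deep "recursion" with zero stack growth.
    std::printf("%lld\n", trampoline(sum_step, 0, 1000000)); // 500000500000
}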
I have always wondered what the web would be like with something closer to pure Scheme or maybe Racket. JS's weird hybrid nature lets web devs make some of the worst design decisions possible and can really harm readability if sane decisions weren't made early on, and that's my primary beef with it. Racket, for example, can work as a functional/imperative hybrid if you really want it to, but at some point you might as well give up and use Python or something instead (or hy, which I have found pretty fun to work with). If it weren't for retards in professional development roles and all of the god awful shit they make with JS, I would likely be singing its praises a lot more.
I would prefer Scheme, but I think I've been burned too many times by trusting standardization bodies to set up any kind of standard for something like this.

I would prefer something lower level, like a standard for an abstract machine language, or maybe something like Tcl.

You can build whatever kind of language you want on such a simple substrate, and you have no one to blame but yourself if weird warts develop in your language.
 
Nice.

I'd like to see students tasked with a Cheney on the MTA approach nowadays.
There are a million different ways to implement Turing-complete programming languages, and they're all cool.
I would prefer Scheme, but I think I've been burned too many times by trusting standardization bodies to set up any kind of standard for something like this.

I would prefer something lower level, like a standard for an abstract machine language, or maybe something like Tcl.

You can build whatever kind of language you want on such a simple substrate, and you have no one to blame but yourself if weird warts develop in your language.
JavaScript isn't too bad as a primary Web language; it could always be worse. There is WebAssembly as an abstract machine language, but it doesn't have DOM support and you need JS to initialize it anyway. If all JS engines and JS code were fully deleted from existence, an abstract bytecode VM would probably be part of whatever gets built to replace it, and people would use C, C++, and whichever C/C++ replacement is popular this week on it. As it stands, JS is okay for adding simple client-side interactivity to HTML pages, which is what it was designed for.
 
  • Agree
Reactions: y a t s and Marvin
Though it can, with Node!


I think browsing with JS disabled is pretty much impossible nowadays, unless you're very determined. And Googlebot is supposedly able to run simple JavaScript code for page rendering, but I don't know how well it does it.
So do people use Node for the backend and MySQL as their database?
 
So do people use Node for the backend and MySQL as their database?
There are multiple technology stacks that people use to build websites. These stacks are basically suites of software components that work well together. You asked a question about what is probably LAMP (Linux, Apache, MySQL/MariaDB, PHP), but there are others like MERN (MongoDB, Express, React, Node) that use server-side JavaScript. You can use any language and database as a backend. All the backend needs to do is support some kind of standard interface like HTTP or FastCGI, and the server you're using can do the rest. You could make your website using C and it would be indistinguishable from PHP or JavaScript from the user's point of view.
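To make that last point concrete, here's a complete (if useless) backend written against plain old CGI; FastCGI is the same idea with a persistent process. Nothing about it is PHP- or JS-specific:
Code:
// hello.cgi.cpp -- a complete "backend" in C++, speaking the CGI interface.
// The web server sets environment variables and forwards our stdout.
#include <cstdio>
#include <cstdlib>

int main() {
    const char* qs = std::getenv("QUERY_STRING"); // set by the server
    std::printf("Content-Type: text/html\r\n\r\n");
    std::printf("<h1>Hello from C++</h1>\n");
    std::printf("<p>Query string: %s</p>\n", qs ? qs : "(none)");
}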
 
After doing some more work on the server, I'd like to reframe my question, because it's become pretty clear that the server itself isn't going to have any problems even if I make everything a generic 64-bit int.

The issue that's become much clearer is packet size. If I micromanage my types, I can cut packet size down to a fraction of what it would be otherwise. A single NPC position update would be around 40 bytes with standard ints, but 8 if I make all my fields as small as possible. This game doesn't work with big numbers, so 99% of the data I'd send with 64-bit ints would just be useless padding.

Now, I don't actually know whether saving 32 bytes per packet would have any real-world meaning. Of course it's always good to reduce junk data and slash bandwidth, but it comes with a cost. For example, if each packet consists of same-sized data, it's super easy to parse. Mixed sizes, not so much. So it's a question of whether or not reducing packet size is worth the extra effort it takes.

It's an extremely ballpark estimate, but I'd guess at peak my server would be sending somewhere in the ~500,000 packets per second range. That's 20 MB/s for unoptimized packets, 4 for optimized. And that's BEFORE TCP overhead (yes I'm using TCP, yes I have my reasons). Neither is exactly insurmountable with a modern connection, but upstream is expensive.
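For concreteness, here's roughly what the micromanaging looks like (the field layout is invented for illustration):
Code:
#include <cstdint>

// Naive version: everything a 64-bit int -> 5 * 8 = 40 bytes.
struct UpdateWide {
    int64_t npc_id, x, y, z, facing;
};

// Micromanaged version: sized to the ranges the game actually uses.
struct UpdateSmall {
    uint16_t npc_id;  // up to 65535 NPCs
    uint16_t x, y;    // map coordinates fit in 16 bits
    uint8_t  z;       // a handful of height levels
    uint8_t  facing;  // 8 directions
};

// Explicit little-endian serialization: exactly 8 bytes on the wire,
// independent of compiler padding and host byte order.
void serialize(const UpdateSmall& u, uint8_t out[8]) {
    out[0] = u.npc_id & 0xff;  out[1] = u.npc_id >> 8;
    out[2] = u.x & 0xff;       out[3] = u.x >> 8;
    out[4] = u.y & 0xff;       out[5] = u.y >> 8;
    out[6] = u.z;
    out[7] = u.facing;
}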
 
After doing some more work on the server, I'd like to reframe my question, because it's become pretty clear that the server itself isn't going to have any problems even if I make everything a generic 64-bit int.

The issue that's become much clearer is packet size. If I micromanage my types, I can cut packet size down to a fraction of what it would be otherwise. A single NPC position update would be around 40 bytes with standard ints, but 8 if I make all my fields as small as possible. This game doesn't work with big numbers, so 99% of the data I'd send with 64-bit ints would just be useless padding.

Now, I don't actually know whether saving 32 bytes per packet would have any real-world meaning. Of course it's always good to reduce junk data and slash bandwidth, but it comes with a cost. For example, if each packet consists of same-sized data, it's super easy to parse. Mixed sizes, not so much. So it's a question of whether or not reducing packet size is worth the extra effort it takes.

It's an extremely ballpark estimate, but I'd guess at peak my server would be sending somewhere in the ~500,000 packets per second range. That's 20 MB/s for unoptimized packets, 4 for optimized. And that's BEFORE TCP overhead (yes I'm using TCP, yes I have my reasons). Neither is exactly insurmountable with a modern connection, but upstream is expensive.
TCP is just fine; you just can't make any solid bets about how it performs. Quit micromanaging, or switch to UDP. Actually, I recommend against switching to UDP.

You're fine. If you're not, start profiling, figure out where your limitations are, and then maybe switch to UDP. Otherwise you're fine.
 
Kind of a basic question here. I want to have a class (C++) that uses an external library like SDL. SDL has a type defined in SDL.h called SDL_Window. I want to have one of these as a member of my class, but if I do this in Foo.h
Code:
#include "SDL.h"
#include "SDL_Vulkan.h"

class Foo {
public:
    Foo();
    ~Foo();

private:
    SDL_Window* my_window;
};
any other files that include Foo.h are also going to include SDL.h and SDL_vulkan.h. Is there a way to include the SDL headers in Foo.cpp while still using the SDL types in the header?

ETA: I ended up just forward declaring it like:
Code:
Foo.h
------
struct SDL_Window;

class Foo {
public:
    Foo();
    ~Foo();

private:
    SDL_Window* my_window;
};

Foo.cpp
--------
#include <SDL/SDL.h>
etc, etc
So my question now is: Is this the best way to do it? I'm gonna feel really dumb if it is.
 
Last edited:
  • Thunk-Provoking
Reactions: Creative Username
the compiler needs to know the actual size of the class so it can allocate the correct amount of memory at instantiation.
I suspected as much. What if you declared void*s and fixed-width intN_ts instead? It would work, but would it be portable? I think so, but the standard can be funny sometimes.
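What I mean, roughly (illustrative, not something I'd necessarily ship):
Code:
// Foo.h -- no SDL headers and no forward declarations at all
class Foo {
public:
    Foo();
    ~Foo();

private:
    void* my_window; // actually an SDL_Window*; cast inside Foo.cpp
};

// Foo.cpp would then do:
//   #include <SDL/SDL.h>
//   ... static_cast<SDL_Window*>(my_window) at every point of use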
 
After doing some more work on the server, I'd like to reframe my question, because it's become pretty clear that the server itself isn't going to have any problems even if I make everything a generic 64-bit int.

The issue that's become much clearer is packet size. If I micromanage my types, I can cut packet size down to a fraction of what it would be otherwise. A single NPC position update would be around 40 bytes with standard ints, but 8 if I make all my fields as small as possible. This game doesn't work with big numbers, so 99% of the data I'd send with 64-bit ints would just be useless padding.

Now, I don't actually know whether saving 32 bytes per packet would have any real-world meaning. Of course it's always good to reduce junk data and slash bandwidth, but it comes with a cost. For example, if each packet consists of same-sized data, it's super easy to parse. Mixed sizes, not so much. So it's a question of whether or not reducing packet size is worth the extra effort it takes.

It's an extremely ballpark estimate, but I'd guess at peak my server would be sending somewhere in the ~500,000 packets per second range. That's 20 MB/s for unoptimized packets, 4 for optimized. And that's BEFORE TCP overhead (yes I'm using TCP, yes I have my reasons). Neither is exactly insurmountable with a modern connection, but upstream is expensive.
TCP is going to cost you at least 20 bytes per packet.
UDP 8

BUT. wait... there's more.
IP itself costs you 20 bytes... TCP now 40, UDP 28
IPv6 is 40 bytes instead, you're going to support IPv6, right...

BUT, wait... there's still more
Presumably you're going to send these packets over some sort of "Network" which initially goes over a "Wire"... which ALSO has a header.
Ethernet costs you another 14 bytes, (the framing and header lengths on your WAN link upstream of you to your ISP may vary but Ethernet is a good minimum)

Your IPv4 TCP packet is now 54 bytes long and you haven't sent any data.

So, at the end of the day, your data payload being 40 vs 8 works out to less than a factor of 2 on the wire (94 vs. 62 bytes per packet). Pushing that many tiny packets around is going to suck up CPU like you won't believe anyway; it's likely the absolute data size will be almost irrelevant.
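If you want to play with the numbers yourself:
Code:
#include <cstdio>

int main() {
    // Minimum header sizes in bytes: Ethernet + IPv4 + TCP.
    const int eth = 14, ipv4 = 20, tcp = 20;
    const int overhead = eth + ipv4 + tcp; // 54 bytes before any payload

    const int payloads[] = {40, 8};
    for (int payload : payloads)
        std::printf("payload %2d -> %d bytes on the wire\n",
                    payload, overhead + payload);
    // prints 94 and 62: a ~1.5x difference, not the 5x the payloads suggest
}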
 
So my question now is: Is this the best way to do it? I'm gonna feel really dumb if it is.
Yes it is.

There's also the "pimpl" pattern:
Code:
struct Foo_impl;

class Foo {
public:
    Foo();
    ~Foo();

private:
    Foo_impl* impl;
};

The idea being that all the real work is done in Foo_impl, and you can do what you like with it without bothering anybody else. It does add the overhead of dereferencing a pointer though, so it's not necessarily the best way of doing things.
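For completeness, the matching Foo.cpp would look something like this (a sketch using the SDL example from above; SDL_Init and error handling omitted):
Code:
// Foo.cpp -- the only file that ever sees the real SDL headers.
#include "Foo.h"
#include <SDL/SDL.h>

struct Foo_impl {
    SDL_Window* window = nullptr;
};

Foo::Foo() : impl(new Foo_impl) {
    impl->window = SDL_CreateWindow("title", 0, 0, 640, 480, 0);
}

Foo::~Foo() {
    SDL_DestroyWindow(impl->window);
    delete impl;
}
These days you'd probably hold the impl in a std::unique_ptr instead of a raw pointer, with the caveat that the destructor still has to be defined in Foo.cpp, where Foo_impl is complete.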
 
Your IPv4 TCP packet is now 54 bytes long and you haven't sent any data.

So, at the end of the day, your data payload being 40 vs 8 works out to less than a factor of 2 on the wire (94 vs. 62 bytes per packet). Pushing that many tiny packets around is going to suck up CPU like you won't believe anyway; it's likely the absolute data size will be almost irrelevant.
Fair point. Minimizing packet size is probably not terribly important unless my game becomes massive enough to warrant micro-optimizations. If anything, I should focus on combining data into bigger, less frequent packets, which was my plan anyway. The (frankly absurd) overhead makes small packets a generally bad idea unless they can't be avoided.
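Something like one length-prefixed packet per tick (a sketch, reusing the made-up 8-byte update format from my earlier post):
Code:
#include <array>
#include <cstdint>
#include <vector>

// One packet per tick: the ~54 bytes of TCP/IP/Ethernet overhead get
// paid once per tick instead of once per NPC update.
std::vector<std::uint8_t>
build_tick_packet(const std::vector<std::array<std::uint8_t, 8>>& updates) {
    std::vector<std::uint8_t> out;
    const auto n = static_cast<std::uint16_t>(updates.size());
    out.push_back(n & 0xff); // little-endian update count
    out.push_back(n >> 8);
    for (const auto& u : updates) // then the updates, back to back
        out.insert(out.end(), u.begin(), u.end());
    return out;
}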
 
  • Like
Reactions: Creative Username
I fucking hate Visual Basic. Is there a less painful way to code and build C++?
 
I have always wondered what the web would be like with something closer to pure Scheme or maybe Racket.
It's not so much a Scheme as it is a Haskell, but Elm is an interesting take on web design. I find the language itself pleasant and very natural to develop in, even though it does railroad your design into its own MVC loop. Partial application feels awesome; it's a very clean answer to coupling. Unfortunately, it seems like the project is well and truly dead. The interface for calling JS still feels a little hacked together, the JSON decoder/encoder took a couple of tries to figure out, and the strong typing really punishes additions sometimes.
hy, which I have found pretty fun to work with
I do love Lisp syntax and I do envy all of Python's libraries. Looks like I'll need a new project to start in hy.
 
  • Like
Reactions: y a t s
i have a genuine question for you dev boys.
i remember talking to some senior dev troon on discord and he said that you don't need to optimize much as the compiler does it for you.
now i know compilers do apply some optimizations, but i also know that they aren't a catch-all and that some shit doesn't get optimized.

the troon argued with me saying my views were outdated and i ended up ignoring him, but it does still linger on my mind.

is it actually beneficial to optimize your code before compiling or not? i know that it depends on the code itself, but i'm talking about in general.
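like, i'd expect something like this to get folded away entirely at -O2, but no optimizer is going to swap a linear search for a hash map on its own:
Code:
// both gcc and clang compile this down to "return 5050;" at -O2
int sum_to_100() {
    int s = 0;
    for (int i = 1; i <= 100; ++i) s += i;
    return s;
}

// this stays O(n) forever; picking a better data structure is on you
bool contains(const int* xs, int n, int v) {
    for (int i = 0; i < n; ++i)
        if (xs[i] == v) return true;
    return false;
}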
 
  • Like
Reactions: Creative Username