Programming thread

What language and database should I use for the backend of my websites? I could use JavaScript with Node.js and MongoDB, Ruby on Rails, or PHP and MySQL. There are so many that I don't know which one to choose. If it helps, I want to make forums for my websites and maybe get a job, so what would you guys recommend?
 
What language and database should I use for the backend of my websites?
If you would like to get a job, C# is a very commonly used language in business, and if you know it and the associated ecosystem well, you'll never be unemployed. Of course, it's not the only ecosystem.

There are tons of jobs in PHP too, although they are pretty much exclusively in the web development/digital agency space. Personally, I don't like the language, mainly because C# has had type safety baked into the language and compiler from the start, whereas it was bolted onto PHP over many years and often has to be opted in to. By default, PHP still coerces strings containing numbers to integers when they're used in an integer operation (unless strict_types is on), which can introduce unexpected errors and performance problems. This isn't to say you can't write excellent code in PHP, but it's definitely rarer to find.

You might also choose Node because, as I recall, you were already learning JavaScript, but the two styles of programming are very different, and type safety was never baked into the language. TypeScript is nice but not perfect.
 
What language and database should I use for the backend of my websites? …
The MERN stack (MongoDB, Express, React, Node) is very easy to work with and uses JavaScript everywhere, so you don't need to juggle multiple languages.
It also has tons of online documentation and video tutorials, so it's easy to learn.
If you search for "Forum built with MERN stack", you'll find guided projects that go through every step and teach you along the way.

C# and Java have some good web frameworks now (Blazor for C# and Spring for Java) and are very popular on the job market, like glow said (especially Java where I live), but they're verbose and opinionated about how you should work with them, so I don't think they make sense for small personal projects, especially if it's your first time building a full web app.
 
If it's your first time doing something like this, I'd say PHP and MySQL. Everyone hates PHP, but it refuses to die.
PHP 9 might be a breakthrough, or it could be terrible. Depends on what the PHP developers decide is wrong today.

They've been messing with the language at a breakneck pace lately, deprecating and removing things that were broken and/or stupid. The men at the helm seem to know where they're going, but coming from a Perl background, it'll be a while before I forgive them for deprecating the ${var} interpolation syntax.

What language and database should I use for the backend of my websites? …
You can't go wrong with a LAMP (Linux, Apache, MySQL, PHP) stack to start with; it's still very popular. Nowadays you'll often find MariaDB as the "M" there: it's mostly a drop-in replacement and doesn't have Oracle's sticky fingers on it.
 
I've been doing a lot of programming in Go lately, and I've caught myself doing optimizations that I'm not sure are even worth it and wanted to get everyone's opinions. Basically, I'm trying to use the smallest type of int possible whenever I can. If I know a number is never going to go above 255 or below 0, I'll use a uint8. If it's never going above 65535, a uint16. And so forth.

It's not exactly a ton of extra effort, but I also don't know if it's just getting compiled away, or if the compiler would end up doing the same or better job handling everything as a standard int. I've also heard that, because processors operate on words rather than bits/bytes, this kind of thing ultimately doesn't matter since they all take up one word of memory anyway.

As for the type of app, it's a server for a game. Lots and lots of structs and lots and lots of TCP packets that all need to operate with as little delay as possible, so it really does matter whether an NPC's health takes up 64 or 16 bits.
 
I've been doing a lot of programming in Go lately, and I've caught myself doing optimizations that I'm not sure are even worth it… 
I think this is just good coding practice.
 
I've been doing a lot of programming in Go lately, and I've caught myself doing optimizations that I'm not sure are even worth it… 
I don't know Go, but structs are your friend, properly packed. You'll find it's a trade-off between alignment performance and size. Network packets have a given overhead, so a byte or two that improves CPU performance is worth it; 500 extra bytes, probably not. Also, minimize copying of data, and REALLY minimize anything that involves moving elements around one at a time. That is to say, the in-memory struct should be the network wire struct. Don't do "net.health = internal.health; net.xp = internal.xp; ..."

Googling "golang struct performance" seems to give some hints.

Testing is often the only way to be sure.
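
For what it's worth, Go lets you check the effect of field ordering directly with unsafe.Sizeof; here's a minimal sketch (the struct names are made up for illustration):

Code:
package main

import (
	"fmt"
	"unsafe"
)

// Hypothetical layouts: same fields, different order.
type NPCPadded struct {
	Alive bool   // 1 byte + 7 bytes of padding before HP
	HP    uint64 // must start on an 8-byte boundary
	Armor uint8  // 1 byte + 7 bytes of tail padding
}

type NPCPacked struct {
	HP    uint64 // largest field first
	Alive bool
	Armor uint8 // only 6 bytes of tail padding now
}

func main() {
	fmt.Println(unsafe.Sizeof(NPCPadded{})) // 24 on 64-bit
	fmt.Println(unsafe.Sizeof(NPCPacked{})) // 16 on 64-bit
}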
 
I don't know Go, but structs are your friend, properly packed. … That is to say, the in-memory struct should be the network wire struct. …
If endianness isn't an issue, sure. I don't know if Go is handholdingy enough to stop you from doing this, on the grounds that it's technically UB (in C).
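
Go's idiomatic route for the wire format is encoding/binary, which makes the byte order explicit instead of copying whatever the host happens to use. A minimal sketch (NPCState is a made-up example):

Code:
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Hypothetical wire struct: fixed-size fields only, so
// encoding/binary can serialize it in a single call.
type NPCState struct {
	ID     uint16
	Health uint16
	X, Y   float32
}

func main() {
	var buf bytes.Buffer
	// The byte order is spelled out, so both ends of the
	// connection agree regardless of host endianness.
	if err := binary.Write(&buf, binary.BigEndian, NPCState{ID: 7, Health: 100}); err != nil {
		panic(err)
	}
	fmt.Println(buf.Len()) // 12 bytes: 2 + 2 + 4 + 4
}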

It's not exactly a ton of extra effort, but I also don't know if it's just getting compiled away… 
I'd always give the compiler the opportunity to make the optimization, even if it doesn't take it.
 
I've been doing a lot of programming in Go lately, and I've caught myself doing optimizations that I'm not sure are even worth it… 

There's no compute benefit to using small int types on modern CPUs. x86-64 integer registers are 64 bits wide, so if you do 8-bit math, the CPU loads the value into a full register and does full-width math on it, with the result truncated so that 255 + 1 = 0. Where small types do pay off is density: if you malloc an array of 8-bit ints, eight int8_ts get packed into every 64 bits of memory. I don't know how Go handles structs, but C aligns structs based on their largest member:
https://levelup.gitconnected.com/how-struct-memory-alignment-works-in-c-3ee897697236

If you do need to optimize performance & memory at that level, though, use structs of arrays.

Bad (C++ not Go because I don't learn heretic languages):

Code:
#include <array>
#include <cstdint>
#include <vector>

struct Players
{
    struct Player
    {
        std::array<float, 3> xyz; // 12 bytes
        uint16_t id;
        uint16_t health;
        uint8_t ammo;             // + tail padding to align the struct
    };

    std::vector<Player> players;

    Player &getPlayer(int i)
    { return players[i]; }
};

Good:

Code:
#include <cstdint>
#include <vector>

struct Players
{
    // A lightweight view: references into the per-field arrays.
    struct Player
    {
        float &x;
        float &y;
        float &z;
        uint16_t &id;
        uint16_t &health;
        uint8_t &ammo;
    };

    std::vector<float> x;
    std::vector<float> y;
    std::vector<float> z;
    std::vector<uint16_t> id;
    std::vector<uint16_t> health;
    std::vector<uint8_t> ammo;

    Player getPlayer(int i)
    { return {x[i], y[i], z[i], id[i], health[i], ammo[i]}; }
};

The benefit of the latter is that you don't have to worry about getting the data packing right, and the memory controller can better optimize loops through the player data, since you have multiple memory streams with small strides rather than one stream with a big stride.
 
I've been doing a lot of programming in Go lately, and I've caught myself doing optimizations that I'm not sure are even worth it… 
This is a very common practice in embedded systems work. Just be mindful of memory alignment when packing small types into structs. The CPU always reads at its word size, so reading unaligned data means the more expensive job of reading the multiple words your data straddles.

Some CS prof buddies of mine have made some good arguments for using smaller types to create more well-defined behaviour by limiting the range of values certain parts of your program work with. You have to be a lot more mindful of handling potential wraparound if you do this, though (see the sketch at the end of this post).

So there are benefits to doing it as long as you do it in a thoughtful manner.

Edit: Here is an old IBM technical article about alignment that I like a lot.
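
On that wraparound point, the usual guard is a saturating helper; a minimal sketch (addHealth is hypothetical, not from any library):

Code:
// addHealth clamps at the top of the uint8 range instead of
// silently wrapping 250 + 10 around to 4.
func addHealth(h, delta uint8) uint8 {
	if h > 255-delta {
		return 255
	}
	return h + delta
}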
 
I've been doing a lot of programming in Go lately, and I've caught myself doing optimizations that I'm not sure are even worth it… 
Don't bother. You're painting yourself into a corner where the compiler would probably do a much better job than you would by hand.

And if it's for the sake of TCP performance, you're already missing the boat. TCP performance is unpredictable by default.

Write it logically, abstract as much as you can away. Leave room for this kind of fiddly optimization for when you need it, but don't start off getting into the weeds from the beginning.

Let the compiler do its best job at it, then profile and find out where your hot spots are. And then get into the fiddly bullshit.
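
In Go specifically, the standard library makes that profiling step cheap to set up; a minimal sketch using net/http/pprof (the port is arbitrary):

Code:
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers
)

func main() {
	// Expose the profiler alongside the game server, then grab a
	// CPU profile with:
	//   go tool pprof http://localhost:6060/debug/pprof/profile
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // stand-in for the real server loop
}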
 
Small type sizes are better for reducing memory usage and serialization size, and don't matter as much for general performance. They are useful for increasing SIMD efficiency, IIRC, if you are willing to go Real Programmer and do high-performance logic with assembly/intrinsics.

If you have millions of these entities, these optimizations might be lifesavers. If you have dozens, it doesn't matter. Like others in this thread said, it's a great idea to profile and test to see what exactly needs optimization. A lot of gains in your project could be made architecturally (making things multithreaded, switching from TCP to UDP, or using weird delta shit in your network format).
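
A micro-benchmark is the cheapest way to test something like the field-width question; a minimal sketch with made-up struct names, saved in a _test.go file and run with go test -bench=.:

Code:
package game

import "testing"

// Hypothetical structs: same fields at different widths.
type npc64 struct{ HP, XP, Gold uint64 } // 24 bytes per NPC
type npc16 struct{ HP, XP, Gold uint16 } // 6 bytes per NPC

var sink uint64 // keeps the compiler from eliding the loops

func BenchmarkSum64(b *testing.B) {
	npcs := make([]npc64, 1<<16)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var s uint64
		for _, n := range npcs {
			s += n.HP
		}
		sink = s
	}
}

func BenchmarkSum16(b *testing.B) {
	npcs := make([]npc16, 1<<16)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var s uint64
		for _, n := range npcs {
			s += uint64(n.HP)
		}
		sink = s
	}
}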

Another optimization that would help with large numbers of entities would be doing what @The Ugly One suggested and using structs of arrays instead of arrays of structs (rough Go sketch at the end of this post). This has a buzzword name: it's called "data-oriented programming".

NOTE: For now, do none of these. Make your shit work before trying to make the server able to do 4000 ticks per second with 20,000 entities, because if you optimize first you will have brittle code that can't be molded into doing the things you want easily.
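
When you do get to that point, the struct-of-arrays idea translates to Go roughly like this (field and type names are just for illustration):

Code:
// Array-of-structs: one stream with a large stride per player.
type PlayerAoS struct {
	X, Y, Z float32
	ID      uint16
	Health  uint16
	Ammo    uint8
}

// Struct-of-arrays: one densely packed slice per field, so a loop
// that only touches Health streams through contiguous uint16s.
type PlayersSoA struct {
	X, Y, Z []float32
	ID      []uint16
	Health  []uint16
	Ammo    []uint8
}

// Example hot loop: heal everyone without dragging the other
// fields through the cache.
func (p *PlayersSoA) HealAll(amount uint16) {
	for i := range p.Health {
		p.Health[i] += amount
	}
}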
 
Do you guys use classes for JavaScript? I haven't learned about them yet, and don't functions, objects, and modules pretty much do everything a class can? I know classes are important for other languages like Java, but are they for JavaScript?
 
Do you guys use classes for JavaScript? …
Main appeal of classes in JS is that they're familiar to devs coming from other OOP languages. Last time I checked, prototypes were better in terms of performance.
 
Do you guys use classes for JavaScript? …
The main thing that classes give you, other than being nice to use, is private and static fields. You might be able to achieve this with some bastard arrangement of global variables, or define them in a closure somewhere so that they're not accessible to the outside, but classes just package them up naturally.

A private field is only accessible within class methods, e.g. you could do:
JavaScript:
class MyClass {
  #privateValue;
  getValue() {
    return this.#privateValue;
  }
  setValue(val) {
    this.#privateValue = val;
  }
}
Obviously this isn't a particularly useful example, but now any MyClass object will have get/set methods that access the private #privateValue field, which isn't accessible from the outside otherwise. You could use private fields to store aspects of the object's internal state in a way that wouldn't be meaningful to the outside, or use private methods to perform internal work that you'll only need to call from other class methods.

Static fields are global to the class: if MyClass defined a static #staticValue, then any class method could access MyClass.#staticValue, but only class methods could access it. Note that every instance of the class accesses the same value. You could use this to have the class keep track of how many instances of it have been created (or are "active").
 
… They are useful for increasing SIMD efficiency, IIRC, if you are willing to go Real Programmer and do high-performance logic with assembly/intrinsics.

The best advice I ever got on this came from an Intel dev: "99% of anything you want to do with SIMD is already in MKL, and the rest gets picked up by the compiler." His point was that, at best, hand-rolling intrinsics gets you single-digit gains at the application level compared to letting the compiler handle it, and for any kind of bulk computation (like an FFT), it's already been done by somebody smarter than you. I was told this 10 years ago and have yet to find a counterexample.
 