I have found good reason not to program in 32-bit: 64-bit Windows can only run 32-bit software through the WoW64 emulation layer, which reduces performance by about 5% compared with a native 32-bit operating system, and rebuilding a 32-bit application as 64-bit improves performance anywhere from 5% to 15%.
There shouldn't really be any
performance loss running a 32-bit application on a 64-bit Windows installation on x86/amd64 hardware.
Compiling a 32-bit application to run natively on 64-bit hardware doesn't automatically improve performance in most situations either, though there are some cases where it does (e.g. some loops can run quite a bit faster on amd64 since the hardware offers more general-purpose registers, allowing the processor to do more work with fewer fetches from cache or main memory).
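Here's a rough sketch of the kind of loop that can benefit (the function and its names are invented for illustration, and the remainder elements are ignored for brevity). With eight accumulators live at once, a 32-bit x86 build typically has to spill some of them to the stack, while an amd64 build can keep all of them in registers:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical example: many simultaneously-live values. 32-bit x86
     * has only 8 general-purpose registers (some already spoken for), so
     * a few of these accumulators usually get spilled to the stack; amd64
     * has 16, so they can all stay in registers. */
    uint32_t mix_block(const uint32_t *p, size_t n)
    {
        uint32_t a = 0, b = 0, c = 0, d = 0;
        uint32_t e = 0, f = 0, g = 0, h = 0;

        for (size_t i = 0; i + 8 <= n; i += 8) {
            a += p[i + 0]; b ^= p[i + 1];
            c += p[i + 2]; d ^= p[i + 3];
            e += p[i + 4]; f ^= p[i + 5];
            g += p[i + 6]; h ^= p[i + 7];
        }
        return (a ^ b) + (c ^ d) + (e ^ f) + (g ^ h);
    }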
Going 64-bit does confer other benefits, though, like access to more memory and the ability to memory-map files larger than 4GB. If the program does anything "clever" that depends on the basic data types being a certain size (e.g. 32-bit pointers, a 32-bit int) and hardcodes those expected sizes somewhere, it'll either break in subtle or unexpected ways, or it'll still have trouble poking around beyond 4GB of memory. Updating it to drop that cleverness and rely on the standard typedefs instead fixes that, and often gives the compiler a chance to optimize the code further.
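To make the "cleverness" concrete, here's a hypothetical before/after (the struct and field names are invented). The "before" patterns are fine on 32-bit but truncate pointers or mis-size fields on 64-bit; the fix is the exact-width and pointer-sized typedefs from <stdint.h> and <stddef.h>:

    #include <stddef.h>
    #include <stdint.h>

    /* before:  int cookie = (int)user_data;   truncates a 64-bit pointer */
    /* before:  offset += 4;                   hardcodes "int is 4 bytes" */

    /* after: let the standard typedefs carry the size information */
    typedef struct {
        uintptr_t cookie;  /* always wide enough to round-trip a pointer */
        int32_t   len;     /* exactly 32 bits on every platform          */
        size_t    nbytes;  /* matches the platform's address-space width */
    } record_t;

    /* sizes derived by the compiler instead of being hardcoded */
    _Static_assert(sizeof(int32_t) == 4, "int32_t is exactly 4 bytes");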
The compiler is also often smart enough to automatically pick 64-bit instructions that do the work faster or more efficiently, rather than just emitting the literal 64-bit equivalent of each 32-bit instruction.
For instance, if you're doing simple arithmetic with 32-bit integers and you're doing a bunch of it at once (and the operations happen not to depend on each other's results, e.g. you're computing separate totals for a bunch of different "things" independently), there might be 64-bit instructions that can hold both 32-bit terms in a single 64-bit register and store the result into a 32-bit or 64-bit register, instead of requiring one register per term and a third for the result. Since there are more registers available and those simple operations use fewer of them, you can get more done before you need to fetch more data from cache or memory. Add the CPU's out-of-order and superscalar execution on top, and those independent arithmetic operations can end up being performed essentially simultaneously (or really close to it). For some workloads that's no big deal, but for things like media encoding, compression, and encryption it can provide a decent boost.
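A hedged sketch of what that looks like in practice (the scenario is invented, and the remainder elements are ignored for brevity): four running totals with no dependencies between them, which the compiler can keep in separate registers and an out-of-order core can advance several at a time; an optimizing compiler may even auto-vectorize the loop into SIMD instructions:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical example: independent per-quarter totals computed in
     * one pass. No addition depends on another's result, so a
     * superscalar, out-of-order core can execute several per cycle, and
     * each total gets its own register. */
    void totals_by_quarter(const int32_t *sales, size_t n, int64_t out[4])
    {
        int64_t q0 = 0, q1 = 0, q2 = 0, q3 = 0;

        for (size_t i = 0; i + 4 <= n; i += 4) {
            q0 += sales[i + 0];
            q1 += sales[i + 1];
            q2 += sales[i + 2];
            q3 += sales[i + 3];
        }
        out[0] = q0; out[1] = q1; out[2] = q2; out[3] = q3;
    }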
Beyond that automatic stuff, to get a noticeable performance gain from taking a program from 32-bit to 64-bit, it needs to be doing work that can actually benefit from the change, and it needs to be ported a bit so that it uses 64-bit APIs and CPU features that aren't available on 32-bit hardware. That usually doesn't take much effort (the new features tend to get exposed through updates to the standard library or through add-on libraries), but it still has to be done by hand.
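As a concrete example of that porting step, here's a minimal sketch assuming a POSIX system (the calls are standard POSIX; the file name is made up): memory-mapping a file bigger than 4GB in one shot, which a 32-bit build simply can't do because the address space isn't large enough:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("huge_dataset.bin", O_RDONLY);  /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return 1; }

        /* st.st_size is an off_t; on a 64-bit build (or with
         * _FILE_OFFSET_BITS=64) it holds sizes well past 4GB. */
        void *base = mmap(NULL, (size_t)st.st_size, PROT_READ,
                          MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* ... the whole file is now addressable as ordinary memory ... */

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }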
How does C++ or regular C compare with Rust?
C/C++ are much more mature, enjoy widespread use, have an absurd amount of quality tooling, have high-quality standard libraries, and are ubiquitous, available on practically every imaginable platform (OSes and hardware) from itty-bitty hobby boards to gargantuan overpriced mainframes. Rust is newer, less mature, not nearly as widely used, and is being pushed aggressively (to many people's annoyance) by Google and its evangelists, which automatically makes it suspect to a lot of people.
So if Rust is supposed to be superior to C, why are its compile times so long?
I don't think it's superior to C/C++. Its compilation is slow partly because the toolchain is young and hasn't matured, partly because the language gives the compiler far more work to do per line than C does (borrow checking, heavy monomorphization of generics), and partly because its developers are so busy pushing the language that they don't spend a lot of time actually improving its tools.
Who else can achieve six nines of uptime? Have you heard Facebook is working on a typed Erlang compiler?
Nice! I seriously need to spend some quality time learning Erlang properly to advance beyond "curious dabbler" territory with it.
If you get outside the normal codemonkey business projects, it's used quite a bit. I programmed in it for three years on a gov TS project which I obviously can't go into detail about, but let's just say it was shit involving "critical defense communication under extremely low baud rates". Shooting the shit around base with other gov contractors (if you have a TS clearance but are still private sector, you're in a small pool, so you know a lot of the other people), they all knew of or had worked on previous Erlang projects.
Because of that little bit of Erlang experience, I had Dropbox people blowing up my phone for months asking about interviews and relocation.
That's great to hear! Maybe it's time to seriously consider a career shift into that area to escape the modern-day "nodejs backend and react/vue frontend" business loop. It pays handsomely, but fuck me, it's super-boring shit.
I've done some DoD contracting in the past and had a Tricare-related security clearance for many years, so at least I'm "in the system." Maybe it wouldn't be such a large leap to get back into it if I've got some Erlang knowledge to sweeten the deal.
I see the Rust "community" is already tying itself into a pretzel trying to explain why Rust didn't replace C, despite being extensively "advertised" as exactly that.
Turns out, again and again, that the easiest response to a failure is incessant goalpost moving. Now it's going to replace C++. Next up, it'll be Go or Java or Python or whatever.
Every "this is gonna replace C/C++!" language goes through this learning process the hard way. It's fun to watch. It also tickles me that Google is backing Rust so hard, and not its own language (Go). Talk about not having faith in your own products.