Floating Point Discussion

Alex Hogendorp

One of the most widespread uses of technology is floating point arithmetic, stored in formats of various bit widths. The most common are single precision (32-bit) and double precision (64-bit), with quadruple precision (128-bit) rarely used and octuple precision (256-bit) not implemented in any hardware whatsoever. These formats dictate how computers handle positions, scores, statistics, mathematics, timekeeping, etc. IRL, but they can only cover a limited range of values (the ceiling works out to roughly a power of 2) before they overflow.
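For anyone curious what those formats actually look like, here's a minimal sketch that pulls a 32-bit float apart into its sign, exponent, and mantissa fields (assuming float is IEEE-754 binary32, which it is on basically every modern machine):

C:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the float's raw bits */

    uint32_t sign     = bits >> 31;            /* 1 bit                     */
    uint32_t exponent = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127     */
    uint32_t mantissa = bits & 0x7FFFFF;       /* 23 bits of fraction       */

    /* -6.25 = -1.5625 * 2^2, so sign=1, exponent=129, mantissa=0x480000 */
    printf("sign=%u exponent=%u mantissa=0x%06X\n", sign, exponent, mantissa);
    return 0;
}

A double works the same way, just with 11 exponent bits and 52 mantissa bits.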

On January 19, 2038, for example (a time most of Europe would sleep through, as it happens at around 3 AM for them), 32-bit computers will enter a timekeeping overflow for the first time, as the Unix epoch is set to January 1, 1970 (setting a phone's date back to that bricks some of them, by the way). The Unix time surpasses 2,147,483,647, which is the signed 32-bit limit. For these computers, it rolls over to -2,147,483,648, which corresponds to December 1901. For 64-bit computers, the first overflow won't happen for billions of years.
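Here's a minimal sketch of that rollover, assuming a platform with a 64-bit time_t (e.g. modern Linux/glibc) so gmtime() can actually represent dates before 1970:

C:
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static void print_utc(int64_t secs)
{
    time_t t = (time_t)secs;
    struct tm *utc = gmtime(&t);           /* seconds since the epoch -> UTC date */
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", utc);
    printf("%lld -> %s UTC\n", (long long)secs, buf);
}

int main(void)
{
    print_utc(INT32_MAX);                  /* 2038-01-19 03:14:07, the last good second */
    print_utc((int64_t)INT32_MIN);         /* 1901-12-13 20:45:52, where a signed
                                              32-bit counter wraps to */
    return 0;
}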

The absolute limit of what a double (the 64-bit float) can hold is close to 10^308, regardless of whether the computer itself is 32-bit or 64-bit; go past that and the value overflows to Infinity (with NaN showing up once you do things like Infinity minus Infinity). Desmos is a powerful calculator that takes liberties with large numbers, but it can never get past that threshold. You can zoom out as far as about 10^300, and once it's that far out you can't zoom any further; pushing it may cause the app to crash or behave erratically.
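That ceiling is right there in <float.h>; a quick sketch showing the limits and the overflow to infinity:

C:
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    printf("FLT_MAX = %e\n", FLT_MAX);     /* ~3.4e38,  largest finite 32-bit float */
    printf("DBL_MAX = %e\n", DBL_MAX);     /* ~1.8e308, largest finite 64-bit float */

    double big = DBL_MAX * 2.0;            /* past the finite range */
    printf("DBL_MAX * 2 is infinite: %d\n", isinf(big));   /* prints 1 */
    return 0;
}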

It is widely understood in video games such as Minecraft that going out far enough makes the game act stranger and stranger. Hence the famous artifact known as the Far Lands, which would generate at around 12,550,820 blocks from spawn, along with noticeable precision loss when placing objects. This led to a series of mods expanding Minecraft's limits to see the full extent of the structure: the further you go, the more it breaks down into fringes, out to around a decillion blocks from spawn. That's well past the 64-bit integer limit (I don't know how they do it, but they do).
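The precision-loss part is easy to demonstrate: by the time a 32-bit float gets out to the Far Lands distance, the gap between one representable value and the next is already a full block. A small sketch using nextafterf:

C:
#include <stdio.h>
#include <math.h>

int main(void)
{
    float near_spawn = 16.0f;
    float far_lands  = 12550820.0f;   /* roughly the Far Lands distance */

    /* Distance to the next representable float, i.e. the placement granularity. */
    printf("spacing near spawn:   %.10f\n", nextafterf(near_spawn, INFINITY) - near_spawn);
    printf("spacing at Far Lands: %.10f\n", nextafterf(far_lands, INFINITY) - far_lands);
    return 0;
}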
 
On January 19, 2038, for example (a time most of Europe would sleep through, as it happens at around 3 AM for them), 32-bit computers will enter a timekeeping overflow for the first time, as the Unix epoch is set to January 1, 1970 (setting a phone's date back to that bricks some of them, by the way). The Unix time surpasses 2,147,483,647, which is the signed 32-bit limit. For these computers, it rolls over to -2,147,483,648, which corresponds to December 1901. For 64-bit computers, the first overflow won't happen for billions of years.
This isn't to do with floats, though. It's also not strictly to do with the underlying architecture; recent versions of Unix-like systems change time_t to be 64 bits regardless.

Anyway, here are two nice IEEE-754 visualizers for the curious:

Edit: And here's William Kahan talking about creating IEEE-754.
Kahan on creating IEEE Standard Floating Point (archive)
 
On January 19, 2038, for example (a time most of Europe would sleep through, as it happens at around 3 AM for them), 32-bit computers will enter a timekeeping overflow for the first time, as the Unix epoch is set to January 1, 1970 (setting a phone's date back to that bricks some of them, by the way).
This has nothing to do with floating point numbers. time_t is an integer type.
The Unix time surpasses 2,147,483,647, which is the signed 32-bit limit. For these computers, it rolls over to -2,147,483,648, which corresponds to December 1901. For 64-bit computers, the first overflow won't happen for billions of years.
Version 5.6 of Linux introduced support for 64-bit time_t on 32-bit architectures three years ago. NetBSD and OpenBSD have supported it for even longer (11 and 9 years, respectively).
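You can check what your own build uses with a one-liner:

C:
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Prints 64 on a Y2038-safe system, 32 on a legacy one. */
    printf("time_t is %zu bits\n", sizeof(time_t) * 8);
    return 0;
}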
 
A 64-bit number is the largest integer a modern CPU can natively work with in a single instruction, but if you just want to work with 128-bit numbers that's fine too; you just need to use more instructions to perform additions. Internally, you'd allocate two 64-bit numbers and use the second one for overflow: when adding, you add to the low word first, and if it wraps around you carry a 1 into the high word. To display the result, you then have to write some extra code to combine the two words into a human-readable format. Most higher-level programming languages have APIs that do this for you; in C#, for example, BigInteger handles it, presumably by keeping an internal array of smaller integers so it can represent arbitrarily large numbers.
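Here's a minimal sketch of that add-with-carry idea in plain C (the type and function names are made up for illustration):

C:
#include <stdint.h>
#include <stdio.h>

/* A 128-bit unsigned integer built out of two 64-bit words. */
typedef struct {
    uint64_t lo;   /* low 64 bits                        */
    uint64_t hi;   /* high 64 bits, catches the overflow */
} u128;

/* Add b to a, carrying into the high word when the low word wraps around. */
static u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;              /* unsigned add wraps modulo 2^64        */
    uint64_t carry = (r.lo < a.lo);  /* it wrapped iff the result got smaller */
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main(void)
{
    u128 x   = { UINT64_MAX, 0 };    /* largest value that fits in one word */
    u128 one = { 1, 0 };
    u128 y   = u128_add(x, one);     /* low word wraps to 0, high word becomes 1 */
    printf("hi=%llu lo=%llu\n", (unsigned long long)y.hi, (unsigned long long)y.lo);
    return 0;
}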

I'm not sure about the technical reason for the Far Lands, but I have experienced strange artifacts when coding graphics: if you get too far from your origin, the errors become very pronounced. The solution can be to change how you render, moving the world around the player as opposed to the player around the world, or to find a convenient point to simultaneously teleport the player and the world that surrounds them so as to keep the margin of error low.
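A rough sketch of that rebasing trick, assuming a simple vector type (all of the names here are made up for illustration):

C:
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* If the player has drifted too far from the origin, shift the player and
   every world object back by the same offset. Relative positions don't
   change, but the coordinates stay small and float precision stays high. */
static void rebase_origin(Vec3 *player, Vec3 *objects, int count, float threshold)
{
    if (fabsf(player->x) < threshold &&
        fabsf(player->y) < threshold &&
        fabsf(player->z) < threshold)
        return;                               /* still close enough to the origin */

    Vec3 offset = *player;                    /* amount to shift everything by */
    player->x -= offset.x;  player->y -= offset.y;  player->z -= offset.z;

    for (int i = 0; i < count; ++i) {         /* move the world, not the player */
        objects[i].x -= offset.x;
        objects[i].y -= offset.y;
        objects[i].z -= offset.z;
    }
}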
 
If you're not dealing exclusively with whole numbers, simply multiply everything in your program by ten until every number is whole. This has numerous benefits, including but not limited to not having to deal with bullshit goddamn floats.

If you ever feel the need to use something like pi or e, stop! These are letters, not numbers. Circles are retarded anyway; opt for squares or triangles instead.
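For what it's worth, the multiply-until-whole trick is just fixed-point arithmetic, and it genuinely works for things like money. A tiny sketch (the scale factor of 100 is an arbitrary choice for the example):

C:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Store cents as integers instead of dollars as floats. */
    int64_t price_cents = 1999;                     /* $19.99 scaled by 100 */
    int64_t quantity    = 3;
    int64_t total_cents = price_cents * quantity;   /* exact, no float rounding */

    printf("total = $%lld.%02lld\n",
           (long long)(total_cents / 100),
           (long long)(total_cents % 100));         /* prints total = $59.97 */
    return 0;
}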
 
It is widely understood in video games such as Minecraft that going out far enough makes the game act stranger and stranger. Hence the famous artifact known as the Far Lands, which would generate at around 12,550,820 blocks from spawn, along with noticeable precision loss when placing objects. This led to a series of mods expanding Minecraft's limits to see the full extent of the structure: the further you go, the more it breaks down into fringes, out to around a decillion blocks from spawn. That's well past the 64-bit integer limit (I don't know how they do it, but they do).

Shoulda used a quadtree with local coordinates per quad, and he never would have run out of space.
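A rough sketch of the local-coordinates idea (a flat grid of cells rather than a full quadtree, and every name here is hypothetical): keep the cell index as exact integers and only the offset inside the cell as a float, so the float never gets large enough to lose precision.

C:
#include <stdint.h>
#include <math.h>

#define CELL_SIZE 1024.0f

typedef struct {
    int64_t cell_x, cell_z;     /* which cell you're in, exact integers       */
    float   local_x, local_z;   /* offset inside the cell, stays in [0, 1024) */
} Position;

/* Roll whole cells out of the float offset and into the integer part. */
static void normalize(Position *p)
{
    float shift_x = floorf(p->local_x / CELL_SIZE);
    float shift_z = floorf(p->local_z / CELL_SIZE);
    p->cell_x  += (int64_t)shift_x;
    p->cell_z  += (int64_t)shift_z;
    p->local_x -= shift_x * CELL_SIZE;
    p->local_z -= shift_z * CELL_SIZE;
}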
 
[Attached screenshot: 2023-09-06_10.37.09.png]

Until fairly recently, around 2022 (or maybe 2016), nobody had ever seen this artifact in Minecraft Java Edition. It shows up much earlier in Bedrock Edition, but until then nobody knew how it happened.
 
What keeps me awake at night is floating-point determinism. Yes, the inaccuracy of floating-point is nasty, but consider how different computers will compute certain transcendental operations differently. This can be solved by software implementation of course, but it still makes me wince.

I wonder how easy it would be to get perfectly reproducible GPGPU physical simulations that can be compatible across card manufacturers. It would be extremely cool but probably very difficult due to shitty drivers doing retarded things like reordering FP ops.
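Reordering really does change the answer, since float addition isn't associative. A tiny demonstration:

C:
#include <stdio.h>

int main(void)
{
    float a = 1e20f, b = -1e20f, c = 1.0f;

    float left  = (a + b) + c;   /* (1e20 - 1e20) + 1 = 1                   */
    float right = a + (b + c);   /* 1e20 + (-1e20 + 1) = 0, the 1 gets lost */

    printf("left=%f right=%f\n", left, right);   /* prints 1.000000 and 0.000000 */
    return 0;
}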
 
if you just want to work with 128-bit numbers that's fine too; you just need to use more instructions to perform additions.
Modern CPUs do have 128-bit registers (the SIMD registers that AES hardware acceleration uses, since AES processes data in 128-bit blocks), but plain 128-bit integer addition is still done with a pair of 64-bit instructions: an add followed by an add-with-carry. Compilers like GCC and Clang will emit that pair for you if you use their __int128 extension.
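A minimal sketch of that (unsigned __int128 is a GCC/Clang extension, not standard C):

C:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The compiler lowers these additions to add + adc on x86-64. */
    unsigned __int128 x = (unsigned __int128)UINT64_MAX + 1;   /* 2^64 */
    unsigned __int128 y = x + x;                               /* 2^65 */

    /* printf has no format for __int128, so print it as two 64-bit halves. */
    printf("hi=%llu lo=%llu\n",
           (unsigned long long)(y >> 64),
           (unsigned long long)(uint64_t)y);                   /* hi=2 lo=0 */
    return 0;
}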
 