GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

This point is always interesting to me. Despite the speed increases (or lack thereof), the ability of the cores to do work generally has increased. I remember first hearing about floating point numbers when reading about the OG Xbox. What are they generally, and what makes them difficult for a core to process?
Floating point is a way to represent numbers. There are many standards, but luckily modern hardware all uses the same one. If you're interested, they're not actually difficult to learn.
 

Moderately long explanation.

Remember that computers are nothing more than enormous banks of switches. Flipping one switch causes other switches to flip, and all computer logic amounts to constructing complex arrays of switches that do interesting things when you flip them in various patterns.

Now, in base two, using five bits, the number 5 is represented thus:
0 0 1 0 1

This is ten:
0 1 0 1 0

This is twenty:
1 0 1 0 0

So to multiply 5 x 2, you can see all we had to do was shift everything to the left. To divide by 2, we shift to the right. You might guess that such an operation does not take a very complex array of switches, and you are right. Doing math with integers can be done with comparatively simple circuits. Doing math where the decimal can slide all over the place (watch the video @snov linked) is more complicated and requires a much more complicated array of switches. It is so complex that, back when CPUs were manufactured on a 1500 nm process, you couldn't even fit all the needed transistors on the chip to do floating-point math. You had to have a whole separate accelerator just to do that. Here is an example of such an accelerator. This one was made by Intel in 1986, probably on their 1500 nm node.

[attached image: Intel floating-point coprocessor chip]
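The shift trick from above is easy to play with yourself. A quick Python sketch, printing the same 5-bit strings from the post:

```python
# Multiplying or dividing by powers of two is just a bit shift.
five = 0b00101           # 5
print(five << 1)         # shift left once: multiply by 2 -> 10
print(five << 2)         # shift left twice: multiply by 4 -> 20
print((five << 2) >> 1)  # shift right once: divide by 2 -> 10

# The same values as 5-bit strings, matching the post above:
for n in (5, 10, 20):
    print(format(n, '05b'))
```

This only works for powers of two, of course; general multiplication is built out of shifts and adds.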

Floating-point arithmetic is extremely important for precise calculations. Physics simulations and CAD software all use floating-point math heavily. 3D games from the 1990s actually do not do much floating-point math. They tend to rely on integer math instead. Turns out 32-bit integers are plenty precise to do 3D calculations when you're drawing to a 320x240 target. However, modern games typically use floating-point math to transform and manipulate 3D geometry.
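To give a flavor of how those 90s engines got fractional math out of integer hardware, here's a toy 16.16 fixed-point sketch in Python (the exact formats varied per engine; this is just an illustration, not any particular game's code):

```python
# Toy 16.16 fixed-point arithmetic: an ordinary integer whose
# low 16 bits hold the fractional part.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    # The product of two 16.16 numbers has 32 fractional bits,
    # so shift back down by 16 to renormalize.
    return (a * b) >> FRAC_BITS

def to_float(a: int) -> float:
    return a / ONE

a = to_fixed(1.5)
b = to_fixed(2.25)
print(to_float(fixed_mul(a, b)))  # 3.375, using only integer ops
```

Every operation there is an integer multiply, add, or shift, which is exactly what cheap 90s silicon was good at.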

A floating point unit can ordinarily do one operation every clock cycle. So if you want to add or multiply pairs of numbers, sans any other optimizations, a 3.5 GHz CPU can do that 3.5 billion times a second, and a 100 MHz CPU can do that 100 million times a second.
 
I'm stuck in the never-ending loop of waiting for a newer/better waste of money for a GPU. Was considering the 4080 Super, but I'm not super sold, plus it's a fuckin grand

I hate computers
This is the new normal with GPUs, especially for people like me who buy whatever the best mid-range value is.
 
Ok, this makes enough sense. Pretty interesting to see that you needed a separate chip back then to do floating point math in a timely manner, kinda like a dGPU.
We have really good analogies for this in modern hardware. Neural accelerators, for example. It's possible to run inferencing on conventional CPUs, but it's terribly slow. What's been done the last few years is putting neural accelerators on M.2 cards, like the Google Coral, but the new generations of consumer processors will have these components built in, much the same way floating point accelerators were handled back in the day.

Most home users didn't need floating point hardware, so they could just not plug one in, and the few times they needed to do floating point, the computer would just do it (terribly slowly) in software. Think of the difference between drawing a curve with pen and paper vs. simply typing the formula into a graphing calculator. The processor could buckle down and do the math with pen and paper if it had to; it would just take a bit longer, which was fine as long as you weren't doing floating point all the time the way the engineers and academics who needed the accelerators were. Then later, floating point became common enough even for home users that it made sense to just integrate the FPU into the processor. Just replace floating point with inferencing and you have basically the same situation today.
 
I mean, it makes enough sense. It's the way of technology in a lot of ways. First this floating point stuff was purely industrial because the chips were just so big. Then things shrunk: got smaller, faster, enough to make it feasible to make consumer-grade versions. Funny enough, it reminds me of the microwave oven. They used to be monsters. Now they come built into every house.
 
Does anyone else get all giddy after maxing out all the storage options your motherboard offers? Through the 5ish years I've had my Ryzen computer, I've populated all avenues for storage that my case can handle. All 6 of my SATA ports are used, and all free PCIe lanes have been populated by NVMe drives. The onboard NVMe slot has also been populated, obviously. The computer finally feels complete.
 
Yes, I'm autistic about using all my storage. I have 2 NVMe drives, 2 SATA SSDs, and 4 hard drives for 32TB of storage that I'm always trying to use up. I have 2 more SATA ports, but I'm too cheap to buy more cables.
 

While doing a lot of research for the new PC I just built, I realized that I could buy the top processor and RAM capacity that my old PC's motherboard would support for $25 on eBay and figured I'd be stupid NOT to do it. Well, it ended up being way more of a performance boost than I'd expected, to the point that I probably could've squeezed a couple more years out of it if I'd known.

But it's sure gonna be one hell of a server and torrent box for years to come.
 
I don’t use SATA in my computer, only in my server, but I do have four 2TB M.2 SSDs in a storage pool. Is quick.
 
I did that once for my computer with a core V1 case. Had spots for two 2.5" drives and two 3.5" drives and filled them all plus the NVMe SSD. Eventually I realized I didn't need that much storage space or complexity.
 
Storage?
[attached image]
I did finally break down and upgrade from a pair of 8-port controllers plus 4 motherboard ports to a 24-port controller, to free up a slot so I could shove a spare video card in it so it can do AI in its spare time.
That's the backup server, where all the old drives go and get snapraided. The main one has higher density drives and only 12+4 slots.
 
I remember hearing about floating point numbers first when reading about the OG Xbox. What are they generally and what makes them difficult for a core to process?
Other people have explained how they work and what they're used for, but what makes them harder to process than integers comes down to the fact that the actual digital logic for doing floating point operations is an order of magnitude or more complex.

For example, in integer representation, a full adder circuit is 5 logic gates per bit (2 XORs, 2 ANDs, and an OR). Subtraction is also easy - you simply flip all the bits of the subtrahend, add one, and then add that value to the integer you're subtracting from (this entire scheme is called two's complement if you want to look into the elegance of representing integers in binary).
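The flip-and-add-one trick can be sketched in a few lines of Python, simulating an 8-bit machine word by masking:

```python
# Two's-complement subtraction in 8 bits: a - b == a + (~b + 1),
# with results masked back down to the word width.
BITS = 8
MASK = (1 << BITS) - 1

def twos_complement_sub(a: int, b: int) -> int:
    negated_b = (~b + 1) & MASK    # flip all bits of b, then add one
    return (a + negated_b) & MASK  # an ordinary adder does the rest

print(twos_complement_sub(20, 5))  # 15
print(twos_complement_sub(5, 20))  # 241, i.e. -15 interpreted as unsigned
```

The point is that the hardware never needs a dedicated subtractor: the same adder circuit handles both operations.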

In contrast, a floating-point adder is not simple. More values need to be kept track of, and bit strings need to be shifted around (this shifting logic by itself contains more gates than a complete 32-bit full adder circuit). IEEE-754 has its own elegance, but it is innately much more complex.
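You can actually peek at the three fields the FPU has to juggle. This Python snippet unpacks a value as an IEEE-754 single (binary32): 1 sign bit, 8 biased exponent bits, 23 mantissa bits:

```python
import struct

# Reinterpret a float's bits as an IEEE-754 single to expose the
# sign, exponent, and mantissa fields.
def float_fields(x: float):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF     # biased by 127
    mantissa = bits & ((1 << 23) - 1)  # implicit leading 1 not stored
    return sign, exponent, mantissa

print(float_fields(1.0))   # (0, 127, 0): +1.0 x 2^(127-127)
print(float_fields(-2.5))  # (1, 128, 2097152): -1.25 x 2^(128-127)
```

Just to add two such numbers, the hardware has to compare exponents, shift one mantissa to align it, add, renormalize, and round - that's where all the extra gates go.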
 