Remember that computers are nothing more than enormous banks of switches. Flipping one switch causes other switches to flip, and all computer logic amounts to constructing complex arrays of switches that do interesting things when you flip them in various patterns.
Now, in base two, using five bits, the number 5 is represented thus:
0 0 1 0 1
This is ten:
0 1 0 1 0
This is twenty:
1 0 1 0 0
So to multiply 5 by 2, you can see all we had to do was shift everything one place to the left. To divide by 2, we shift to the right. You might guess that such an operation does not take a very complex array of switches, and you are right: math with integers can be done with comparatively simple circuits.
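To make that concrete, here's a tiny C sketch (the variable names are mine, purely for illustration) showing that shifting left by one doubles an integer and shifting right by one halves it:

    #include <stdio.h>

    int main(void) {
        unsigned int five = 5;            /* binary 00101 */

        unsigned int ten    = five << 1;  /* 01010 = 10: one shift left doubles it */
        unsigned int twenty = five << 2;  /* 10100 = 20: two shifts multiply by 4  */
        unsigned int two    = five >> 1;  /* 00010 = 2:  one shift right halves it,
                                             dropping any remainder                */

        printf("%u %u %u\n", ten, twenty, two);  /* prints: 10 20 2 */
        return 0;
    }

The compiler emits a single shift instruction for each of these, which is exactly that "simple circuit" at work.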
Doing math where the decimal point can slide all over the place (watch the video @snov linked) is another story, and it requires a far more elaborate array of switches. It is so complex that, back when CPUs were manufactured on a 1500 nm process, you couldn't even fit all the needed transistors on the chip to do floating-point math. You had to have a whole separate accelerator just for that. Here is an example of such an accelerator. This one was made by Intel in 1986, probably on their 1500 nm node.
Floating-point arithmetic is extremely important for precise calculations: physics simulations and CAD software both use it heavily. 3D games from the 1990s, on the other hand, actually do not do much floating-point math; they tend to rely on integer math instead (often in the form of fixed-point arithmetic, as sketched below). It turns out 32-bit integers are plenty precise for 3D calculations when you're drawing to a 320x240 target. Modern games, however, typically use floating-point math to transform and manipulate 3D geometry.
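As a rough illustration of that integer trick (this is my own sketch, not code from any particular game, and fix_mul/FIX_ONE are made-up names): treat a 32-bit integer as a 16.16 fixed-point number, with the top 16 bits holding the whole part and the bottom 16 bits holding the fraction.

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t fixed;               /* 16.16 fixed-point: 1.0 == 65536 */
    #define FIX_ONE (1 << 16)

    /* Multiply two 16.16 numbers. Widen to 64 bits so the intermediate
       product doesn't overflow, then shift back down to 16.16. */
    static fixed fix_mul(fixed a, fixed b) {
        return (fixed)(((int64_t)a * b) >> 16);
    }

    int main(void) {
        fixed half   = FIX_ONE / 2;          /* 0.5 */
        fixed three  = 3 * FIX_ONE;          /* 3.0 */
        fixed result = fix_mul(half, three); /* 1.5 */

        printf("%f\n", result / (double)FIX_ONE);  /* prints 1.500000 */
        return 0;
    }

Everything here is plain integer shifts and multiplies, which is why this style of math ran fast on CPUs that had no floating-point hardware at all.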
A floating-point unit can ordinarily do one operation every clock cycle. So if you want to add or multiply pairs of numbers, sans any other optimizations, a 3.5 GHz CPU can do that 3.5 billion times a second, and a 100 MHz CPU can do that 100 million times a second.
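Spelling out that arithmetic as a back-of-envelope sketch (assuming exactly one operation per cycle, and ignoring pipelining, SIMD, and superscalar tricks, which change the real numbers considerably):

    #include <stdio.h>

    int main(void) {
        double ops_per_cycle = 1.0;    /* the simplifying assumption above */

        double fast_hz = 3.5e9;        /* 3.5 GHz CPU */
        double slow_hz = 100e6;        /* 100 MHz CPU */

        printf("%.2e ops/s\n", fast_hz * ops_per_cycle);  /* 3.50e+09 */
        printf("%.2e ops/s\n", slow_hz * ops_per_cycle);  /* 1.00e+08 */
        return 0;
    }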