I did some timing tests and also read some articles like this one (last comment), and it looks like in a Release build, `float` and `double` values take the same amount of processing time.
How is this possible? When `float` is less precise and smaller than `double`, how can the CLR process `double`s in the same amount of time?
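For reference, here is a minimal sketch of the kind of timing test described above. The loop body, iteration count, and method names are illustrative assumptions, not the asker's actual code; the point is simply that both loops do identical arithmetic in their respective types:

```csharp
using System;
using System.Diagnostics;

class FloatVsDouble
{
    const int Iterations = 100_000_000;

    static void Main()
    {
        // Warm up the JIT so the first measurement isn't skewed by compilation.
        SumFloat(1000);
        SumDouble(1000);

        var sw = Stopwatch.StartNew();
        float f = SumFloat(Iterations);
        sw.Stop();
        Console.WriteLine($"float:  {sw.ElapsedMilliseconds} ms (result {f})");

        sw.Restart();
        double d = SumDouble(Iterations);
        sw.Stop();
        Console.WriteLine($"double: {sw.ElapsedMilliseconds} ms (result {d})");
    }

    static float SumFloat(int n)
    {
        float acc = 0f;
        for (int i = 0; i < n; i++) acc += 1.000001f; // single-precision accumulation
        return acc;
    }

    static double SumDouble(int n)
    {
        double acc = 0.0;
        for (int i = 0; i < n; i++) acc += 1.000001; // same arithmetic, double precision
        return acc;
    }
}
```

Printing the results keeps the JIT from optimizing the loops away. Run in a Release build, the two timings typically come out very close, which is what prompts the question.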
On x86 processors, at least, `float` and `double` will each be converted to a 10-byte real by the FPU for processing. The FPU doesn't have separate processing units for the different floating-point types it supports.
The age-old advice that `float` is faster than `double` applied 100 years ago when most CPUs didn't have built-in FPUs (and few people had separate FPU chips), so most floating-point manipulation was done in software. On these machines (which were powered by steam generated by the lava pits), it *was* faster to use `float`s. Now the only real benefit to `float`s is that they take up less space (which only matters if you have millions of them).
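To put that space difference in concrete terms, here's a quick back-of-the-envelope sketch (the element count is an arbitrary assumption):

```csharp
using System;

class FootprintDemo
{
    static void Main()
    {
        const int Count = 10_000_000; // "millions of them"

        // sizeof(float) == 4 and sizeof(double) == 8, so an array of
        // doubles occupies roughly twice the memory of an array of floats.
        Console.WriteLine($"float[]  ~ {(long)Count * sizeof(float) / (1024 * 1024)} MB");
        Console.WriteLine($"double[] ~ {(long)Count * sizeof(double) / (1024 * 1024)} MB");
    }
}
```

With ten million elements that's roughly 38 MB versus 76 MB, which can matter for cache behavior and memory bandwidth even when the per-operation cost is identical.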