I understand that the fastest type between float and double depends on the native ALU implementation, which is usually based on double precision. When you compute in the other precision, the ALU has to perform precision conversions all the time.
So, why has the standard chosen float for representing the load_factor? I guess it's to save memory in containers of hash tables, but I would like to know if there are stronger reasons for that.
This happened in revision 3 of the original proposal:
Changed load factor operations to use float instead of double
The rationale is later given (under "E. Control of Hash Resizing"):
Should the floating-point parameter be of type float, or of type double? It makes very little difference. On the one hand, double is typically the "natural" floating-point type that is used in the absence of a strong reason to the contrary. On the other hand, float may allow the hash table implementation to save some space, and may alert users to the fact that the value will not be used in any context that involves high precision. I have chosen float.
So basically what you said.
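You can see that choice directly in the container interface today. Here is a minimal usage sketch with std::unordered_map (the key/value types are arbitrary, chosen only for illustration):

    #include <iostream>
    #include <unordered_map>

    int main() {
        std::unordered_map<int, int> m;

        // Both observers traffic in float, as does the setter.
        float lf  = m.load_factor();      // size() / bucket_count()
        float mlf = m.max_load_factor();  // threshold that triggers rehashing

        m.max_load_factor(0.75f);         // takes a float, not a double

        std::cout << lf << ' ' << mlf << '\n';
    }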
As for performance, there's a mention of this and how it doesn't really matter in the grand scheme of things (albeit in the context of defending the use of floating-point versus integers):
The cost of a runtime floating-point parameter is one floating-point multiplication at every rehash (not every insertion). Even with incremental hashing, this is almost certain to be dwarfed by the cost of a rehash.
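To make that cost model concrete, here is a hedged sketch (purely illustrative, not any particular library's implementation; all names are made up) of how the single multiplication can be confined to rehash time by caching an integral threshold:

    #include <cstddef>

    // Illustrative only: the floating-point multiplication happens once per
    // rehash, while the per-insertion check stays an integer comparison.
    struct table_state {
        std::size_t bucket_count;
        float       max_load_factor;
        std::size_t element_threshold; // cached: bucket_count * max_load_factor
    };

    // Called once per rehash: the single floating-point multiplication lives here.
    void recompute_threshold(table_state& t) {
        t.element_threshold = static_cast<std::size_t>(
            static_cast<float>(t.bucket_count) * t.max_load_factor);
    }

    // Called per insertion: plain integer comparison, no floating point at all.
    bool needs_rehash(const table_state& t, std::size_t new_size) {
        return new_size > t.element_threshold;
    }

Under that scheme the precision of the parameter (float versus double) only affects a computation that is already dwarfed by the cost of moving every element into new buckets.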