I know that we should treat the digits before and after the decimal point differently when converting a number into binary, e.g., 0.625 should be converted into 0.101. For the digits after the decimal point, we repeatedly multiply by 2 and take the integer part at each step, as follows:
0.625 * 2 = 1.25 ---- 1
0.25 * 2 = 0.5 ---- 0
0.5 * 2 = 1 ---- 1
However, this method is not feasible for numbers like 0.1, since the loop never terminates and produces a repeating expansion like 0.0001100110011..., so truncating it loses precision.
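In Python, that loop looks roughly like this (frac_to_binary and the max_bits cutoff are just my own names for illustration):

```python
def frac_to_binary(frac, max_bits=32):
    """Convert a fraction in [0, 1) to binary digits by repeatedly
    multiplying by 2 and taking the integer part, up to max_bits digits."""
    bits = []
    for _ in range(max_bits):
        if frac == 0:        # the expansion terminated exactly
            break
        frac *= 2
        bit = int(frac)      # integer part is the next binary digit
        bits.append(str(bit))
        frac -= bit          # keep only the fractional part
    return "0." + "".join(bits)

print(frac_to_binary(0.625))  # 0.101 -- terminates after 3 bits
print(frac_to_binary(0.1))    # 0.000110011001100... -- the 0011 pattern repeats
```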
So why don't we just treat the decimal as an integer by removing the decimal point? E.g., for 0.625, we directly compute the binary representation of 625 and record the exponent, just as the float type does (10^-3 here). This method would prevent loss of precision in many cases, and it perfectly mirrors how humans calculate with decimals.
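A minimal sketch of what I mean, representing a decimal as a pair (integer mantissa, power-of-10 exponent); as far as I know this is also the idea behind decimal floating point such as Python's decimal module (to_scaled and mul are hypothetical helper names of mine):

```python
from decimal import Decimal

# A decimal as (integer mantissa, power-of-10 exponent): 0.625 -> (625, -3).
def to_scaled(s):
    """Parse a decimal string into (mantissa, exponent) meaning mantissa * 10**exponent."""
    if "." in s:
        int_part, frac_part = s.split(".")
        return int(int_part + frac_part), -len(frac_part)
    return int(s), 0

def mul(a, b):
    """Exact multiplication: multiply mantissas, add exponents."""
    return a[0] * b[0], a[1] + b[1]

x = to_scaled("0.1")
y = to_scaled("0.2")
print(mul(x, y))                        # (2, -2), i.e. exactly 0.02
print(Decimal("0.1") * Decimal("0.2"))  # 0.02 -- Python's decimal does the same
```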
If the original decimal is long, we can just cut it off at some maximum length, which wouldn't lose much precision. I don't really understand why we must use the "multiply by 2" method, which introduces errors even for simple numbers like 0.1.
I've tried converting many decimals into binary, and my method preserves precision better than the "multiply by 2" method in most cases. Please tell me what I have omitted in this process, and why I'm wrong. Thanks!
What you are proposing is called fixed point with a power-of-10 scaling factor. It's doable, and it's also used in practice to prevent rounding errors (for example, in currency computations).
However, using a normal binary representation (or power-of-2 scaling with fixed point) is faster and much more convenient (even from a HW perspective), as it simplifies many operations (*, /, pow, log, exp, ...).
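For the currency case, a common form of power-of-10 fixed point is simply counting in integer cents; a quick hypothetical sketch of the difference:

```python
# Binary floating point: repeated 0.10 additions accumulate rounding error.
total = 0.0
for _ in range(100):
    total += 0.10
print(total)              # prints something like 9.99999999999998, not exactly 10.0

# Power-of-10 fixed point: store cents as integers, scale only for display.
total_cents = 0
for _ in range(100):
    total_cents += 10     # 10 cents, exact integer arithmetic
print(total_cents / 100)  # 10.0
```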
Also, the same rounding problem arises in base 10 too; just try to write 1/3 in decimal... it's also a never-ending series of digits:
1/3 = 0.33333333333333333333333333...
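To make that concrete, here is a small sketch (the expand function and its naming are mine) that long-divides in an arbitrary base and marks the repeating cycle; the same mechanism that makes 1/3 repeat in base 10 makes 1/10 repeat in base 2:

```python
def expand(num, den, base, max_digits=30):
    """Long-divide num/den in the given base; wrap any repeating cycle in ()."""
    digits, seen = [], {}
    rem = num % den
    while rem and rem not in seen and len(digits) < max_digits:
        seen[rem] = len(digits)      # remember where this remainder first appeared
        rem *= base
        digits.append("0123456789abcdef"[rem // den])
        rem %= den
    if rem in seen:                  # same remainder again => digits cycle forever
        i = seen[rem]
        return "0." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"
    return "0." + "".join(digits)

print(expand(1, 3, 10))  # 0.(3)     -- 1/3 repeats in base 10
print(expand(1, 10, 2))  # 0.0(0011) -- 1/10 repeats in base 2
```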