I wrote some simple code in the Julia language:
a = BigFloat("10", precision=400)^(-10)
println(a)
The output is:
1.000000000000000000000000000000000000000000000000000000000000000000000000000003e-10
As you can see, there is a "3" after all the zeros, and this is an error.
Is this an inherent issue with BigFloat types, or is it something about the way Julia does calculations?
I have been reading the documentation, but did not find anything about this there.
The test was made on the site replit/julia.
It doesn't matter what precision you set; there is nothing wrong with BigFloat, nor with Julia. Binary floating-point values cannot represent 0.1 (or 0.01, 0.001, etc.) exactly. If you try a regular Float64 value, you get
julia> BigFloat(0.1)
0.1000000000000000055511151231257827021181583404541015625
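One way to see exactly which value Float64 stores for 0.1 (a quick check of my own, not part of the example above) is to convert it to an exact Rational and compare it with 1//10:

julia> Rational(0.1)            # the exact dyadic rational that Float64 stores for 0.1
3602879701896397//36028797018963968

julia> Rational(0.1) == 1//10   # it is not exactly one tenth
false

julia> Rational(0.1) - 1//10    # the rounding error, exactly
1//180143985094819840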
For a properly constructed BigFloat, the result is better, though still not exact:
julia> BigFloat("0.1")
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002
julia> BigFloat("0.1"; precision=1000)
0.100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002
And this never ends, not even for precision=16000 or precision=16000000.
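If you want to check that programmatically (my own sketch, not part of the examples above), you can compare against the exact rational 1//10; the comparison is exact, and it comes out false at every precision you try:

julia> BigFloat("0.1"; precision=16000) == 1//10
false

julia> any(p -> BigFloat("0.1"; precision=p) == 1//10, (256, 1000, 16000, 160000))
false

No finite precision is enough, because 1//10 has no finite binary expansion.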
If you look at the bitstring of 0.1, you can see the pattern
julia> bitstring(0.1)
"0011111110111001100110011001100110011001100110011001100110011010"
Binary fractional values must always be expressible as a sum
sum(b[n]/2^n)
where each b[n] is either zero or one. To get 0.1 exactly right, you would need to repeat the pattern 1100110011001100... forever, so you would need a vector b that is infinitely long.
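To make that concrete (again a sketch of my own, using a hypothetical helper bit(n) for the binary digits of 0.1), you can add up the first few terms of that sum and watch the partial sums creep towards 0.1 without ever reaching it:

julia> bit(n) = (n >= 4 && n % 4 <= 1) ? 1 : 0          # binary digits of 0.1: 0.0001100110011...

julia> partial(N) = sum(n -> bit(n) // big(2)^n, 1:N)   # partial sums of sum(b[n]/2^n)

julia> Float64.(partial.((5, 9, 13, 17)))
(0.09375, 0.099609375, 0.0999755859375, 0.09999847412109375)

julia> partial(400) == 1//10                            # still not exact after 400 bits
false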