Given a non-negative double-precision floating-point number x, does the square root of its square always equal x itself?
In other words, is there any loss of precision when doing the following:
x = <non-negative double>
y = x^2
z = sqrt(y)
so that:
x == z
I'm not interested in the case where the square overflows to infinity or underflows to zero, only in numbers whose square still fits in double precision.
#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 1.0000000000000001E-160;
    double square = x * x;       /* non-zero, but far below DBL_MIN */
    double root = sqrt(square);

    /* Print both values only if the round trip fails to recover x. */
    if (root != x) {
        printf("%.20g\n", x);
        printf("%.20g\n", root);
    }
    return 0;
}
outputs
1.0000000000000001466e-160
9.9999443357584897793e-161
What's happening here is that x is large enough that its square is non-zero, but small enough that its square is only representable as a denormalised (subnormal) number, which reduces the available precision.
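A quick way to confirm that the square really is denormalised is to classify it with fpclassify() from <math.h> (C99). A minimal sketch, reusing the same value of x as above:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    double x = 1.0000000000000001E-160;
    double square = x * x;   /* roughly 1e-320, far below DBL_MIN (~2.2e-308) */

    printf("DBL_MIN = %.20g\n", DBL_MIN);
    printf("x*x     = %.20g\n", square);
    printf("x*x is %s\n",
           fpclassify(square) == FP_SUBNORMAL ? "subnormal" : "not subnormal");
    return 0;
}

On an IEEE 754 system this should report the square as subnormal: at around 1e-320 it sits far below DBL_MIN, so only a handful of significand bits are left to represent it, and the subsequent sqrt cannot recover the lost digits.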
I get the impression that @MarkDickinson's comment on @LưuVĩnhPhúc's answer is largely correct, though. If both x and x*x are positive normalised numbers, then I'm not able to find examples where x != sqrt(x*x), even with a quick brute force over a few small ranges (sketched below), though this should not be considered proof. Intuitively, rounding x*x perturbs the exact x^2 by a relative error of at most 2^-53, taking the square root roughly halves that relative error, and the correctly rounded sqrt then lands back on x; a rigorous proof needs more care than that, so treat it as a heuristic.
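For what it's worth, here is the kind of brute-force check I mean, sketched in C. The sampling range [1.0, 2.0) is an arbitrary choice of mine; any range whose values and squares stay normalised should behave the same:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    long mismatches = 0;
    srand(12345);    /* fixed seed so the run is repeatable */

    for (long i = 0; i < 10000000L; i++) {
        /* Combine two rand() calls so more of the 52-bit significand varies;
           a single rand() typically supplies at most 31 random bits. */
        double u = ((double)rand() +
                    (double)rand() / ((double)RAND_MAX + 1.0))
                   / ((double)RAND_MAX + 1.0);
        double x = 1.0 + u;             /* x and x*x are both normal here */

        if (x != sqrt(x * x))
            mismatches++;
    }

    printf("mismatches: %ld\n", mismatches);
    return 0;
}

In my quick runs this reports zero mismatches, consistent with the small-range search mentioned above, but again: evidence, not proof.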