I was looking at the implementation of compare(double, double) in the Java standard library (Java 6). It reads:
public static int compare(double d1, double d2) {
    if (d1 < d2)
        return -1;           // Neither val is NaN, thisVal is smaller
    if (d1 > d2)
        return 1;            // Neither val is NaN, thisVal is larger

    long thisBits = Double.doubleToLongBits(d1);
    long anotherBits = Double.doubleToLongBits(d2);

    return (thisBits == anotherBits ?  0 : // Values are equal
            (thisBits < anotherBits ? -1 : // (-0.0, 0.0) or (!NaN, NaN)
             1));                          // (0.0, -0.0) or (NaN, !NaN)
}
What are the merits of this implementation?
edit: "Merits" was a (very) bad choice of words. I wanted to know how this works.
@Shoover's answer is correct (read it!), but there is a bit more to it than this.
As the javadoc for Double::equals states: "This definition allows hash tables to operate properly."
Suppose that the Java designers had decided to implement equals(...) and compare(...) with the same semantics as == on the wrapped double instances. This would mean that equals() would always return false for a wrapped NaN. Now consider what would happen if you tried to use a wrapped NaN in a Map or Collection.
List<Double> l = new ArrayList<Double>();
l.add(Double.NaN);
if (l.contains(Double.NaN)) {
    // this won't be executed: with ==-style equals, NaN never equals NaN
}
Map<Object, String> m = new HashMap<Object, String>();
m.put(Double.NaN, "Hi mum");
if (m.get(Double.NaN) != null) {
    // this won't be executed: the NaN key could never be matched
}
Doesn't make a lot of sense, does it?
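With the bit-based equals() and hashCode() that Double actually has, the same lookups behave the way you would expect. A minimal sketch (assuming the same imports as the snippets above; expected output in the comments):

List<Double> l = new ArrayList<Double>();
l.add(Double.NaN);
System.out.println(l.contains(Double.NaN));  // true: NaN.equals(NaN) compares bit patterns

Map<Object, String> m = new HashMap<Object, String>();
m.put(Double.NaN, "Hi mum");
System.out.println(m.get(Double.NaN));       // Hi mum: hashCode() is bit-based too, so the key is found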
Other anomalies would exist because -0.0 and +0.0 have different bit patterns but are equal according to ==.
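To make the signed-zero case concrete (my own sketch, not part of the original answer): == treats the two zeros as equal, while the bit-based equals() keeps them distinct, and the sign is genuinely observable in arithmetic:

System.out.println(0.0 == -0.0);                                        // true: IEEE 754 equality
System.out.println(Double.valueOf(0.0).equals(Double.valueOf(-0.0)));   // false: different bit patterns
System.out.println(1.0 / 0.0);                                          // Infinity
System.out.println(1.0 / -0.0);                                         // -Infinity, so the sign matters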
So the Java designers decided (rightly IMO) on the more complicated (but more intuitive) definition for these Double methods that we have today.
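To see the resulting total order in action, here is a small sketch (the printed results are what the bit-comparison branch in the code above produces):

System.out.println(Double.compare(Double.NaN, Double.NaN));                // 0  : doubleToLongBits collapses all NaNs to one canonical bit pattern
System.out.println(Double.compare(Double.POSITIVE_INFINITY, Double.NaN));  // -1 : NaN sorts above everything else
System.out.println(Double.compare(-0.0, 0.0));                             // -1 : -0.0 sorts just below +0.0

Double[] values = { Double.NaN, 0.0, Double.NEGATIVE_INFINITY, -0.0, Double.POSITIVE_INFINITY };
java.util.Arrays.sort(values);   // Double.compareTo uses the same ordering as compare()
System.out.println(java.util.Arrays.toString(values));
// [-Infinity, -0.0, 0.0, Infinity, NaN]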