Say we have a class Foo like:
class Foo {
    private int attr1;
    private String attr2;
    // getters and setters
    // hashCode and equals not overridden
}
So while adding references of Foo to a Set or a Map (as a key), duplicates will be identified based on their address locations (reference identity). Now if I override hashCode and equals based on attr2, duplicates will be identified based on the value of attr2. That's how duplicate filtration works in Java: look for a user-defined mechanism; if one is present, use it; otherwise fall back to the default mechanism.
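For concreteness, here is a minimal, self-contained sketch of what that looks like; the equals/hashCode bodies and the main method are my own illustration (comparing only on attr2), not part of the original question:

import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class Foo {
    private int attr1;
    private String attr2;

    Foo(int attr1, String attr2) {
        this.attr1 = attr1;
        this.attr2 = attr2;
    }

    // equals and hashCode based only on attr2, as described above
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Foo)) return false;
        return Objects.equals(attr2, ((Foo) o).attr2);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(attr2);
    }

    public static void main(String[] args) {
        Set<Foo> set = new HashSet<>();
        set.add(new Foo(1, "x"));
        set.add(new Foo(2, "x")); // same attr2, so treated as a duplicate
        System.out.println(set.size()); // prints 1
    }
}

With the default (non-overridden) equals and hashCode, the same two adds would leave the set with size 2.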
If we try to add references of Foo to a sorted collection like TreeSet or TreeMap, it will throw a ClassCastException because there is no comparison mechanism. So we can either make Foo a Comparable or supply a Comparator, and define a comparison mechanism that way.
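For reference, a sketch of both options; the class name ComparableFoo and the choice to order by attr2 are mine, purely for illustration:

import java.util.Comparator;
import java.util.TreeSet;

class ComparableFoo implements Comparable<ComparableFoo> {
    private final String attr2;

    ComparableFoo(String attr2) { this.attr2 = attr2; }

    String getAttr2() { return attr2; }

    // Option 1: the class itself defines its natural ordering
    @Override
    public int compareTo(ComparableFoo other) {
        return attr2.compareTo(other.attr2);
    }

    public static void main(String[] args) {
        // Natural ordering via Comparable
        TreeSet<ComparableFoo> byNaturalOrder = new TreeSet<>();
        byNaturalOrder.add(new ComparableFoo("b"));
        byNaturalOrder.add(new ComparableFoo("a"));

        // Option 2: ordering supplied externally via a Comparator
        TreeSet<ComparableFoo> byComparator =
                new TreeSet<>(Comparator.comparing(ComparableFoo::getAttr2));
        byComparator.add(new ComparableFoo("b"));
        byComparator.add(new ComparableFoo("a"));
    }
}

Without either of these, TreeSet.add throws a ClassCastException because the element cannot be cast to Comparable.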
So my question is: while finding duplicates, if the user hasn't defined any mechanism, Java falls back to the default one; but while sorting or comparing, it insists that the user define a mechanism. Why won't it fall back to a default mechanism there as well, for example comparing references based on their hashCode? Is it because some OOP concept, or some other concept in Java, would be violated if it went for a default comparison?
It is sensible to say, lacking any other information, that objects are the same only if they are physically the same object. This is what the default equals and hashCode do.
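A tiny demonstration of that default, identity-based behaviour (the class and values here are hypothetical):

class DefaultFoo {
    private final String attr2;
    DefaultFoo(String attr2) { this.attr2 = attr2; }
    // equals and hashCode deliberately NOT overridden

    public static void main(String[] args) {
        DefaultFoo a = new DefaultFoo("x");
        DefaultFoo b = new DefaultFoo("x");

        System.out.println(a.equals(b)); // false: different objects, even with equal state
        System.out.println(a.equals(a)); // true: physically the same object
    }
}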
It is not sensible, and indeed makes no sense, to say that one object is "bigger" than another because a digest of its memory location is bigger.
Even more damningly for your proposed mechanism, in modern Java the old adage that a hashCode is a memory location is simply incorrect. The default hashCode of an Object is, in practice, a pseudo-random number generated by the JVM, not its address.
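You can see this for yourself; the numbers printed by the sketch below differ from run to run and from JVM to JVM, which is exactly why they carry no meaningful order:

public class IdentityHashDemo {
    public static void main(String[] args) {
        // For classes that do not override hashCode, this is the same value
        // the default hashCode() returns.
        Object first = new Object();
        Object second = new Object();
        Object third = new Object();

        // Whether "first" hashes higher or lower than "second" is pure chance.
        System.out.println(System.identityHashCode(first));
        System.out.println(System.identityHashCode(second));
        System.out.println(System.identityHashCode(third));
    }
}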
So you are really proposing:
"Lacking any information about the objects to be ordered, we will order them completely at random"
I cannot think of any situation where this default behaviour would be what you actually want.