I am trying to understand why we have polymorphism / dynamic binding with overridden methods, but not with overloaded methods. I understand that in a language like C++ or Java there is a level of indirection (indexing into a vtable) that makes this possible for overridden methods. I'm curious why the same isn't done for resolving overloaded methods; my intuition tells me we could have a similar level of indirection that determines at runtime which overloaded method to invoke, based on the runtime type of the arguments.
I’m wondering if this design decision was made for performance reasons or if there’s extra complexity I’m neglecting to consider.
I have not read the minds of the language designers, so I cannot really tell you. But I think of it this way:
As an observation (possibly a minor one), with runtime resolution of overloaded methods we could no longer statically type check the return value. Imagine we have

```java
public boolean foo(Number n);
public String foo(Integer i);
```

Now I would find it perfectly natural to call the former `foo()` like this:

```java
boolean result = foo(myNumber);
```
Now if `myNumber` happened to be an `Integer`, the latter `foo()` would be called instead. It would return a `String`, and I would get a type conversion error at runtime. I would not be amused.
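For contrast, here is a runnable sketch of what Java actually does today: overloads are resolved against the *static* type of the argument at compile time. The class and variable names are my own, and I've changed the return types to `String` so both results can be printed side by side; this is an illustration, not code from the question.

```java
public class OverloadDemo {
    // Two overloads, distinguished only by the parameter type:
    static String foo(Number n)  { return "foo(Number)"; }
    static String foo(Integer i) { return "foo(Integer)"; }

    public static void main(String[] args) {
        Number myNumber = Integer.valueOf(42); // static type Number, runtime type Integer

        // Overload resolution happens at compile time using the static type,
        // so this calls foo(Number) even though the runtime type is Integer:
        System.out.println(foo(myNumber));            // prints "foo(Number)"

        // With the static type Integer, the more specific overload is chosen:
        System.out.println(foo(Integer.valueOf(42))); // prints "foo(Integer)"
    }
}
```

Because the compiler commits to one overload per call site, it always knows the return type, and `boolean result = foo(myNumber)` can be checked statically.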
> … why then we can still have runtime polymorphism and it be considered static typing, but it wouldn't be if we did dynamic resolution of overloaded methods.
Java has both: a static type and a runtime type. When I store an `Integer` into a variable declared to be a `Number`, then `Number` is the static type and `Integer` is the runtime type (BTW, a type is not the same as a class). And you are correct: when I do `myObject.foo(arg)`, the runtime type of `myObject` decides which implementation of `foo()` gets called. Conceivably the runtime type of `arg` could have been involved in the decision too. It would get more complicated, and I am unsure about the gain.
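A small runnable sketch of that last point (class and method names are my own, invented for illustration): the runtime type of the receiver selects the *implementation*, while the static type of the argument selects the *overload*.

```java
public class DispatchDemo {
    static class Base {
        String describe(Number n) { return "Base.describe(Number)"; }
    }

    static class Derived extends Base {
        @Override
        String describe(Number n) { return "Derived.describe(Number)"; }

        // An extra overload that exists only on Derived:
        String describe(Integer i) { return "Derived.describe(Integer)"; }
    }

    public static void main(String[] args) {
        Base myObject = new Derived();   // static type Base, runtime type Derived
        Number arg = Integer.valueOf(7); // static type Number, runtime type Integer

        // The compiler resolves the call against Base, so only describe(Number)
        // is a candidate; at runtime, virtual dispatch on myObject then picks
        // Derived's override. describe(Integer) is never even considered.
        System.out.println(myObject.describe(arg)); // prints "Derived.describe(Number)"
    }
}
```

So dispatch is dynamic in exactly one dimension (the receiver) and static in all others (the arguments). Languages that also dispatch on argument runtime types do exist (multiple dispatch, e.g. via CLOS-style multimethods), but that is a considerably more complex resolution model.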