Tags: java, oop, functional-programming, polymorphism, dynamic-dispatch

When is dynamic polymorphism necessary (compared with static polymorphism)?


I come from functional languages (e.g. Haskell), and I really enjoy using typeclasses to achieve polymorphism, which is also a structured approach to ad-hoc overloading.

However, recently I've started to understand OOP's way of modelling real problems, and I'm curious why we need dynamic polymorphism in OOP languages (such as Java). In my experience, most function calls can be resolved at compile time, since many functional languages do not support subtyping.

So my question is: in what kind of situation do we have to use dynamic polymorphism instead of compile-time polymorphism? My guesses are:

  • When we use a subtype system and have objects whose actual type we cannot determine (e.g. a container holding objects of various types; but in this situation, why not use an algebraic data type or a union type to model the container's element type?).
  • When we only have the object and do not know which concrete implementation of its methods to call, so we have to rely on the vptr table to dispatch.

Solution

  • With many people's help, I've now got some of the answers I wanted after reflecting on a lot of designs. Since Rust has nice support for both static and dynamic polymorphism, I shall use Rust in this answer to demonstrate my points.

    I now have two points in favor of dynamic dispatch: user-friendly scalability and smaller compiled size.

    Point 1: user-friendly scalability

    Many people argue that dynamic dispatch is suitable for situations where you have a container collecting various kinds of objects (of different types, of course). For example:

    trait MyTrait {
      fn method(&self, ...) -> ReturnType;
    }
    
    type MyContainer = Vec<Box<dyn MyTrait>>;
    
    fn main() {
      ...
      //the_container : MyContainer
      the_container.iter().map(... { x.method(...) } ...) //only demo code
    }
    

    In the code above, at compile time we only know that the elements are trait objects, which means the program has to use a vptr-like strategy to find the right method implementation while executing the expression in the main function.
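
    For concreteness, here is a minimal compilable sketch of this approach; the Shape, Circle and Square names are made up for illustration and are not part of the original example:

    trait Shape {
      fn area(&self) -> f64;
    }
    
    struct Circle { radius: f64 }
    struct Square { side: f64 }
    
    impl Shape for Circle {
      fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
    }
    
    impl Shape for Square {
      fn area(&self) -> f64 { self.side * self.side }
    }
    
    fn main() {
      // The concrete types are erased: each call to `area` goes through the vtable.
      let the_container: Vec<Box<dyn Shape>> = vec![
        Box::new(Circle { radius: 1.0 }),
        Box::new(Square { side: 2.0 }),
      ];
      let areas: Vec<f64> = the_container.iter().map(|x| x.area()).collect();
      println!("{:?}", areas);
    }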

    However, there's another way to implement nearly the same thing:

    enum MyContainerType {
      Type0(A),
      Type1(B),
      ...
    }
    
    impl MyTrait for MyContainerType {
      fn method(&self, ...) -> ReturnType {
        match self {
          MyContainerType::Type0(a) => {...},
          MyContainerType::Type1(b) => {...},
          ...
        }
      }
    }
    
    type MyContainer = Vec<MyContainerType>;
    
    fn main() {
      ...
      //my_container : MyContainer
      my_container.iter().map(... { x.method(...) } ...); //demo
    }
    

    In this way, no dynamic polymorphism is required. However, consider the following situation: you are the user of a library that has already been designed, and you have no way to change definitions such as the enums inside the library. Now you want to add your own kind of container element and reuse the existing logic. If the library uses dynamic dispatch, the work is simple: just write another impl of the trait for your custom type and everything works (see the sketch below). Unfortunately, if you are using the static version of the library, it may become rather hard to achieve this goal...
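
    Here is a hedged sketch of what that user-side extension could look like; the Shape trait and the ExoticShape type are hypothetical names, and in the real scenario the trait would live in the library crate:

    // In the real scenario this trait would come from the library crate:
    pub trait Shape {
      fn area(&self) -> f64;
    }
    
    // A type the library authors never anticipated, defined in user code:
    struct ExoticShape {
      sides: u32,
    }
    
    impl Shape for ExoticShape {
      fn area(&self) -> f64 {
        // whatever custom logic the user needs
        self.sides as f64
      }
    }
    
    fn main() {
      // The user's type slots into the same heterogeneous container
      // without any change to the library's definitions.
      let the_container: Vec<Box<dyn Shape>> = vec![Box::new(ExoticShape { sides: 7 })];
      println!("{}", the_container[0].area());
    }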

    Point 2: Smaller compiled size

    Languages like Rust may have to compile a generic function many times, once for each type it’s used with. This could make the binary large, a phenomenon called code bloat in C++ circles. Let's consider a simpler case:

    trait MyTrait {
      fn method(&self, ...) -> ReturnType;
    }
    
    fn f(x: MyTrait) { ... } //MyTrait is unsized, so this is invalid Rust code (see the Edit below)
    fn g<T: MyTrait>(x: T) { ... }
    

    If you have lots of functions like g, the compiled binary may become larger and larger. However, this should not be a big issue, since most of us have the luxury of ignoring code size thanks to today's plentiful memory.
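
    As a rough illustration of the trade-off (the Describe trait and the concrete types here are made up), the generic version below is monomorphized into one copy per type it is called with, while the dyn version is compiled exactly once:

    trait Describe {
      fn describe(&self) -> String;
    }
    
    impl Describe for i32 {
      fn describe(&self) -> String { format!("an i32: {}", self) }
    }
    
    impl Describe for &str {
      fn describe(&self) -> String { format!("a str: {}", self) }
    }
    
    // Static dispatch: the compiler emits one specialized copy of `g`
    // for i32 and another for &str (monomorphization).
    fn g<T: Describe>(x: T) -> String {
      x.describe()
    }
    
    // Dynamic dispatch: a single copy of `f` exists in the binary; the call
    // goes through the trait object's vtable at runtime.
    fn f(x: &dyn Describe) -> String {
      x.describe()
    }
    
    fn main() {
      println!("{}", g(1));        // instantiates g::<i32>
      println!("{}", g("hello"));  // instantiates g::<&str>
      println!("{}", f(&1));       // same single `f` for both
      println!("{}", f(&"hello"));
    }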

    Conclusion

    In short, although static polymorphism has many advantages over dynamic polymorphism, there are still corners where dynamic dispatch works better. Personally, I really love the Haskell-like way of treating polymorphism (that's also why I like Rust). I don't think this can be the final, complete answer; discussion is welcome!

    Combining strategies

    It suddenly occurred to me: why not combine the static and dynamic strategies? To allow users to further extend our model, we can just leave a small hole for users to fill in later, like:

    trait Custom {
      fn method(&self) -> ReturnType;
    }
    
    enum Object {
      A(Type0),
      B(Type1),
      ...
      Hole(Box<dyn Custom>)
    }
    

    However, with this approach some operations like clone may be a little harder to implement, but I think it is still an interesting idea.
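
    A hedged sketch of how dispatch could look with this hybrid approach; Type0, Type1 and UserType are placeholder names standing in for whatever concrete types the library and its user actually define:

    trait Custom {
      fn method(&self) -> String;
    }
    
    struct Type0;
    struct Type1;
    
    enum Object {
      A(Type0),
      B(Type1),
      Hole(Box<dyn Custom>),
    }
    
    impl Object {
      fn method(&self) -> String {
        match self {
          // Known variants are dispatched statically by the match.
          Object::A(_) => "built-in A".to_string(),
          Object::B(_) => "built-in B".to_string(),
          // Only the user-supplied escape hatch pays for dynamic dispatch.
          Object::Hole(custom) => custom.method(),
        }
      }
    }
    
    struct UserType;
    impl Custom for UserType {
      fn method(&self) -> String { "user extension".to_string() }
    }
    
    fn main() {
      let objects = vec![Object::A(Type0), Object::B(Type1), Object::Hole(Box::new(UserType))];
      for o in &objects {
        println!("{}", o.method());
      }
    }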

    Update

    Haskell's existential types have a similar function and implementation to dynamic polymorphism in OOP languages:

    {-# LANGUAGE ExistentialQuantification #-}
    
    data Obj = forall a. (Show a) => Obj a
    
    xs :: [Obj]
    xs = [Obj 1, Obj "foo", Obj 'c']
    
    doShow :: [Obj] -> String
    doShow [] = ""
    doShow ((Obj x):xs) = show x ++ doShow xs
    

    I also found that existential types can be used to hide some details of a type and provide a cleaner interface for users.
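
    In Rust, a roughly analogous way to hide a concrete type behind a small public interface is to hand out a boxed trait object; the Logger and StderrLogger names below are made up for illustration:

    trait Logger {
      fn log(&self, msg: &str);
    }
    
    struct StderrLogger;
    
    impl Logger for StderrLogger {
      fn log(&self, msg: &str) {
        eprintln!("[log] {}", msg);
      }
    }
    
    // Callers only see "something that implements Logger";
    // the concrete StderrLogger stays a hidden implementation detail.
    fn make_logger() -> Box<dyn Logger> {
      Box::new(StderrLogger)
    }
    
    fn main() {
      let logger = make_logger();
      logger.log("details of the concrete type are hidden");
    }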

    Edit

    Thanks @MarkSaving. There is a mistake in Point 2's code: the dyn trait object is unsized, and therefore should be changed to a reference or a boxed dyn:

    fn f(x: Box<dyn MyTrait>) { ... }