I've recently started reading about the Liskov substitution principle (LSP) and I'm struggling to fully comprehend the implications of the restriction that "Preconditions cannot be strengthened in a subtype". It would seem to me that this restriction is in conflict with the design principle that suggests that one should minimize or avoid entirely the need to downcast from a base to a derived class.
That is, I start with an `Animal` class and derive the animals `Dog`, `Bird`, and `Human`. The LSP restriction on preconditions clearly fits with nature, in so far as no dog, bird, or human should be more constrained than the general class of animal. Sticking to LSP, the derived classes would then add special features, such as `Bird.fly()` or `Human.makeTool()`, that are not common to `Animal`.
It feels a bit absurd for the base class `Animal` to have virtual methods for every possible feature of every possible animal subtype, but if it doesn't, then I would need to downcast an `Animal` reference to its underlying subtype to access those unique features. This need to downcast, however, is generally considered a red flag for bad design. Wikipedia even goes so far as to suggest that it's because of LSP that downcasting is considered bad practice.
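To make the dilemma concrete, here is roughly what I have in mind (a minimal sketch; the class and method names are just illustrative):

```java
abstract class Animal {
    abstract void eat();
}

class Bird extends Animal {
    @Override void eat() { System.out.println("pecking at seeds"); }
    void fly() { System.out.println("flying away"); }            // not part of Animal
}

class Human extends Animal {
    @Override void eat() { System.out.println("having dinner"); }
    void makeTool() { System.out.println("making a tool"); }     // not part of Animal
}

class Sanctuary {
    void release(Animal animal) {
        // To let a bird fly away, I apparently have to downcast:
        if (animal instanceof Bird) {
            ((Bird) animal).fly();
        }
    }
}
```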
So what am I missing?
Bonus question: Consider again the class hierarchy of `Animal`s described above. Clearly it would be an LSP violation if `Animal.setWeight(weight)` required only a non-negative number, but `Human.setWeight(weight)` strengthened this precondition and required a non-negative number less than 1000. But what about the constructor for `Human`, which might look like `Human(weight, height, gender)`? Would it be an LSP violation if the constructor imposed the limit on weight? If so, how should this hierarchy be redesigned to respect clear boundaries on the physical properties of derived animals?
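In code, the kind of precondition strengthening I have in mind would look roughly like this (a rough sketch; the 1000 cutoff is just the example number from above):

```java
class Animal {
    protected double weight;

    void setWeight(double weight) {
        if (weight < 0) {
            throw new IllegalArgumentException("weight must be non-negative");
        }
        this.weight = weight;
    }
}

class Human extends Animal {
    // Strengthened precondition: a caller holding an Animal reference has no
    // reason to expect setWeight(2000) to fail, so substitutability is broken.
    @Override
    void setWeight(double weight) {
        if (weight < 0 || weight >= 1000) {
            throw new IllegalArgumentException("human weight must be in [0, 1000)");
        }
        this.weight = weight;
    }
}
```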
LSP is all about behavioral subtyping. Roughly speaking, `B` is a subtype of `A` if it can always be used where `A` is expected. Moreover, such usage shouldn't change the expected behavior.
So, when applying LSP, the main question is what the "expected behavior of `A`" is. In your example, `A` is `Animal`, and it is not that simple to design a useful `Animal` interface that is common to all animals.
> Sticking to LSP, the derived classes would then add special features, such as `Bird.fly()` or `Human.makeTool()`, that are not common to `Animal`.
Not quite. LSP assumes that you deal only with `Animal`s, as if it weren't possible to downcast at all. So your `Human`, `Bird`, and other animals can have any methods, constructors, or whatever; that is not related to LSP at all. They just have to behave as expected when used as `Animal`s.
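For instance (a small sketch reusing the `Animal` hierarchy from your question; `Keeper` and `feedAll` are just illustrative names), code written against `Animal` must keep working no matter which concrete subtype it receives, and the extra methods simply don't exist from its point of view:

```java
import java.util.List;

class Keeper {
    // This code depends only on the Animal contract; Bird.fly() and
    // Human.makeTool() are invisible here, and that is fine as far as
    // LSP is concerned.
    static void feedAll(List<Animal> animals) {
        for (Animal animal : animals) {
            animal.eat();   // every subtype must behave acceptably here
        }
    }
}
```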
The problem is that such interfaces are very limited. In practice, we often have to use type switching to let birds fly and people make useful tools.
Two common approaches in mainstream OOP languages are:

1. explicit downcasting after a runtime type check (`instanceof`, `dynamic_cast`, and the like);
2. double dispatch via the visitor pattern.
There is nothing wrong with downcasting in this context, because this is how you usually do type switching in languages that do not support native variant types. You can spend a lot of time introducing hierarchies of interfaces to avoid explicit downcasting, but usually it just makes code less readable and harder to maintain.
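As a sketch of the first approach (reusing the `Animal`, `Bird`, and `Human` classes from the question; `interact` is just an illustrative name):

```java
class Interactions {
    static void interact(Animal animal) {
        // Explicit type switching: check the runtime type, then downcast.
        if (animal instanceof Bird) {
            ((Bird) animal).fly();
        } else if (animal instanceof Human) {
            ((Human) animal).makeTool();
        } else {
            animal.eat();   // fall back to behavior common to all Animals
        }
    }
}
```

In newer Java versions, `instanceof` pattern matching or `switch` patterns make the same type switching more concise; the visitor pattern achieves the same effect without casts, at the cost of extra boilerplate.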