When I create complex type hierarchies (several levels, several types per level), I like to use the final
keyword on methods implementing some interface declaration. An example:
interface Garble {
    int zork();
}
interface Gnarf extends Garble {
    /**
     * This is the same as calling {@link #zblah(int) zblah(0)}
     */
    int zblah();

    int zblah(int defaultZblah);
}
And then
abstract class AbstractGarble implements Garble {
    @Override
    public final int zork() { ... }
}
abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
    // Here I absolutely want to fix the default behaviour of zblah.
    // No Gnarf should be allowed to use 1 as the default, for instance.
    @Override
    public final int zblah() {
        return zblah(0);
    }

    // This method is not implemented here, but in a subclass:
    @Override
    public abstract int zblah(int defaultZblah);
}
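To make the hierarchy concrete, here is a minimal, self-contained sketch of the whole setup, plus a hypothetical subclass (SimpleGnarf, with arbitrary return values of my own choosing) that only has to supply zblah(int) — the final methods pin down everything else:

```java
interface Garble {
    int zork();
}

interface Gnarf extends Garble {
    int zblah();
    int zblah(int defaultZblah);
}

abstract class AbstractGarble implements Garble {
    @Override
    public final int zork() { return 42; } // fixed behaviour (42 is an arbitrary stand-in)
}

abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
    @Override
    public final int zblah() { return zblah(0); } // default is pinned to 0

    @Override
    public abstract int zblah(int defaultZblah);
}

// A concrete type: the only extension point left is zblah(int).
class SimpleGnarf extends AbstractGnarf {
    @Override
    public int zblah(int defaultZblah) { return defaultZblah + 1; }
}
```

Trying to override zork() or zblah() in SimpleGnarf is a compile error, which is exactly the point.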
I do this for several reasons, chiefly that zblah() is an instance of the template method pattern: I don't want other developers or my users to change its behaviour. So the final keyword works perfectly for me. My question is:
Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?
Why is it used so rarely in the wild?
That doesn't match my experience. I see it used very frequently in all kinds of libraries. Just one (random) example: look at the abstract classes in Guava (http://code.google.com/p/guava-libraries/), e.g. com.google.common.collect.AbstractIterator. There, peek(), hasNext(), next() and endOfData() are final, leaving just computeNext() to the implementor. This is a very common example IMO.
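A simplified sketch of that design (not Guava's actual code — the real AbstractIterator signals the end via an endOfData() sentinel, whereas this version uses null for brevity and therefore cannot hold null elements):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// The iteration protocol is final; subclasses implement only computeNext().
abstract class SimpleAbstractIterator<T> implements Iterator<T> {
    private T next;
    private boolean done;

    // Return the next element, or null to signal the end (simplification).
    protected abstract T computeNext();

    @Override
    public final boolean hasNext() {
        if (next == null && !done) {
            next = computeNext();
            if (next == null) {
                done = true;
            }
        }
        return !done;
    }

    @Override
    public final T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        T result = next;
        next = null;
        return result;
    }
}
```

Because hasNext() and next() are final, no subclass can break the lookahead bookkeeping; the only thing an implementor can get wrong is the element computation itself.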
The main reason against using final is to allow implementors to change an algorithm. You mentioned the "template method" pattern: it can still make sense to modify a template method, or to enhance it with some pre-/post-actions (without spamming the entire class with dozens of pre-/post-hooks).
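If you do want to keep the template method final while still permitting pre-/post-actions, the usual compromise is exactly those explicit hooks. A hypothetical sketch (HookedGnarf and the hook names are mine, not from the question):

```java
abstract class HookedGnarf {
    // The template method stays final, but the hooks below give
    // subclasses controlled extension points.
    public final int zblah() {
        beforeZblah();
        int result = zblah(0);
        afterZblah(result);
        return result;
    }

    protected void beforeZblah() { }            // optional pre-action
    protected void afterZblah(int result) { }   // optional post-action

    public abstract int zblah(int defaultZblah);
}
```

The trade-off is visible here: each extension point must be anticipated and spelled out, which is exactly the "dozens of pre-/post-hooks" problem once a class has many template methods.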
The main reason in favour of using final is to avoid accidental implementation mistakes, or to protect a method that relies on internals of the class which aren't specified (and may thus change in the future).
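To illustrate that last point, here is a hypothetical class (CountingStore, invented for this answer) where a method is final because it maintains an invariant that isn't part of the public contract. An overriding subclass could easily forget the bookkeeping and silently break putCount():

```java
import java.util.HashMap;
import java.util.Map;

class CountingStore {
    private final Map<String, String> entries = new HashMap<>();
    private int puts; // internal detail, not part of the public contract

    // final: every insertion must be mirrored in the statistics counter,
    // and a careless override could drop that increment.
    public final void put(String key, String value) {
        entries.put(key, value);
        puts++;
    }

    public final String get(String key) {
        return entries.get(key);
    }

    public final int putCount() {
        return puts;
    }
}
```

Since puts is private and unspecified, final keeps the invariant local: the only code that can violate it is in this file.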