Imagine a project in which there is an interface class like the following:
struct Interface
{
    virtual void f() = 0;
    virtual void g() = 0;
    virtual void h() = 0;
};
Suppose that somewhere else, someone wishes to create a class implementing this interface, for which f, g, and h all do the same thing:
struct S : Interface
{
    virtual void f() {}
    virtual void g() { f(); }
    virtual void h() { f(); }
};
Then it would be a valid optimisation to generate a vtable for S whose entries are all pointers to S::f, thus saving a call through the wrapper functions g and h.
Printing the contents of the vtable, however, shows that this optimisation is not performed:
S s;
void **vtable = *(void***)(&s); /* I'm sorry. */
for (int i = 0; i < 3; i++)
    std::cout << vtable[i] << '\n';
0x400940
0x400950
0x400970
Compiling with -O3 or -Os has no effect, and neither does switching between clang and gcc.
Why is this optimisation opportunity missed?
At the moment, these are the guesses that I have considered (and rejected):
Such an optimization is not valid, because...
// somewhere-in-another-galaxy.hpp
struct X : S {
    virtual void f();
};

// somewhere-in-another-galaxy.cpp
#include <iostream>

void X::f() {
    std::cout << "Hi from a galaxy far, far away! ";
}
If a compiler implemented your optimization, the following code would not work:
Interface* object = new X;
object->g();
The compiler of my translation unit knows nothing about your class's internal implementation, so for g() and h() it simply copies the corresponding entries from your class's vtable into my class's vtable.