If I have this code:
#include <vector>

class MyClass;

class Child
{
public:
    void ExecuteChild();   // defined below, once MyClass is complete
    MyClass* parent;
};

class MyClass
{
public:
    MyClass()
    {
        child = new Child();
        child->parent = this;
    }
    void ExecuteParent()
    {
        // does something
    }
    Child* child;
};

void Child::ExecuteChild()
{
    parent->ExecuteParent(); // this is the call I want inlined
}
std::vector<MyClass*> objects;
int num = GetRandomNumberBetween5and10();
for (int i = 0; i < num; i++)
{
    objects.push_back(new MyClass());
}
for (int i = 0; i < num; i++)
{
    objects[i]->child->ExecuteChild();
}
Under a modern C++ compiler with all optimizations enabled, is there any possibility for the call to ExecuteParent() inside Child::ExecuteChild() to be inlined? I'm asking because I have a very similar case in my project at a VERY performance-intensive spot, and I need to know whether there is any point in continuing with this design.
I suppose that in principle the call could be inlined: the compiler does know the class of objects[i], and none of these functions are virtual, so there is no dynamic dispatch in the way.
I would be surprised if it actually does it.
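For what it's worth, you don't have to guess; you can ask the compiler what it actually did. Here is a minimal, self-contained sketch of the same pattern (the names and file name are made up for illustration; the flags are GCC's and Clang's):

// inline_check.cpp -- a miniature of the pattern above, so you can
// inspect the optimizer's decision instead of guessing.  Build with:
//   g++ -O2 -S inline_check.cpp        (then look for "call" in inline_check.s)
//   clang++ -O2 -Rpass=inline -c inline_check.cpp   (prints inlining remarks)
struct Parent;

struct Kid
{
    void Run();          // defined out of line, once Parent is complete
    Parent* parent;
};

struct Parent
{
    void Work() { ++n; } // stand-in for ExecuteParent
    Kid kid;
    int n = 0;
};

void Kid::Run() { parent->Work(); }

int Drive(Parent& p)     // with -O2, both calls typically collapse into
{                        // Drive's body: a single increment of p.n
    p.kid.parent = &p;
    p.kid.Run();
    return p.n;
}

If the generated assembly for Drive contains no call instruction to Work, the whole chain was inlined.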
When you have what you call a "VERY performance-intensive spot", I start to get a whiff of premature optimization, which I define as solving a performance problem before you know for sure that it is one.
The thing about performance issues is that you don't know where they are; that's what makes them performance issues. They stow away in your code without your knowledge. Sometimes trying to solve an imagined performance issue creates real ones. What's more, in my experience there is never just one of them.
For example, suppose you have three performance problems, and you don't know in advance where they are:

- A takes 50% of the time,
- B takes 25% of the time,
- C takes 12.5% of the time.
So what's your strategy?
If you prematurely fix B, you will save 25%, for a 1.33x speed gain over not having fixed it. You could be happy with that, but...
If you follow a process of diagnosis, which I recommend, it doesn't mean don't fix B. It means first let the diagnosis surprise you by pointing out A. If you fix that first, you save 50%, which gives you a 2x speed gain.
What's more, when you do the diagnosis again, it says B is now taking 50% of the time, not 25%. So it not only confirms that you were right about B; fixing it now gives you another 2x speed gain, not just 1.33x. After fixing A and B, you are 4x faster. (Of course, if you fixed B first and then A, you'd end up in the same place.)
Finally, maybe you had guessed that C was a problem, but not a very big one. Now it is a big one, because you already fixed A and B: C takes 50% of the time, not 12.5%, so fixing it gives you yet another 2x speedup. Now you're 8x faster than you were at the beginning.
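To make the compounding explicit, here is that arithmetic as a tiny sketch (the 50/25/12.5 split is the hypothetical one from the example above):

#include <cstdio>

int main()
{
    const double shares[] = {0.50, 0.25, 0.125};  // problems A, B, C
    double remaining = 1.0;   // fraction of the original runtime still left

    for (double s : shares)
    {
        remaining -= s;       // fixing a problem removes its share of the time
        std::printf("remaining %.3f of original -> %.1fx speedup\n",
                    remaining, 1.0 / remaining);
    }
    // Prints 2.0x, 4.0x, 8.0x: each fix doubles the cumulative gain,
    // because each share happens to be half of what was left.
}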
This is all a result of the method you use for finding the problems you didn't anticipate; those are where the money is. Here's an example of a 730x speedup, achieved by fixing a succession of six problems, some of which were really small at the beginning, but which together added up to over 99.8% of the time.
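As for what a "diagnosis" can look like, here is one crude, hypothetical form of it: time the program's phases and let the numbers, not intuition, pick the target. (Everything in this sketch is made up for illustration; a real profiler gives far finer detail.)

#include <chrono>
#include <cstdio>

// Measure one phase of the program and return elapsed seconds.
template <typename F>
double Seconds(F&& phase)
{
    auto t0 = std::chrono::steady_clock::now();
    phase();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    volatile long sink = 0;  // keeps the dummy loops from being optimized away

    // Three made-up phases standing in for real work (parse, compute, output).
    double a = Seconds([&] { for (long i = 0; i < 400000000L; ++i) sink += i; });
    double b = Seconds([&] { for (long i = 0; i < 200000000L; ++i) sink += i; });
    double c = Seconds([&] { for (long i = 0; i < 100000000L; ++i) sink += i; });

    double total = a + b + c;
    std::printf("A %4.1f%%   B %4.1f%%   C %4.1f%%\n",
                100 * a / total, 100 * b / total, 100 * c / total);
}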
Of course, you learn from this, and you avoid the pitfalls and write faster code to start with, which I suppose you could call premature optimization :)