When we talk about performance, benchmarks, and execution time, we tend to say that implementation A is N percent faster or slower than implementation B, but what exactly do we mean by that?
For example, implementation A took 70 milliseconds and B took 80 milliseconds.
80 / 70 × 100 − 100 ≈ 14.29
100 − 70 / 80 × 100 = 12.5
This has always puzzled me: is there a standard or common approach here?
This is just basic arithmetic. You compute a simple ratio (how many times A fits into B). Example:
I have 10 bananas and you have 5, so I have 200% of your bananas, but you have only 50% of mine.
A's time is 70/80 = 87.5% of B's, so A is 12.5% faster than B.
B's time is 80/70 ≈ 114.3% of A's, so B is ~14.3% slower than A.
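To make the two conventions concrete, here is a small sketch (the function names `pct_faster` and `pct_slower` are my own, not from any standard library) that computes both percentages from the two timings:

```python
def pct_faster(t_a: float, t_b: float) -> float:
    """Percent reduction in time: A is this much faster than B."""
    return (1 - t_a / t_b) * 100

def pct_slower(t_a: float, t_b: float) -> float:
    """Percent increase in time: B is this much slower than A."""
    return (t_b / t_a - 1) * 100

print(pct_faster(70, 80))  # 12.5
print(pct_slower(70, 80))  # ≈ 14.2857
```

Note that the two numbers differ because they use different baselines: "faster" is measured against B's time, "slower" against A's time.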