performance, unit-testing, version-control, integration-testing

What is proper practice for performance rules testing?


I know that what we're doing is incorrect/strange practice.

We have an object that is constructed in many places in the app, and delays in its construction can severely impact our performance.

We want a gate to stop check-ins which affect this construction's performance too adversely...
So what we did was create a unit test which is basically the following:

Dim myStopwatch = Stopwatch.StartNew()
Dim newMyObject = New MyObject()
myStopwatch.Stop()
Assert.IsTrue(myStopwatch.ElapsedMilliseconds < 100)

Or: Fail if construction takes longer than 100ms
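Wired into an actual test framework, that gate test might look roughly like the following (a minimal sketch assuming MSTest and a parameterless MyObject constructor; the class and method names are illustrative):

Imports System.Diagnostics
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass>
Public Class ConstructionGateTests

    <TestMethod>
    Public Sub Construction_CompletesWithin100Milliseconds()
        ' Time only the constructor call, nothing else.
        Dim myStopwatch = Stopwatch.StartNew()
        Dim newMyObject = New MyObject()
        myStopwatch.Stop()

        ' Fail the check-in gate if construction took 100 ms or more.
        Assert.IsTrue(myStopwatch.ElapsedMilliseconds < 100,
                      $"Construction took {myStopwatch.ElapsedMilliseconds} ms (limit: 100 ms).")
    End Sub
End Class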

This "works" in the sense that check-ins will not commit if they impact this performance too negatively... However it's inherently a bad unit test because it can fail intermittently... If, for example, our build-server happens to be slow for whatever reason.

In response to some of the answers: we explicitly want our gates to reject check-ins that impact this performance; we don't want to check logs or watch for trends in the data.

What is the correct way to meter performance in our check-in gate?


Solution

  • To avoid machine dependence, you could first time the construction of a "reference object" which has a known, acceptable construction time. Then compare the time to construct your object to the reference object's time.

    This may help prevent false failures on an overloaded build server, since the reference code will be slowed down as well. I'd also run the test several times and only require X% of the runs to pass, since there are many external events that can slow code down, but none that will speed it up. A sketch of this combined approach follows below.
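
Here is a minimal sketch of that combined approach, assuming MSTest, the MyObject type from the question, and a hypothetical ReferenceObject type with a known, acceptable construction cost; the run count, required pass count, and allowed ratio are placeholders to tune for your build environment:

Imports System.Diagnostics
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass>
Public Class RelativeConstructionPerformanceTests

    ' Times a single construction of the hypothetical baseline type.
    Private Function TimeReferenceConstructionMs() As Long
        Dim sw = Stopwatch.StartNew()
        Dim reference = New ReferenceObject()
        sw.Stop()
        Return sw.ElapsedMilliseconds
    End Function

    ' Times a single construction of the object under test.
    Private Function TimeMyObjectConstructionMs() As Long
        Dim sw = Stopwatch.StartNew()
        Dim subject = New MyObject()
        sw.Stop()
        Return sw.ElapsedMilliseconds
    End Function

    <TestMethod>
    Public Sub Construction_IsNotMuchSlowerThanReference()
        Const runs As Integer = 10           ' repeat to smooth out noise
        Const requiredPasses As Integer = 8  ' only require X% of the runs to pass
        Const allowedRatio As Double = 3.0   ' MyObject may take at most 3x the reference time

        Dim passes As Integer = 0
        For i As Integer = 1 To runs
            ' Treat a 0 ms reference reading as 1 ms so the allowed budget is never zero.
            Dim referenceMs = Math.Max(1L, TimeReferenceConstructionMs())
            Dim subjectMs = TimeMyObjectConstructionMs()
            If subjectMs <= referenceMs * allowedRatio Then
                passes += 1
            End If
        Next

        Assert.IsTrue(passes >= requiredPasses,
                      $"Only {passes}/{runs} runs were within {allowedRatio}x the reference construction time.")
    End Sub
End Class

Because both timings run on the same machine in the same build, an overloaded server slows the reference and the object under test together, so the ratio stays comparatively stable; the repeated runs with a pass threshold absorb the occasional one-off hiccup.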