If I want to benchmark how different table definitions affect row insertion speed in SQL Server, I guess it's not sufficient to just time the transaction from BEGIN to COMMIT: that only measures the time spent appending the INSERTs to the (sequential) log. Right?
But the real I/O hit comes when the INSERTs are actually applied to the real table (a clustered index, which might get slightly reorganized by the INSERTs). How can I measure the total time used, all inclusive? That is, the time for all the INSERTs (written to the log) plus the time spent updating the "real" data structures? Is it sufficient to issue a CHECKPOINT before stopping the timer?
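For concreteness, this is the kind of naive timing I mean (the table and column names are just placeholders):

```sql
DECLARE @start DATETIME2 = SYSDATETIME();

BEGIN TRANSACTION;
    -- The INSERTs whose cost I want to compare across table definitions.
    INSERT INTO dbo.TestTable (SomeColumn) VALUES ('x');
COMMIT TRANSACTION;

-- Presumably this only reflects the sequential log writes, not the
-- later flushing of the modified data/index pages to disk.
SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;
```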
Due to the lack of responses, I will answer this myself.
As far as I can tell from various documentation, I can capture all the disk activity induced by a query by issuing a CHECKPOINT afterwards: this force-writes all dirty pages to disk.
If nothing but the query to be measured is executed, the only dirty pages will be the ones touched by that query. The experiments I have performed seem to support this "theory".
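A minimal sketch of the full measurement, under the assumption that nothing else is running against the database (table and column names are placeholders, and the row-generating SELECT is just an example workload):

```sql
-- Flush any pre-existing dirty pages so they don't pollute the measurement.
CHECKPOINT;

DECLARE @start DATETIME2 = SYSDATETIME();

BEGIN TRANSACTION;
    -- Placeholder workload: replace with the INSERTs you want to benchmark.
    INSERT INTO dbo.TestTable (SomeColumn)
    SELECT TOP (100000) 'x'
    FROM sys.all_objects a CROSS JOIN sys.all_objects b;
COMMIT TRANSACTION;

-- Force all dirty pages touched by the INSERTs to disk
-- before stopping the timer.
CHECKPOINT;

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;
```

The first CHECKPOINT establishes a clean baseline; the second one makes the timer include the cost of writing the modified data and index pages, not just the log records.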