
Measuring time takes some time, so how do I know which "point in time" is being measured?


Consider the following call:

clock_gettime(CLOCK_MONOTONIC_RAW, &ts_beg);

On my machine, that call takes 25ns on average. Should I assume that the time you get has an error of ±12.5ns? Or are such functions (and time-measuring functions in general) designed/adjusted to give you the time at function exit, or something like that? In general, what can I say about the point in time reported by the function, considering that its execution takes some time as well?
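
A minimal sketch of how such an average might be measured, assuming one simply times a large batch of back-to-back calls and divides by the iteration count (the count of 10 million below is an arbitrary choice):

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void)
{
    enum { N = 10000000 };
    struct timespec start, end, ts;

    clock_gettime(CLOCK_MONOTONIC_RAW, &start);
    for (int i = 0; i < N; i++)
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts);  /* the call being measured */
    clock_gettime(CLOCK_MONOTONIC_RAW, &end);

    double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9
                      + (end.tv_nsec - start.tv_nsec);
    printf("average per call: %.1f ns\n", elapsed_ns / N);
    return 0;
}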


Solution

  • On my machine, that call takes 25ns on average. Should I assume that the time you get has an error of ±12.5ns?

    Not even remotely close.

    First of all, CLOCK_MONOTONIC_RAW has no rate correction or drift correction, so your precision and accuracy may be arbitrarily bad. Realistically, it may be off-rate by as much as 5%; the non-raw clocks would all still behave sanely in the presence of an NTP server, but CLOCK_MONOTONIC_RAW lets you experience that rate error first-hand.

    Don't confuse the permitted 0.05% drift-rate limit relative to NTP for CLOCK_REALTIME/CLOCK_MONOTONIC (which are intended to be offset-corrected) with the actual rate factor between the raw and the NTP-synchronized clock types. The latter may exceed your expectations by a lot; the first sketch at the end of this solution shows one way to observe that rate difference directly.

    Secondly, you have no guarantees whatsoever about the granularity of that clock. It might update just once per millisecond, or at whatever granularity the backing hardware real-time clock uses (the second sketch at the end of this solution shows how to at least query the advertised resolution). In fact, on most systems CLOCK_MONOTONIC is properly interpolated between ticks of the real-time hardware clock by the use of additional timers (e.g. the CPU cycle counter), while CLOCK_MONOTONIC_RAW does not benefit from this.

    So assuming a round-trip time of 25ns, a granularity of 1ms and a drift rate of up to 5%, your actual error with CLOCK_MONOTONIC_RAW is closer to ±(1ms + 12.5ns)*1.05, which is leaps and bounds worse than what you estimated from the round-trip time alone.
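
    As a rough illustration of the rate issue, one could sample CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW around the same interval and compare the elapsed times. This is only a sketch, under the assumption that a 10-second window is long enough to make a noticeable rate error visible in the ratio:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        /* Elapsed seconds between two timespec samples. */
        static double elapsed_s(const struct timespec *a, const struct timespec *b)
        {
            return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
        }

        int main(void)
        {
            struct timespec mono0, raw0, mono1, raw1;

            clock_gettime(CLOCK_MONOTONIC, &mono0);
            clock_gettime(CLOCK_MONOTONIC_RAW, &raw0);
            sleep(10);                      /* arbitrary measurement window */
            clock_gettime(CLOCK_MONOTONIC, &mono1);
            clock_gettime(CLOCK_MONOTONIC_RAW, &raw1);

            double mono = elapsed_s(&mono0, &mono1);
            double raw  = elapsed_s(&raw0, &raw1);
            printf("monotonic: %.9f s  raw: %.9f s  ratio: %f\n",
                   mono, raw, raw / mono);
            return 0;
        }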
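
    And to see what the system at least claims for the granularity, clock_getres() reports a clock's advertised resolution. A sketch; note that the value the kernel reports need not match the effective update granularity of the backing hardware:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            struct timespec res;

            /* Ask the kernel for the advertised resolution of the raw clock. */
            if (clock_getres(CLOCK_MONOTONIC_RAW, &res) == 0)
                printf("reported resolution: %ld s %ld ns\n",
                       (long)res.tv_sec, res.tv_nsec);
            return 0;
        }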