My application needs absolute timestamps (i.e. including date and time of day) with an error below 0.5 s. The server synchronises via NTP, but I still want to detect if the server clock becomes poorly synchronised for whatever reason.
My idea is to use the steady clock to validate the system clock. I assume that within a period of, say, 1 hour the steady clock deviates very little from real time (well below 0.5 s). I periodically compare the time measured with the steady and system clocks. If the difference between the two grows or jumps, it suggests that NTP is adjusting the system clock, which may mean that some of the time values read so far were incorrect.
Here is some example code:
#include <iostream>
#include <chrono>
#include <cstdint>
#include <thread>

int main() {
    const int test_time = 3600; // seconds, approximate
    const int delay = 100;      // milliseconds
    const int iterations = test_time * 1000 / delay;

    using std::chrono::duration_cast;
    using std::chrono::microseconds;

    // The two clocks have unrelated epochs and possibly different tick
    // periods, so convert both to microseconds before comparing.
    int64_t system_us = duration_cast<microseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
    int64_t steady_us = duration_cast<microseconds>(
        std::chrono::steady_clock::now().time_since_epoch()).count();
    const int64_t offset = system_us - steady_us;

    for (int i = 0; i < iterations; i++) {
        system_us = duration_cast<microseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        steady_us = duration_cast<microseconds>(
            std::chrono::steady_clock::now().time_since_epoch()).count();
        // How far the system clock has moved relative to the steady clock
        // since startup; a jump here suggests an NTP adjustment.
        int64_t deviation = system_us - offset - steady_us;
        std::cout << deviation << " µs" << std::endl;
        /**
         * Here I put code making use of system_clock
         */
        std::this_thread::sleep_for(std::chrono::milliseconds(delay));
    }
}
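If I wanted the loop to actually flag a suspect clock rather than just print the deviation, a minimal sketch could look like this; the 100 ms threshold and the helper name are arbitrary choices of mine, not fixed by the problem:

#include <chrono>
#include <cstdint>
#include <cstdlib>

// Hypothetical helper: returns true if the system clock has moved more
// than threshold_us microseconds relative to the steady clock since the
// initial offset was captured.
bool clock_suspect(int64_t offset, int64_t threshold_us = 100'000) {
    using std::chrono::duration_cast;
    using std::chrono::microseconds;
    const int64_t system_us = duration_cast<microseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
    const int64_t steady_us = duration_cast<microseconds>(
        std::chrono::steady_clock::now().time_since_epoch()).count();
    return std::abs(system_us - offset - steady_us) > threshold_us;
}

Inside the loop, if (clock_suspect(offset)) { ... } would then gate any code that relies on the absolute timestamp.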
Does this procedure make sense? What I'm not sure about in particular is the stability of the steady clock. I assume it is subject only to slight deviation due to the imperfection of whatever the internal server clock is, but maybe I'm missing something?
I was very positively surprised by the test results with the code above. Even when I let it run for 8 hours, the largest deviation I saw was only -22 µs, and it was around 1 µs for the vast majority of samples.
This question has little to do with C++.
1) Whether this method has a chance to work depends on the accuracy of your computer's internal clock. A cheap clock might drift by a minute a day, which is 2.5 s per hour and far over your 0.5 s budget.
2) The method cannot identify a systematic offset. Say you are constantly behind by a second due to network latency or similar issues: the method will still report a negligible deviation, as the sketch below illustrates.
Basically, it can only tell you whether the time measured is precise, but it says little about its accuracy (google: accuracy vs precision). The comments also mention issues with this algorithm around general clock adjustment.
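To make point 2) concrete, here is a small simulation (my own sketch, not the code from the question) in which the system clock is modeled as the steady clock plus a constant 1 s error; the deviation the method computes stays at zero no matter how long it runs:

#include <cstdint>
#include <iostream>

int main() {
    const int64_t error_us = 1'000'000; // constant 1 s systematic error

    int64_t steady_us = 0;                        // simulated steady clock
    int64_t system_us = steady_us + error_us;     // simulated system clock
    const int64_t offset = system_us - steady_us; // absorbs the error

    for (int i = 0; i < 5; i++) {
        steady_us += 100'000; // 100 ms passes on both clocks
        system_us = steady_us + error_us;
        // Deviation as computed in the question: always 0, because the
        // constant error was baked into the initial offset.
        std::cout << system_us - offset - steady_us << " µs" << std::endl;
    }
}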