Tags: c++, time, casting, double, bit-manipulation

Is there a reliable method of passing microsecond time in a double?


I'm trying to work through a rather annoying issue: I need to report data with sub-millisecond-precision timestamps. The problem is that the timestamp field of the message is a typedefed double. I am only packing this message for a single receiving client, but the message is part of a standardized data service, so I cannot modify the field any time soon.

I'm considering the consequences of packing two 32-bit values, representing UTC seconds and microseconds, into a double as shown below. I don't really like it, but it does seem to work. What I can't figure out is why it does not work when I reverse the seconds and microseconds: if I pack them the other way, the seconds are always right, but the microseconds are off by seemingly random amounts of (+/-) 1 to 500 us.

#include <cstdint>
#include <iostream>
#include <sys/time.h>

timeval tv;
gettimeofday(&tv, nullptr);
std::cout << " Original seconds: " << tv.tv_sec << "\n";
std::cout << " Original useconds: " << tv.tv_usec << "\n";
// Cast before shifting: tv_usec may be a 32-bit type, and shifting it
// left by 32 without widening first is undefined behavior.
uint64_t ts = (((uint64_t)tv.tv_usec << 32) | (uint32_t)tv.tv_sec);
double dts = (double)ts;
unsigned useconds = (unsigned)((uint64_t)dts >> 32);
unsigned seconds = (unsigned)((uint64_t)dts & 0x00000000FFFFFFFF);
std::cout << " Final seconds: " << seconds << "\n";
std::cout << " Final useconds: " << useconds << "\n";
std::cout << "diff: " << useconds - tv.tv_usec << "\n";

After looking at IEEE 754, I'm even more confused: if any bits were going to be misrepresented, I'd expect it to be the high bits.

Anyway, I'd love to hear an explanation for this behavior, and any advice on how to accomplish this reliably.


Solution

  • To see why this happens, you have to consider what the value of ts will be prior to being converted into a double. In the code here, with

    uint64_t ts = (((uint64_t)tv.tv_usec << 32) | (uint32_t)tv.tv_sec);
    

    the value of ts will be at most about 1000000 · 2^32, which is just under 2^52. A 64-bit double uses a 53-bit mantissa to store the digits of the stored value. This means that for values less than 2^53, the double will be accurate to the nearest integer, so no data will be lost upon conversion to double. For larger values, a double can only store 53 bits of precision, preventing some whole numbers from being represented.

    When you switch the microseconds and seconds, the value of ts becomes about (1.6 billion) · 2^32, or about 2^62. Because only 53 binary digits of precision are stored upon conversion to double, this number is rounded to the nearest multiple of 2^10 = 1024, an error of up to 2^9 = 512 in the low word holding the microseconds. That is the seemingly random (+/-) 1 to 500 us error you're seeing.

    If I'm understanding correctly what you're trying to do, you need microsecond accuracy in your timestamps, so have you considered just calculating the total timestamp in microseconds, like this?

    // Widen before multiplying to avoid overflow in 32-bit arithmetic.
    uint64_t ts = (uint64_t)tv.tv_sec * 1000000 + tv.tv_usec;
    

    This would guarantee microsecond precision, without getting anywhere close to the 2^53 limit below which a double is perfectly precise.