
In C, how do I calculate the signed difference between two 48-bit unsigned integers?


I've got two values from an unsigned 48-bit nanosecond counter, which may wrap.

I need the difference, in nanoseconds, of the two times.

I think I can assume that the readings were taken at roughly the same time, so of the two possible answers I think I'm safe taking the smallest.

They're both stored as uint64_t, because I don't think I can have 48-bit types.

I'd like to calculate the difference between them, as a signed integer (presumably int64_t), accounting for the wrapping.

So, for example, if I start out with

x=5

y=3

then the result of x-y is 2, and will stay so if I increment both x and y, even as they wrap over the top of the max value 0xffffffffffff.

Similarly, if x=3 and y=5, then x-y is -2, and will stay so whenever x and y are incremented simultaneously.
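
To make the wrap case concrete (the specific values below are just an illustration):

/* y read just before the 48-bit counter wraps, x read two ticks later */
uint64_t y = 0xffffffffffff;  /* counter at its 48-bit maximum */
uint64_t x = 0x000000000001;  /* two increments later, after wrapping */
/* the difference I want here is +2, not the huge value a plain x - y gives */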

If I could declare x,y as uint48_t, and the difference as int48_t, then I think

int48_t diff = x - y; 

would just work.

How do I simulate this behaviour with the 64-bit arithmetic I've got available?

(I think any computer this is likely to run on will use two's complement arithmetic.)

P.S. I can probably hack this out, but I wonder if there's a nice neat standard way to do this sort of thing, which the next person to read my code will be able to understand.

P.P.S. Also, this code is going to end up in the tightest of tight loops, so something that compiles efficiently would be nice; if there has to be a choice, speed trumps readability.


Solution

  • You can simulate a 48-bit unsigned integer type by just masking off the top 16 bits of a uint64_t after any arithmetic operation. So, for example, to take the difference between those two times, you could do:

    uint64_t diff = (after - before) & 0xffffffffffff;
    

    You will get the right value even if the counter wrapped around between the two readings. If the counter didn't wrap around, the masking isn't needed, but it isn't harmful either.
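
    For instance (the specific values are just for illustration; before and after are the two counter readings):

    uint64_t before = 0xfffffffffffe;                     /* 1 tick below the 48-bit max */
    uint64_t after  = 0x000000000003;                     /* 5 ticks later, after wrapping */
    uint64_t diff   = (after - before) & 0xffffffffffff;  /* == 5 */

    The plain 64-bit subtraction wraps modulo 2^64 and gives 0xffff000000000005; the mask brings it back into 48 bits.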

    Now, if you want this difference to be recognized as a signed integer by your compiler, you have to sign-extend from the 48th bit. That means that if the 48th bit is set, the number is negative, and you want to set bits 49 through 64 of your 64-bit integer. I think a simple way to do that is:

    int64_t diff_signed = (int64_t)(diff << 16) >> 16;
    

    Warning: You should probably test this to make sure it works. Also beware that converting the uint64_t to an int64_t is implementation-defined when the value doesn't fit, and I think right-shifting a negative signed number is implementation-defined too. I'm sure a C language lawyer could come up with something more robust.
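
    For example, with a 48-bit difference of -2 (so the 48th bit is set; again, illustrative values only):

    uint64_t diff = 0xfffffffffffe;  /* -2 as a 48-bit two's-complement value */
    int64_t diff_signed = (int64_t)(diff << 16) >> 16;
    /* diff << 16 == 0xfffffffffffe0000; reinterpreted as int64_t and shifted
       back down (assuming an arithmetic right shift), this gives -2 */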

    Update: The OP points out that if you combine the operation of taking the difference and doing the sign extension, there is no need for masking. That would look like this:

    int64_t diff = (int64_t)((x - y) << 16) >> 16;
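
    (The extra parentheses keep the left shift on the unsigned value, so it stays well-defined even when the high bits end up set; the cast and the arithmetic right shift are still implementation-defined, as noted above.)

    Put together as a small helper, this might look like the sketch below. The name diff48 is just illustrative, and the asserts assume the usual two's-complement behaviour for the conversion and the right shift:

    #include <assert.h>
    #include <stdint.h>

    /* Signed difference of two 48-bit counter readings stored in uint64_t.
       Assumes the true difference fits in a signed 48-bit value, i.e. the
       readings were taken close together. */
    static inline int64_t diff48(uint64_t x, uint64_t y)
    {
        return (int64_t)((x - y) << 16) >> 16;
    }

    int main(void)
    {
        uint64_t max48 = 0xffffffffffff;   /* largest 48-bit value */
        assert(diff48(5, 3) == 2);         /* simple case */
        assert(diff48(3, 5) == -2);        /* negative difference */
        assert(diff48(1, max48) == 2);     /* x wrapped past the max */
        assert(diff48(max48, 1) == -2);    /* y wrapped past the max */
        return 0;
    }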