
Can I use Unix time for a Lamport timestamp?


To the best of my understanding, a Lamport timestamp is a tool used to give events across multiple sites a partial ordering.

From wikipedia:

In pseudocode, the algorithm for sending is:

time = time + 1;
time_stamp = time;
send(message, time_stamp);

The algorithm for receiving a message is:

(message, time_stamp) = receive();
time = max(time_stamp, time) + 1;
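The two pseudocode fragments above can be sketched as a small class. This is a minimal illustration, not a production implementation; the lock and the method names are my own assumptions, added so that concurrent sends and receives on one site stay consistent:

```python
import threading

class LamportClock:
    """Minimal Lamport clock following the Wikipedia pseudocode."""

    def __init__(self):
        self.time = 0
        self._lock = threading.Lock()  # assumption: one clock shared by threads

    def send_event(self):
        # time = time + 1; time_stamp = time
        with self._lock:
            self.time += 1
            return self.time  # attach this timestamp to the outgoing message

    def receive_event(self, time_stamp):
        # time = max(time_stamp, time) + 1
        with self._lock:
            self.time = max(time_stamp, self.time) + 1
            return self.time
```

For example, if site A calls `send_event()` and gets timestamp 1, and site B (whose clock is still at 0) passes that timestamp to `receive_event(1)`, B's clock advances to 2, so the receive is ordered after the send.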

Is it possible for the timestamps to be Unix timestamps, which increment automatically based on time rather than on events? If each site uses Unix timestamps, doesn't that mean that events are still partially ordered locally? Would I have to alter or omit the algorithm for receiving a message, or is it just wrong to use Unix timestamps entirely?


Solution

  • Unfortunately, Unix timestamps are not guaranteed to be monotonically increasing, unless you use CLOCK_MONOTONIC (which is not guaranteed to exist on a given system, but in practice is widely implemented).

    Even if the monotonic clock is supported, two successive calls to clock_gettime are not guaranteed to return different values if not enough time has elapsed between them.

    Since the time() system call does not use the monotonic clock, if by "Unix timestamp" you mean "the value returned by time(NULL)", then the answer is "it's definitely wrong to use the Unix timestamp."

    If you are using the monotonic clock, then you additionally need to globally track returned values in order to guarantee that each call produces a distinct value. But in that case, you might as well just use a counter. It is also worth noting that the monotonic clock does not count from the epoch (commonly it counts from system boot, although the standard leaves the starting point entirely open), so you cannot deduce anything by comparing monotonic clock values from two different systems.
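To illustrate the point about the counter: two back-to-back reads of a clock may return the same value, whereas a plain counter yields a distinct, strictly increasing value on every call, which is exactly the property a Lamport timestamp needs. A minimal sketch (using Python's clock functions as a stand-in for time() and clock_gettime):

```python
import itertools
import time

# Two successive wall-clock reads may coincide if not enough time
# has elapsed between them; only t2 >= t1 is observed in practice,
# and wall clocks can even jump backwards (e.g. NTP adjustments).
t1 = time.time()
t2 = time.time()

# A counter, by contrast, guarantees a fresh, strictly larger
# value on every call -- no clock needed at all.
counter = itertools.count(1)
a = next(counter)  # 1
b = next(counter)  # 2
```

In a multi-threaded or distributed setting this counter would still need synchronization (as in the receive rule above), but it already demonstrates why the answer says you "might as well just use a counter."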