Tags: java, time, timestamp, epoch, nanotime

Why does elapsed time computed over large time windows show up to 100+ milliseconds of difference between System.currentTimeMillis and System.nanoTime?


I am trying to create a KSUID with microsecond precision that is sortable within the same JVM. Since currentTimeMillis() gives only millisecond accuracy, I thought of using currentTimeMillis() and nanoTime() together: collect both values at an initial instant, and from then on use the nanoTime() difference to compute the microseconds elapsed since that first instant. Adding the elapsed time computed from nanoTime() to the initial instant's millis gives an epoch timestamp with microsecond accuracy for any instant after JVM startup (all within the same JVM).

// statically collected initial nanoTime and currentTimeMillis
final long initTimeNanos = System.nanoTime();
final long initTimeMillis = System.currentTimeMillis();

// From here on, whenever the current time is needed at micro/nanosecond precision: compute the elapsed time from nanoTime, add the whole elapsed millis to initTimeMillis, and keep the remainder for microsecond precision

final long elapsedNanos = System.nanoTime() - initTimeNanos;
final double elapsedMillisFromNanoTime = elapsedNanos / 1_000_000.0;

// Monotonically increasing timestamp millis since JVM start (approximately epoch time, though we may not call it epoch)
final long computedCurrentTimeMillis = initTimeMillis + (long) elapsedMillisFromNanoTime;
final long nanosPrecision = elapsedNanos % 1_000_000; // additional sub-millisecond precision from nanoTime

I thought of using these values to generate monotonically increasing KSUIDs that approximately represent the current time but are guaranteed to be ordered by creation time within the same JVM. Since currentTimeMillis() does not guarantee a monotonically increasing timestamp, this approach should produce a monotonically increasing timestamp that stays close to the real one (perhaps a few millis off, barring a leap-second adjustment to epoch time). I expected a variance of a couple of milliseconds between the elapsed time computed from the epoch clock and the elapsed time computed from nanoTime(), but the actual variance changes very frequently, and I observed up to 150 milliseconds of variance when I ran the test below for 48 hours. In most cases the elapsed time calculated from nanoTime() is higher than the elapsed time computed from currentTimeMillis(); the observed range was -2 milliseconds to +150 milliseconds.

final long elapsedMillis = System.currentTimeMillis() - initTimeMillis; // elapsed millis from System.currentTimeMillis()
final double varianceMillis = elapsedMillisFromNanoTime - elapsedMillis; // variance between the two elapsed times

Am I missing anything about JVM time guarantees? Is this the wrong way to calculate a monotonically increasing approximate timestamp? (This will not be used as the real epoch time in the system; it is only used for generating KSUIDs, which also represent an approximate timestamp instant within the same JVM.)

//Test class

package org.example;

import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        final long initTimeNanos = System.nanoTime();
        final long initTimeMillis = System.currentTimeMillis();
        System.out.println("Nanos: " + initTimeNanos);
        System.out.println("Millis: " + initTimeMillis);

        while (true) {
            final long currentNanos = System.nanoTime();
            final long elapsedNanos = currentNanos - initTimeNanos;
            final double elapsedMillisFromNanos = elapsedNanos / 1000000.0;
            final long elapsedMillis = System.currentTimeMillis() - initTimeMillis;
            final double varianceMillis = elapsedMillisFromNanos - elapsedMillis;
            if (Math.abs(varianceMillis) > 1) {
                System.out.printf("\nVariance Observed: %.6f\n", varianceMillis);
                System.out.printf("Elapsed Time: %.6fms (from System.nanoTime)\n", elapsedMillisFromNanos);
                System.out.printf("Elapsed Time: %dms (from System.currentTimeMillis)\n", elapsedMillis);
            }
            if (elapsedMillis > TimeUnit.HOURS.toMillis(48)) {
                break;
            }
            Thread.sleep(5000);
            System.out.print(".");
        }
    }
}

Why is the elapsed time variance continuously changing? What maximum variance can we expect if the JVM runs continuously for a year (are there any JVM guarantees on upper or lower bounds)? I tested on macOS and Windows; macOS showed a slow increase in variance, while Windows drifted much faster.

I was expecting an elapsed time variance of less than 10 milliseconds that changed infrequently. The actual observation was a continuously changing variance, going up and down, with a maximum of 150 milliseconds observed over 48 hours.


Solution

  • One explanation is NTP-based time smearing. More generally, nanoTime and currentTimeMillis (cTM from here on) measure completely different things, and you cannot mix them.

    nanoTime has an arbitrary zero point (getting a nanoTime of 0 has no particular meaning); hence, there is no point in calling it at all, except to compare what it returns to the result of a different nanoTime call. nanoTime tracks elapsed time, that's it.
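
    A minimal sketch of that one valid pattern: timing a stretch of work by comparing two calls.

    long start = System.nanoTime();
    long sum = 0;
    for (int i = 0; i < 1_000_000; i++) sum += i; // the work being measured
    long elapsedNanos = System.nanoTime() - start; // only this difference is meaningful
    System.out.println(sum + " computed in " + elapsedNanos / 1_000_000.0 + " ms");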

    System.cTM fetches the system clock. If you edit your system's time settings, such as with the POSIX date command or in your system settings, that has no effect on nanoTime but changes what System.cTM returns. cTM is also generally much slower than nanoTime. cTM also has a well-defined zero: midnight, January 1st 1970, UTC.

    The question "I want the current time as per the system clock, with nanosecond precision" is not answerable on the JVM.
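
    (For what it's worth, on many JDK 9+ platforms java.time's Instant.now() does resolve finer than a millisecond, typically to microseconds, but still not to nanoseconds, and it still reads the adjustable system clock rather than a monotonic one. A quick probe, assuming a JDK 9+ runtime:)

    import java.time.Instant;

    public class ClockProbe {
        public static void main(String[] args) {
            Instant now = Instant.now();
            // On many JDK 9+ platforms the nano-of-second component resolves to
            // microseconds (the trailing three digits stay zero), not nanoseconds.
            System.out.println(now + " (nano-of-second: " + now.getNano() + ")");
        }
    }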

    Time smearing is where some network time daemon notices your system clock is slightly off and, instead of just editing your system clock to the right time, introduces a smear. Say you're 400 milliseconds 'ahead' of real time. The NTP daemon could just set your clock 400 millis backwards, but many logging systems assume that System.cTM doesn't go backwards (an incorrect, but widely applied, assumption).

    This can be 'fixed' by instead having time tick slower for a while: for every 100 milliseconds that pass, the NTP daemon 'holds' the clock on the same milli for one millisecond. This catches up 1 milli per 100, so in 40,000 millis (only 40 seconds) the clock is back in sync with the network and no logging is affected at all.

    However, that would, obviously, completely ruin any relationship between System.cTM and nanoTime!

    Most NTP daemons smear like this. They also smear forward: if your system clock is behind, it doesn't just jump ahead, because that makes logs lie (making it seem like the gap between two events was much larger than it really was). Instead, every 100 millis the daemon makes the clock skip a milli, something like that, to catch up.
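
    To make the drift mechanics concrete, here is a hypothetical simulation (the 1-per-100 slew rate is purely illustrative, not what any particular daemon uses) of a slewed wall clock falling behind a monotonic clock:

    // Illustrative only: a wall clock slewed by 1 ms per 100 ms of real time,
    // compared against a monotonic (nanoTime-like) elapsed measurement.
    public class SlewDemo {
        public static void main(String[] args) {
            long monotonicMillis = 0; // what nanoTime-based elapsed time reports
            long wallMillis = 0;      // what the slewed system clock reports
            for (int i = 0; i < 400; i++) {
                monotonicMillis += 100; // 100 ms of real time passes
                wallMillis += 99;       // the daemon 'holds' 1 ms of it
            }
            // After 40 seconds of slewing, the two bases disagree by 400 ms.
            System.out.println("monotonic: " + monotonicMillis + " ms, wall: "
                    + wallMillis + " ms, drift: " + (monotonicMillis - wallMillis) + " ms");
        }
    }

    This is the same kind of slowly accumulating variance the 48-hour test above observed.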

    ... but I need monotonically increasing timestamps!

    Then nanoTime is irrelevant. Do not use that.

    Have some centralized 'store' that provides IDs. One implementation:

    class TimeStore {
      long lastVal = 0L;

      public synchronized long get() {
        // current time, shifted left 4 bits to leave room for a 0-15 sequence
        long mark = System.currentTimeMillis() << 4;
        // return either the shifted current time or one past the previous value,
        // whichever is larger - so results are strictly increasing
        return lastVal = Math.max(mark, lastVal + 1);
      }
    }
    

    This returns the current time shifted left by 4 bits and uses the 16 values that the shift 'frees up' per millisecond, so it can generate up to 16 monotonically increasing values within the same millisecond; any further requests in that millisecond simply sneak into the next one.

    This probably isn't slower than nanoTime, synchronized notwithstanding.
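
    A hypothetical caller can then unpack an ID back into its parts (the variable names here are illustrative):

    TimeStore store = new TimeStore();
    long id = store.get();
    long millis = id >>> 4;   // approximate epoch millis when the ID was created
    long sequence = id & 0xF; // 0-15: position within that millisecond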