I am trying to get nanosecond resolution from CLOCK_REALTIME
on a 1 GHz MIPS router. When I compile the code below for x86 and run it on a 1 GHz VM, I get nanosecond resolution. When I compile it for MIPS and run it on the router, the result appears to be rounded up to the microsecond.
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec tps, tpe;

    if ((clock_gettime(CLOCK_REALTIME, &tps) != 0)
        || (clock_gettime(CLOCK_REALTIME, &tpe) != 0)) {
        perror("clock_gettime");
        return -1;
    }
    /* tv_sec is time_t and tv_nsec is long, so print with %ld */
    printf("%ld s, %ld ns\n", (long)(tpe.tv_sec - tps.tv_sec),
           tpe.tv_nsec - tps.tv_nsec);
    return 0;
}
Here are sample times:
vagrant@vagrant-ubuntu-trusty-64:~$ gcc clock.c -o clock_x86 -static -lrt
vagrant@vagrant-ubuntu-trusty-64:~$ ./clock_x86
0 s, 221 ns
vagrant@vagrant-ubuntu-trusty-64:~$ mipsel-openwrt-linux-gcc clock.c -lrt -static -o clock_mips
<...scp...>
root@OpenWrt:~# ./clock_mips
0 s, 3000 ns
Is there something I am misunderstanding about clocks, MIPS, processors, etc.? I would expect a clock_gettime call with CLOCK_REALTIME
to produce nanosecond granularity/resolution regardless of processor architecture.
Further, if anyone could shed light on how these POSIX timers are implemented and how they obtain their measurements, I would appreciate it.
POSIX.1-2008 only guarantees a clock resolution of 20 ms; anything finer is implementation-defined. Here's the snippet that is germane:
The maximum allowable resolution for CLOCK_REALTIME and CLOCK_MONOTONIC clocks and all time services based on these clocks is represented by {_POSIX_CLOCKRES_MIN} and shall be defined as 20 ms (1/50 of a second). Implementations may support smaller values of resolution for these clocks to provide finer granularity time bases. The actual resolution supported by an implementation for a specific clock is obtained using the clock_getres() function. If the actual resolution supported for a time service based on one of these clocks differs from the resolution supported for that clock, the implementation shall document this difference.
Basically, you need to call clock_getres() to determine what resolution the implementation actually provides. Don't write code that assumes the clock is finer-grained than 20 ms.