By chance, I came across the clock_gettime() function for Linux systems. Since I'm looking for a way to measure the execution time of a function, I tried it with MinGW gcc 8.2.0 on a 64-bit Windows 10 machine:
#include <time.h>
#include <stdio.h>

int main() {
    struct timespec tstart, tend;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tstart);
    for (int i = 0; i < 100000; ++i);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tend);
    printf("It takes %li nanoseconds for 100,000 empty iterations.\n",
           tend.tv_nsec - tstart.tv_nsec);
    return 0;
}
This snippet compiles without warnings or errors, and there are no runtime failures (at least none written to stdout).
Output:
It takes 0 nanoseconds for 100,000 empty iterations.
I don't believe that's true. Can you spot the flaw?
One more thing: according to the N1570 Committee Draft (April 12, 2011) of ISO/IEC 9899:201x, shouldn't timespec_get() take the role of clock_gettime() instead?
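What I have in mind is roughly the following (I haven't verified yet whether MinGW actually provides timespec_get; TIME_UTC is the only base C11 defines):

#include <time.h>
#include <stdio.h>

int main(void) {
    struct timespec ts;
    /* timespec_get returns its base argument on success, 0 on failure */
    if (timespec_get(&ts, TIME_UTC) == TIME_UTC)
        printf("%lld.%09ld seconds since the epoch\n",
               (long long)ts.tv_sec, (long)ts.tv_nsec);
    return 0;
}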
That loop should get optimized out to nothing at all, so with a low-resolution clock, 0 is a plausible result. (The resolution is not necessarily individual nanoseconds; the clock may advance in much larger units, which clock_getres should be able to tell you.)

But you have a few other bugs in your code, like mixing CLOCK_THREAD_CPUTIME_ID with CLOCK_PROCESS_CPUTIME_ID, and not checking the return value of clock_gettime (it might be telling you these clocks aren't supported).
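A minimal sketch of a fixed measurement, assuming your MinGW runtime supports these clocks at all (the error checks will tell you), with the same clock ID for both samples and a volatile counter so the loop can't be removed:

#include <time.h>
#include <stdio.h>

int main(void) {
    struct timespec res, tstart, tend;

    /* Ask how coarsely this clock actually ticks
       (ignoring res.tv_sec, assuming sub-second resolution). */
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res) != 0) {
        perror("clock_getres");
        return 1;
    }
    printf("clock resolution: %ld ns\n", (long)res.tv_nsec);

    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tstart) != 0) {
        perror("clock_gettime");
        return 1;
    }

    /* volatile keeps the compiler from deleting the loop entirely */
    for (volatile int i = 0; i < 100000; ++i);

    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tend) != 0) {
        perror("clock_gettime");
        return 1;
    }

    /* Include the tv_sec difference, not just the nanosecond field. */
    long long ns = (tend.tv_sec - tstart.tv_sec) * 1000000000LL
                 + (tend.tv_nsec - tstart.tv_nsec);
    printf("elapsed: %lld ns\n", ns);
    return 0;
}

Note that the sketch also subtracts tv_sec, since the two samples need not land within the same second.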