I'm currently writing two implementations of an algorithm, one in C and one in CUDA, and I plan to compare their runtimes. My question is: which C timer is best suited for this comparison? For CUDA I'll be using events, and I've read about C timers such as clock() and gettimeofday(), as well as the higher-resolution clock_gettime(), but I'm unsure which one to use when comparing my C times against the CUDA times.
Thanks :-)
It's probably best to stick to something relatively simple. I'd recommend gettimeofday(), which gives a wall-clock timestamp with microsecond resolution. Just record the time before and after your computation and subtract the two; the timersub macro does the subtraction for you.
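A minimal sketch of that pattern (the timed region is just a placeholder for your algorithm; note that timersub is a BSD/glibc extension declared in <sys/time.h>, so on some systems you may need to define _DEFAULT_SOURCE or _BSD_SOURCE):

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval start, end, elapsed;

        gettimeofday(&start, NULL);

        /* ... run the C implementation of the algorithm here ... */

        gettimeofday(&end, NULL);

        /* timersub(a, b, result) computes result = a - b */
        timersub(&end, &start, &elapsed);

        printf("Elapsed: %ld.%06ld s\n",
               (long)elapsed.tv_sec, (long)elapsed.tv_usec);
        return 0;
    }

Since CUDA event timings are also wall-clock measurements (reported in milliseconds), this gives you a like-for-like comparison with your CUDA version.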