Why is the difference between two time_t values returned by difftime(time_t t1, time_t t2) as a double? I don't see where the precision requirement comes from.
Because time_t is simply defined in the standard as an arithmetic type capable of representing times.
That's about all it says about it. It doesn't have to be an integer, and it doesn't have to represent seconds. It might only ever be a multiple of ten seconds, or it may be a floating point type capable of representing times down to a resolution of 10^-43 seconds.
The quote from C99 7.23.1 Components of time is (slightly paraphrased):

Types declared are clock_t and time_t, which are arithmetic types capable of representing times. The range and precision of times representable in clock_t and time_t are implementation-defined.
Hence people who blindly work out the time difference with:
delta = time_end - time_begin;
may find that their code doesn't work on all platforms.
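The portable way is to let difftime() do the subtraction, since it knows whatever representation the implementation chose and always returns the difference in seconds as a double. A minimal sketch (the variable names are just illustrative):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t begin = time(NULL);   /* opaque arithmetic value, not necessarily seconds */

        /* ... do some work here ... */

        time_t end = time(NULL);

        /* difftime() understands the implementation's representation and
           returns the elapsed time in seconds as a double. */
        double elapsed = difftime(end, begin);
        printf("Elapsed: %f seconds\n", elapsed);
        return 0;
    }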
Now, I don't know off the top of my head of any platform where it's not a simple count of seconds since the epoch, but I've been bitten by assumptions like this before, such as assuming 'A' through 'Z' are contiguous. In fact, that is not required and doesn't hold on the mainframe products which use EBCDIC. And, yes, they're still in heavy use despite apparently having been dying since the '60s :-)
The C99 rationale document has this to say:
The types clock_t and time_t are arithmetic because values of these types must, in accordance with existing practice, on occasion be compared with -1 (a "don't-know" indication), suitably cast.

No arithmetic properties of these types are defined by the Standard, however, in order to allow implementations the maximum flexibility in choosing ranges, precisions, and representations most appropriate to their intended application. The representation need not be a count of some basic unit; an implementation might conceivably represent different components of a temporal value as subfields of an integer type.
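As an illustration of that last sentence, here is a purely invented layout that packs year, day, and second components into subfields of a 32-bit integer. Nothing below describes a real implementation; it only shows why subtracting two raw time_t values directly would be meaningless, while difftime() still works because the library knows the layout:

    #include <stdint.h>

    /* Hypothetical packed layout: bits 31..26 = years since some epoch,
       bits 25..17 = day of year, bits 16..0 = second of day. */
    typedef uint32_t fake_time_t;

    /* What such an implementation's difftime() might do: decode both
       operands into seconds and only then subtract. */
    double fake_difftime(fake_time_t t1, fake_time_t t0)
    {
        double s1 = ((t1 >> 26) & 0x3F) * 365.0 * 86400.0
                  + ((t1 >> 17) & 0x1FF) * 86400.0
                  + (double)(t1 & 0x1FFFF);
        double s0 = ((t0 >> 26) & 0x3F) * 365.0 * 86400.0
                  + ((t0 >> 17) & 0x1FF) * 86400.0
                  + (double)(t0 & 0x1FFFF);
        return s1 - s0;   /* a naive t1 - t0 on the raw bits would be nonsense */
    }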