I'm currently writing a driver for MongoDB, so I have to dig a little deeper into the spec, and I found this:
BSON spec for DateTimeUTC:
"\x09" e_name int64
BSON spec for int64:
"\x12" e_name int64
BSON spec for timestamp (although I know it's almost always used internally; it's just to show that BSON makes use of unsigned integers):
"\x11" e_name uint64
It seems a bit inconsistent to me. Why are int64 and UTC millis even separated? Does MongoDB compare BSON DateTimeUTC values differently than plain int64s?
And why is DateTimeUTC NOT a uint64 but a signed integer? Millis are always > 0. Is there a reason behind this? Am I missing something?
DateTimeUTC is used to represent a point in time. The convention predates BSON and has historically used a signed integer, so that a DateTimeUTC can point to a date before the epoch. Otherwise it would not be possible to represent dates before 1970-01-01 at all.
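For example (a quick Python sketch, not taken from any driver): a pre-epoch date comes out as a negative millisecond count, which fits in a signed int64 but cannot be packed as an unsigned one.

```python
import struct
from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
apollo_11 = datetime(1969, 7, 20, 20, 17, tzinfo=timezone.utc)

# Milliseconds since the epoch; negative because the date is before 1970-01-01.
millis = int((apollo_11 - epoch).total_seconds() * 1000)
print(millis)  # -14182980000

# Fits in a signed 64-bit integer, as BSON DateTimeUTC requires:
struct.pack("<q", millis)

# An unsigned encoding would reject the value outright:
# struct.pack("<Q", millis)  # raises struct.error
```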
In contrast, timestamp is mostly for internal use and is expected to hold current dates, with little need to represent a time before the epoch (e.g. the timestamp of an operation).
There's a related question on Unix & Linux Stack Exchange about this: Why does Unix store timestamps in a signed integer?