I have a `std::vector` of strings holding UTC date-times in the format YYYYMMDD-HH:MM:SS.sss.
Each time I push a new value, I want to count how many elements were pushed within the same second and log that rate (number of entries per second).
I'm using Howard Hinnant's date library. I have tried it this way:
```cpp
using namespace date;
using namespace std::chrono;

std::vector<std::string> v = {"20240222-12:12:45.123"};
std::vector<utc_time<milliseconds>> v2;
for (auto const& s : v)
{
    std::istringstream iss{s};
    utc_time<milliseconds> tp;
    if (iss >> parse("%Y%m%d-%T", tp))
        v2.push_back(tp);
}
// then after a new insertion: count the entries within 1 s of the newest
std::size_t count = 0;
for (auto it = v2.rbegin(); it != v2.rend() && v2.back() - *it <= 1s; ++it)
    ++count;
std::cout << count;
```
My questions are as follows:
For my use case, do I need to consider time point types such as UTC time versus system time?
Is time point more relevant than duration, and is it the most efficient way? Are there alternative approaches to handling this?
> For my use case, do I need to consider time point types such as UTC time versus system time?
Prefer `sys_time` over `utc_time` if you're unsure which to pick. `sys_time` is more efficient, and generally equivalent. Choose `utc_time` if you know that you need to deal with leap seconds; for example, if you know that some of your time stamps will contain 60 in the seconds field, then you need to use `utc_time`.
> Is time point more relevant than duration, and is it the most efficient way? Are there alternative approaches to handling this?
Time points are neither more nor less relevant than durations. They are simply two measures of time that humans use.
Besides preferring `sys_time` for efficiency purposes, I do not see any other sources of inefficiency in your code.
Disclaimer: I attempted to compile the code as originally posted in order to study it further, and it did not compile, so I did not study it further. If you would like to clarify your question further, I am happy to try to address it.