Tags: python, c++, python-3.x, sleep, system-clock

How to increase sleep/pause timing accuracy in python?


I ran an experiment to compare sleep/pause timing accuracy in Python and C++.

Experiment summary:

In a loop of 1000000 iterations, sleep 1 microsecond in each iteration.

Expected duration: 1.000000 second (for a 100% accurate program)

In Python:

import time
import datetime

import pause

start = time.time()
dt = datetime.datetime.now()
for i in range(1000000):
    dt += datetime.timedelta(microseconds=1)  # advance the target time by 1 microsecond
    pause.until(dt)                           # sleep until that target time
end = time.time()
print(end - start)

Expected: 1.000000 sec, Actual (approximate): 2.603796

In C++:

#include <iostream>
#include <chrono>
#include <thread>

using namespace std;

using usec = chrono::microseconds;
using datetime = chrono::steady_clock::time_point;
using clk = chrono::steady_clock;

int main()
{
    datetime dt;
    usec timedelta{1};  // one-microsecond step

    dt = clk::now();

    const auto start = dt;

    for(int i=0; i < 1000000; ++i) {
        dt += timedelta;
        this_thread::sleep_until(dt);
    }

    const auto end = clk::now();

    chrono::duration<double> elapsed_seconds = end - start;

    cout << elapsed_seconds.count();

    return 0;
}

Expected: 1.000000 sec, Actual (approximate): 1.000040

It is obvious that C++ is much more accurate, but I am developing a project in Python and need to increase the accuracy. Any ideas?

P.S. It's OK if you suggest another Python library/technique, as long as it is more accurate :)


Solution

  • The problem is not only that Python's sleep timer is inaccurate, but also that each part of the loop takes some time.

    Your original code has a runtime of ~1.9528656005859375 on my system.

    If I only run this part of your code without any sleep:

    for i in range(1000000):
        dt += datetime.timedelta(microseconds=1)
    

    Then the required time for that loop is already ~0.45999741554260254.

    If I only run

    for i in range(1000000):
        pause.milliseconds(0)
    

    Then the run-time of the code is ~0.5583224296569824.

    Always using the same date:

    dt = datetime.datetime.now()
    for i in range(1000000):
        pause.until(dt)
    

    Results in a runtime of ~1.326077938079834.

    If you do the same with the timestamp:

    dt = datetime.datetime.now()
    ts = dt.timestamp()
    for i in range(1000000):
        pause.until(ts)
    

    Then the runtime changes to ~0.36722803115844727.

    And if you increment the timestamp by one microsecond:

    dt = datetime.datetime.now()
    ts = dt.timestamp()
    for i in range(1000000):
        ts += 0.000001
        pause.until(ts)
    

    Then you get a runtime of ~0.9536933898925781.

    That it is smaller than 1 is due to floating-point inaccuracies: adding print(ts - dt.timestamp()) after the loop will show ~0.95367431640625, so the pause duration itself is correct, but the ts += 0.000001 is accumulating a rounding error.
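
    This drift is easy to reproduce on its own. Here is a minimal sketch (my addition, not part of the original answer): on current Unix timestamps (around 1.7e9 seconds), adjacent doubles are about 2.4e-7 apart, so each ts += 0.000001 is rounded to a multiple of that spacing and the accumulated total falls short of one second.

    import time

    ts = time.time()        # large base value, roughly 1.7e9 seconds
    start = ts
    for _ in range(1000000):
        ts += 0.000001      # rounded to the nearest double representable near 1.7e9
    print(ts - start)       # ~0.95367431640625 instead of 1.0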

    You will get the best result if you count the iterations you have completed and add iterationCount / 1000000 to the start time:

    dt = datetime.datetime.now()
    ts = dt.timestamp()
    for i in range(1000000):
        pause.until(ts+i/1000000)
    

    And this would result in ~1.000023365020752.

    So in my case, pause itself would already allow an accuracy of less than 1 microsecond. The problem is actually in the datetime part that is required for both datetime.timedelta and pause.until.
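
    A rough micro-benchmark (my addition; absolute numbers will vary by machine) makes this visible by timing one million datetime.timedelta additions against one million plain float additions:

    import datetime
    import timeit

    dt = datetime.datetime.now()
    ts = dt.timestamp()

    # timeit accepts a callable; time one million additions of each kind.
    print(timeit.timeit(lambda: dt + datetime.timedelta(microseconds=1), number=1000000))
    print(timeit.timeit(lambda: ts + 0.000001, number=1000000))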

    So if you want microsecond accuracy, you need to look for a time library that performs better than datetime.
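
    One way to get there without a new library (my own suggestion, not part of the original answer) is to drop datetime and pause entirely and busy-wait on time.perf_counter(), trading a fully loaded CPU core for precise wake-ups. spin_until below is a hypothetical helper, not an existing API:

    import time

    def spin_until(deadline):
        # Busy-wait until the high-resolution clock passes the deadline.
        # Burns CPU, but avoids the scheduler's wake-up latency.
        while time.perf_counter() < deadline:
            pass

    start = time.perf_counter()
    for i in range(1000000):
        spin_until(start + (i + 1) / 1000000)
    print(time.perf_counter() - start)

    Because each deadline is computed from the fixed start value, per-iteration overhead below one microsecond is absorbed rather than accumulated, so the total stays close to 1.0 s as long as each loop body finishes within a microsecond.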