Tags: python, python-3.x, api, time, trading

My time.time() is bigger than server time


I'm new to programming and I'm learning Python 3.
Recently I've been trying to build a cryptocurrency trading system using Binance's API.
Here's the API documentation.

The documentation explains the timestamp logic as follows:

The timestamp to be sent should be the millisecond timestamp of when the request was created and sent.

if (timestamp < serverTime && (serverTime - timestamp) <= recvWindow) {
    // process request
} else {
    // reject request
}
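
A quick translation of that check into Python (would_server_accept is my own name; I'm assuming the usual default recvWindow of 5000 ms) shows that a timestamp even 1 ms ahead of the server is rejected:

def would_server_accept(timestamp, server_time, recv_window=5000):
    # Same check as above; all values are in milliseconds.
    return timestamp < server_time and (server_time - timestamp) <= recv_window

print(would_server_accept(1500000000001, 1500000000000))  # False: 1 ms ahead of the server
print(would_server_accept(1499999999900, 1500000000000))  # True: 100 ms behind the server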

According to this logic, the timestamp I send should be less than the time on the server. The problem is that my requests do not pass this check.

When I compare time.time() with the server time using this code,

import requests
import simplejson as json
import time

base_url = "https://api.binance.com"
servertime_endpoint = "/api/v1/time"
url = base_url + servertime_endpoint

t = time.time() * 1000          # local time in milliseconds
r = requests.get(url)
result = json.loads(r.content)  # {"serverTime": <milliseconds>}

print(int(t) - result["serverTime"])  # positive -> local clock appears ahead of the server

time.time() is bigger than the server time, so the last print returns a positive value. What should I do?


Solution

  • This is most likely because the operating system you are running uses a clock with a lower resolution than the one the server is running. On Linux or macOS, Python's time.time() makes a system call that returns time down to microsecond resolution (or better). On Windows, it only returns time down to millisecond resolution.

    You can check the resolution of the time.time() function by programming a busy loop and waiting until the time changes: use the code in this incredibly useful answer to see what your resolution is.
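
    As a rough, self-contained sketch of that busy-loop idea (measure_time_resolution is my own name, not taken from the linked answer):

    import time

    def measure_time_resolution(samples=10):
        # Busy-wait until time.time() changes and keep the smallest step seen.
        steps = []
        for _ in range(samples):
            t0 = time.time()
            t1 = t0
            while t1 == t0:          # spin until the clock ticks
                t1 = time.time()
            steps.append(t1 - t0)
        return min(steps)

    print("approximate time.time() resolution: %.9f s" % measure_time_resolution())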

    If you are running on an OS with a resolution of ~0.001 second (1 millisecond) while the server is reporting times at a resolution of ~0.000001 second (1 microsecond), then even if your clocks were exactly in sync and there were zero network latency, you would still expect your time to be ahead of the server time on 50% of the calls simply due to quantization noise. For instance, if the server reports a time of 12345.678501 (microsecond resolution), your machine would report a time of 12345.679 (millisecond resolution) and appear to be 499 microseconds ahead.
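
    You can reproduce that arithmetic directly (the numbers are just the example above):

    server_time = 12345.678501           # seconds, microsecond resolution
    local_time = round(server_time, 3)   # what a millisecond-resolution clock reports
    print(local_time - server_time)      # ~0.000499 s: the local clock looks "ahead"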

    Some quick solutions are to:

    1. check if the server time rounds to your machine time and call that acceptable even if it appears your time is ahead of the server time;
    2. subtract 500 microseconds from your time to guarantee that quantization noise can't put you ahead of the server (see the sketch after this list);
    3. increase the timing threshold by 500 microseconds and check that the absolute value of the difference between your time and the server time is within the bounds;
    4. run your code on an operating system with a higher-resolution system clock.
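
    If it helps, here is a rough sketch of options 2 and 3; binance_timestamp_ms, clock_offset_ms and the 1 ms / 1 s margins are my own names and choices, not anything prescribed by the Binance docs:

    import time
    import requests

    def binance_timestamp_ms(safety_margin_ms=1):
        # Option 2 (sketch): shave a small margin off the local clock so
        # quantization noise cannot push the timestamp ahead of the server.
        # 1 ms comfortably covers the 500 microsecond worst case above.
        return int(time.time() * 1000) - safety_margin_ms

    def clock_offset_ms():
        # Option 3 helper (sketch): measure the local-minus-server offset so
        # you can check its absolute value against your chosen bound.
        r = requests.get("https://api.binance.com/api/v1/time")
        server_ms = r.json()["serverTime"]
        return int(time.time() * 1000) - server_ms

    print(binance_timestamp_ms())
    print(abs(clock_offset_ms()) <= 1000)   # e.g. require the clocks to agree within 1 s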