Tags: python, http, python-requests, keep-alive

Keep-alive within python-requests module


I've got a question about the python-requests module. According to the docs:

thanks to urllib3, keep-alive is 100% automatic within a session! Any requests that you make within a session will automatically reuse the appropriate connection!

My sample code looks like this:

import requests

def make_double_get_request():
    response = requests.get(url=API_URL, headers=headers, timeout=10)
    print(response.text)
    response = requests.get(url=API_URL, headers=headers, timeout=10)
    print(response.text)

But the log I receive shows that a new HTTP connection is started for every request:

INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): url
DEBUG:requests.packages.urllib3.connectionpool:"GET url HTTP/1.1" 200 None
response text goes here
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): url
DEBUG:requests.packages.urllib3.connectionpool:"GET url HTTP/1.1" 200 None
response text goes here

Am I doing something wrong? Looking at the packets in Wireshark, it seems they do in fact have keep-alive set.


Solution

  • Use a Session() instance:

    import requests

    def make_double_get_request():
        # One Session means one urllib3 connection pool, so the second GET
        # reuses the TCP connection opened by the first.
        session = requests.Session()
        response = session.get(url=API_URL, headers=headers, timeout=10)
        print(response.text)
        response = session.get(url=API_URL, headers=headers, timeout=10)
        print(response.text)
    

    The top-level requests HTTP functions are a convenience API that creates a new Session object for every call, which prevents connections from being reused.
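
    As an illustration (a simplified sketch, not the library's actual source, using a hypothetical helper name), each call to a top-level function such as requests.get behaves roughly like this:

    import requests

    def one_shot_get(url, **kwargs):
        # A fresh Session (and therefore a fresh connection pool) is created
        # and closed for every call, so no connection can ever be reused.
        with requests.Session() as session:
            return session.get(url, **kwargs)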

    From the documentation:

    The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance, and will use urllib3's connection pooling.
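
    Building on that, here is a minimal sketch (API_URL is a placeholder endpoint and the Accept header is just an example) showing that headers stored on the Session are sent with every request and that both GETs share one pooled connection:

    import requests

    API_URL = "http://example.com/api"  # placeholder; substitute your real endpoint

    def make_double_get_request():
        with requests.Session() as session:
            # Headers set on the session apply to every request it makes.
            session.headers.update({"Accept": "application/json"})
            for _ in range(2):
                response = session.get(API_URL, timeout=10)
                # With logging enabled, only the first iteration should log
                # "Starting new HTTP connection".
                print(response.text)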