I'm using httplib to fetch a bunch of resources from a website, and I want to do it at minimum cost, so I set the 'Connection: keep-alive' HTTP header on my requests. However, I'm not sure it actually reuses the same TCP connection for as many requests as the web server allows.
import httplib

i = 0
while 1:
    i += 1
    print i
    con = httplib.HTTPConnection("myweb.com")
    con.request("GET", "/x.css", headers={"Connection": "keep-alive"})
    result = con.getresponse()
    print result.reason, result.getheaders()
Is my implementation right? Does keep-alive actually work here? Should I move 'con = httplib.HTTPConnection("myweb.com")' out of the loop?
P.S.: the web server's response to keep-alive is OK, and I'm aware of urllib3.
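For comparison, here is a minimal urllib3 sketch (assuming urllib3 is installed and myweb.com is reachable); its PoolManager keeps connections open and reuses them across requests automatically:

import urllib3

# PoolManager maintains a pool of keep-alive connections per host
http = urllib3.PoolManager()
for _ in range(10):
    r = http.request("GET", "http://myweb.com/x.css")
    print r.status, r.headers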
Your example creates a new TCP connection each time through the loop, so no, it will not reuse the connection.
How about this?
import httplib

# Create the connection once so the underlying TCP socket can be reused.
con = httplib.HTTPConnection("myweb.com")
while True:
    con.request("GET", "/x.css", headers={"Connection": "keep-alive"})
    result = con.getresponse()
    # Read the body before issuing the next request on the same connection.
    result.read()
    print result.reason, result.getheaders()
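If you want to verify that the connection really is being reused, one rough check (a sketch that relies on con.sock, which is an httplib implementation detail, not a public API) is to see whether the underlying socket object stays the same across iterations:

import httplib

con = httplib.HTTPConnection("myweb.com")
last_sock = None
for _ in range(5):
    con.request("GET", "/x.css", headers={"Connection": "keep-alive"})
    result = con.getresponse()
    result.read()  # drain the body so the connection can be reused
    # con.sock is the underlying socket; same object => same TCP connection
    print result.status, con.sock is last_sock
    last_sock = con.sock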
Also, if all you want is the headers, you can use the HTTP HEAD method rather than calling GET and discarding the content.
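For example, a minimal sketch of the same loop using HEAD (reusing the /x.css path from the question):

import httplib

con = httplib.HTTPConnection("myweb.com")
while True:
    # HEAD returns the same headers as GET but no response body
    con.request("HEAD", "/x.css", headers={"Connection": "keep-alive"})
    result = con.getresponse()
    result.read()  # empty for HEAD, but marks the response complete so the connection can be reused
    print result.reason, result.getheaders()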