I have a list of URLs which I wish to call asynchronously, in parallel, in Python Tornado. Currently, this is how I go about it:
response_location = yield dict(
    origin_maxmind=http_client.fetch(origin_urls['maxmind'], raise_error=False),
    origin_ipinfo=http_client.fetch(origin_urls['ipinfo'], raise_error=False),
    origin_freegeoip=http_client.fetch(origin_urls['freegeoip'], raise_error=False),
    arrival_maxmind=http_client.fetch(arrival_urls['maxmind'], raise_error=False),
    arrival_ipinfo=http_client.fetch(arrival_urls['ipinfo'], raise_error=False),
    arrival_freegeoip=http_client.fetch(arrival_urls['freegeoip'], raise_error=False))
Further down the road, I may want to add new URLs to call alongside those already there. I think this would be easier if the URLs were in a dict; Tornado would then call all the URLs in that dict asynchronously, in parallel. I'm trying to avoid having to change a lot of code whenever someone adds a new URL to call. How can this be achieved?
If the URLs are in one dict, it's straightforward to do this with a dictionary comprehension. To fetch all the URLs in origin_urls, do:
response_location = yield {name: http_client.fetch(url, raise_error=False)
                           for (name, url) in origin_urls.items()}
With two dicts of URLs, it's a little clumsier. Here's an equivalent of the code in your question:
response_location = yield dict(
    [('origin_' + name, http_client.fetch(url, raise_error=False))
     for (name, url) in origin_urls.items()] +
    [('arrival_' + name, http_client.fetch(url, raise_error=False))
     for (name, url) in arrival_urls.items()])
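If you expect to keep adding groups of URLs, one way to generalize this (a sketch, not tested against a live Tornado app) is to nest the URL dicts under group prefixes and flatten them with a single comprehension. Then adding a new group, or a new URL within a group, requires no changes to the fetching code. The `origin_urls`/`arrival_urls` values below are stand-in examples:

```python
# Stand-in example data; in your app these come from elsewhere.
origin_urls = {'maxmind': 'http://example.com/o1',
               'ipinfo': 'http://example.com/o2'}
arrival_urls = {'maxmind': 'http://example.com/a1',
                'ipinfo': 'http://example.com/a2'}

# Nest the groups under their prefixes; new groups just get added here.
url_groups = {'origin': origin_urls, 'arrival': arrival_urls}

# Flatten to {'origin_maxmind': url, 'arrival_ipinfo': url, ...}.
flat_urls = {'%s_%s' % (prefix, name): url
             for prefix, urls in url_groups.items()
             for name, url in urls.items()}

print(sorted(flat_urls))
```

Inside the coroutine, the fetch then stays a one-liner no matter how many groups exist:

    response_location = yield {key: http_client.fetch(url, raise_error=False)
                               for key, url in flat_urls.items()}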