Tags: multithreading, bash, shell, wget, xargs

WGET - Simultaneous connections are SLOW


I use the following command to append the responses for a list of URLs to a single output file:

wget -i /Applications/MAMP/htdocs/data/urls.txt -O - \
     >> /Applications/MAMP/htdocs/data/export.txt

This works fine and when finished it says:

Total wall clock time: 1h 49m 32s
Downloaded: 9999 files, 3.5M in 0.3s (28.5 MB/s)

In order to speed this up I used:

cat /Applications/MAMP/htdocs/data/urls.txt | \
   tr -d '\r' | \
   xargs -P 10 $(which wget) -i - -O - \
   >> /Applications/MAMP/htdocs/data/export.txt

This opens simultaneous connections, making it a little faster:

Total wall clock time: 1h 40m 10s
Downloaded: 3943 files, 8.5M in 0.3s (28.5 MB/s)

As you can see, it somehow omits more than half of the files and takes roughly the same time to finish. I can't work out why. What I want is to download 10 files at once (parallel processing) using xargs, and jump to the next URL as soon as the current one has finished writing to STDOUT. Am I missing something, or can this be done some other way?
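
To rule out the input file itself, a quick check of the list (same path as above) might look like this; it is only a sanity check, not a fix:

# Sanity checks on the URL list: total line count, and whether
# any URLs appear more than once.
wc -l /Applications/MAMP/htdocs/data/urls.txt
sort /Applications/MAMP/htdocs/data/urls.txt | uniq -d | wc -l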

On the other hand, can someone tell me what limit I can set for the number of connections? It would really help to know how many connections my machine can handle without slowing the system down too much, or even risking some kind of system failure.

My API rate limiting is as follows:

  • Number of requests per minute: 100
  • Number of mapping jobs in a single request: 100
  • Total number of mapping jobs per minute: 10,000
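
For comparison, a strictly throttled version that stays under that limit might look something like the sketch below (assuming one URL per request, and the same paths as above):

# Rough throttling sketch: fetch URLs one by one and pause after
# every 100 requests to stay under the per-minute limit.
count=0
while IFS= read -r url; do
    wget -q -O - "$url" >> /Applications/MAMP/htdocs/data/export.txt
    count=$((count + 1))
    if [ $((count % 100)) -eq 0 ]; then
        sleep 60
    fi
done < /Applications/MAMP/htdocs/data/urls.txt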


Solution

  • A few things:

    • I don't think you need the tr, unless there's something weird about your input file. xargs expects one item per line.
    • man xargs advises you to "Use the -n option with -P; otherwise chances are that only one exec will be done."
    • You are using wget -i -, which tells wget to read URLs from stdin. But xargs will be supplying the URLs as arguments to wget instead.
    • To debug, substitute echo for wget and check how it's batching the parameters (see the sketch just after this list).
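
    For example, something like this prints the batches instead of fetching them (the -n value here is just for illustration):

     # echo just prints each batch of URLs instead of downloading it
     cat urls.txt | xargs -P 10 -n 100 echo | head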

    So this should work:

     cat urls.txt | \
     xargs --max-procs=10 --max-args=100 wget --output-document=- 
    

    (I've preferred the long options here: --max-procs is -P, and --max-args is -n.)
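
    With your original paths, that would look something like the following. I've used the short flags on the assumption that you're on macOS (MAMP suggests so), where the stock BSD xargs doesn't accept the GNU long options:

     cat /Applications/MAMP/htdocs/data/urls.txt | \
     xargs -P 10 -n 100 wget --output-document=- \
     >> /Applications/MAMP/htdocs/data/export.txt

    Bear in mind that with ten wget processes writing to the same stream, responses can interleave in export.txt.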

    See wget download with multiple simultaneous connections for alternative ways of doing the same thing, including GNU parallel and some dedicated multi-threading HTTP clients.
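
    As a rough illustration of the GNU parallel route (assuming it's installed, and reusing your paths):

     # GNU parallel groups each job's output by default, so responses
     # won't interleave in the export file.
     cat /Applications/MAMP/htdocs/data/urls.txt | \
       parallel --jobs 10 wget --quiet --output-document=- {} \
       >> /Applications/MAMP/htdocs/data/export.txt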

    However, in most circumstances I would not expect parallelising to significantly increase your download rate.

    In a typical use case, the bottleneck is likely to be your network link to the server. During a single-threaded download, you would expect to saturate the slowest link in that route. You may get very slight gains with two threads, because one thread can be downloading while the other is sending requests. But this will be a marginal gain.

    So this approach is only likely to be worthwhile if you're fetching from multiple servers, and the slowest link in the route to some servers is not at the client end.