Tags: php, http-redirect, curl, web-crawler, wget

Curl fails after following 50 redirects but wget works fine


I have an experimental PHP-based web crawler and noticed that it cannot read some pages. For example, on certain domains curl reports that it failed after following 50 redirects, while wget fetches the same domain just fine:

curl 'netflix.com' -L -o 'output.txt'

Result:

curl: (47) Maximum (50) redirects followed

No data is written to the output.txt file.

While this command works fine:

wget netflix.com

Any ideas on what could cause this? I doubt the remote server treats the two requests differently based on their user agents.
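
To see where the loop happens, the redirect chain can be inspected with header-only requests (a diagnostic sketch; note that some servers answer HEAD requests differently than GET):

curl -sIL 'netflix.com' | grep -iE '^(HTTP|location)'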


Solution

  • This is probably because you didn't tell curl to use cookies, which it won't do unless you explicitly ask it to, while wget handles them by default. Sites like this typically redirect to set a session cookie and check for it on the next request, so a client that never sends the cookie back gets redirected in a loop until the redirect limit is hit.

    Use the -b/--cookie or -c/--cookie-jar options to enable curl's cookie engine, as in the sketch below.
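
    A minimal sketch of the fixed command, assuming an arbitrary cookie file named cookies.txt:

    # -b/--cookie reads cookies from the file (and activates curl's cookie engine);
    # -c/--cookie-jar writes any cookies received along the way back to the file on exit
    curl 'netflix.com' -L -b cookies.txt -c cookies.txt -o 'output.txt'

    With the cookie engine active, curl also replays cookies it receives mid-session on the following redirect hops, which is what breaks the loop here.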