When trying to get the HTML content of a website, in this case www.arrow.com, I get nothing; the request just keeps waiting forever.
import requests
from lxml import html

code = "cccccccc"  # the search term (same as in the Postman request below)
params = {'q': code}
url = "https://www.arrow.com/en/products/search"
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
    'cache-control': "no-cache",
    'postman-token': "564e5d76-282f-98f3-860b-d8e09e2e9073"
}
r = requests.get(url, headers=headers, params=params)
tree = html.fromstring(r.content)
The weird thing is that I can get the right content using Postman or by accessing the site in a web browser.
This is the raw HTTP request that Postman generates:
GET /en/products/search?q=cccccccc HTTP/1.1
Host: www.arrow.com
Cache-Control: no-cache
Postman-Token: c3821bb3-767b-b8c7-105a-84fd16291245
or with Python 3:
import http.client

conn = http.client.HTTPSConnection("www.arrow.com")
headers = {
    'cache-control': "no-cache",
    'postman-token': "740c5681-3e67-b605-3040-964be3ea7296"
}
conn.request("GET", "/en/products/search?q=cccccccc", headers=headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
Using the last one, I also get nothing.
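For what it's worth, adding a timeout at least makes the call fail fast instead of blocking indefinitely. This is just a minimal diagnostic sketch; the timeout value and the exception handling are my own additions:
import requests

url = "https://www.arrow.com/en/products/search"

try:
    # timeout=10 makes requests raise instead of waiting forever
    r = requests.get(url, params={'q': "cccccccc"}, timeout=10)
    print(r.status_code, len(r.content))
except requests.exceptions.Timeout:
    print("request timed out")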
Changing the User-Agent should fix the issue; at least it did in my case. Your params are not correct either. Try this to see what happens:
import requests
from lxml.html import fromstring

url = "https://www.arrow.com/en/products/search"
code = "apple"  # any available search term

r = requests.get(
    url,
    headers={'User-Agent': 'Mozilla/5.0'},
    params={'cat': '', 'q': code, 'r': True}
)
tree = fromstring(r.content)
items = tree.cssselect("h1[data-search-term]")[0].text.strip()
print(items)  # should print the heading showing how many results were found
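If the request is blocked rather than just slow, the indexing line above would raise an IndexError because no heading matches. A slightly more defensive variant of the same idea might look like this; it is only a sketch, assuming the same h1[data-search-term] heading holds the result count, with a timeout and status check added on top:
import requests
from lxml.html import fromstring

url = "https://www.arrow.com/en/products/search"

r = requests.get(
    url,
    headers={'User-Agent': 'Mozilla/5.0'},
    params={'cat': '', 'q': "apple", 'r': True},
    timeout=15  # fail fast instead of hanging like the original snippet
)
r.raise_for_status()  # surfaces a 403/5xx instead of parsing an error page

matches = fromstring(r.content).cssselect("h1[data-search-term]")
if matches:
    # text_content() also picks up text inside nested tags, unlike .text
    print(matches[0].text_content().strip())
else:
    print("no result heading found -- the page markup may have changed")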