For a task automation I needed to write some Python code that makes a POST request to a web service and then retrieves information from the response.
First I tried requests:
import requests
import random

def make_invoice_body(terminal, service, code, number):
    return {
        "terminalId": terminal,
        "serviceId": service,
        "invoiceNumber": number,
        "invoiceCode": code,
        "paymentType": 1,
        "requestNumber": random.randint(1000000, 9999999)
    }

headers = {'Content-Type': 'application/json'}
body = make_invoice_body(51, "1001000", "FOO", "123456")
requests.post(url, headers=headers, data=body)
The code above received an error as a response.
Then I tried urllib2:
import urllib2
import json
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
resp = urllib2.urlopen(req, body) #body from above
print resp.read()
and it worked.
Now I am curious about the differences between the snippets above. Shouldn't they do the same task and receive equivalent answers?
There is no "native dict" format in the HTTP world. The real difference is that requests is too smart. When you pass a dict via data= in requests, the dict is form-encoded into key-value pairs and sent as application/x-www-form-urlencoded, not as a JSON body.
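You can see what that form encoding looks like with a small stdlib sketch (Python 3 syntax; the dict keys here are just illustrative, roughly what requests produces under the hood):

```python
from urllib.parse import urlencode

# A dict passed as data= is sent as form-encoded key=value pairs,
# not as a JSON document.
payload = {"terminalId": 51, "serviceId": "1001000"}
encoded = urlencode(payload)
print(encoded)  # terminalId=51&serviceId=1001000
```

A server expecting a JSON body will reject this, even though the Content-Type header was manually set to application/json.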
When you use json= in requests, requests will automatically json.dumps your dict into a raw byte string and set the Content-Type header to application/json for you.
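In other words, json.dumps turns the dict into the raw JSON string that json= would send (a minimal sketch with an illustrative payload):

```python
import json

payload = {"paymentType": 1, "invoiceCode": "FOO"}
# This is essentially what requests does internally for json=payload:
raw = json.dumps(payload)
print(raw)  # {"paymentType": 1, "invoiceCode": "FOO"}
```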
But urllib2 does nothing more than what you give it. You need to json.dumps your dict manually. So I believe the example you gave is wrong; it should be like the following:
import urllib2
import json
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
resp = urllib2.urlopen(req, json.dumps(body)) #body from above
print resp.read()
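For completeness: with requests the equivalent fix is simply to use json= instead of data=, so requests serializes the body and sets the header itself. You can inspect what would be sent, without hitting the network, by preparing the request (the URL here is a placeholder):

```python
import requests

body = {"invoiceCode": "FOO", "paymentType": 1}
# Build the request but don't send it, so we can inspect it.
req = requests.Request('POST', 'http://example.com/api', json=body).prepare()
print(req.headers['Content-Type'])  # application/json
print(req.body)                     # the json.dumps-ed payload
```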
Thanks to @t.m.adam for the reminder.