I am accessing a particular URL for a JSON file (from the Stack Exchange and Stack Overflow API). When I execute the json.loads() call below, it raises the following error:
import urllib2
import json
url = "http://api.stackexchange.com/2.1/tags?order=desc&sort=popular&site=quant&pagesize=100&page=1"
data = json.loads(urllib2.urlopen(url).read())
<ipython-input-20-7540e91a8ff2> in <module>()
----> 1 data = json.loads(urllib2.urlopen(url).read())
/usr/lib/python2.7/json/__init__.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
336 parse_int is None and parse_float is None and
337 parse_constant is None and object_pairs_hook is None and not kw):
--> 338 return _default_decoder.decode(s)
339 if cls is None:
340 cls = JSONDecoder
/usr/lib/python2.7/json/decoder.pyc in decode(self, s, _w)
363
364 """
--> 365 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
366 end = _w(s, end).end()
367 if end != len(s):
/usr/lib/python2.7/json/decoder.pyc in raw_decode(self, s, idx)
381 obj, end = self.scan_once(s, idx)
382 except StopIteration:
--> 383 raise ValueError("No JSON object could be decoded")
384 return obj, end
ValueError: No JSON object could be decoded
On the other hand, everything works fine with the Twitter API... Why?
The Stack Exchange API always compresses its responses with GZIP, but urllib2 does not decompress them automatically, so json.loads() is being handed raw gzip bytes instead of JSON text.
This answer shows how to use the gzip module to handle the response.
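A minimal sketch of that approach, assuming Python 2 to match the traceback above (the URL is the one from the question; the "items"/"name" fields assume the usual Stack Exchange response wrapper):

import urllib2
import gzip
import json
from StringIO import StringIO

url = "http://api.stackexchange.com/2.1/tags?order=desc&sort=popular&site=quant&pagesize=100&page=1"

# urllib2 returns the compressed bytes untouched, so wrap them in a
# file-like object and let the gzip module decompress them before
# passing the result to json.loads().
raw = urllib2.urlopen(url).read()
data = json.loads(gzip.GzipFile(fileobj=StringIO(raw)).read())

print data["items"][0]["name"]  # assumes the standard "items" wrapper

Alternatively, the requests library decompresses gzipped responses transparently, which sidesteps the problem entirely.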