The code below parses JSON from the URL to retrieve 10 URLs and writes them to an output.txt file.
import json
import urllib.request
response = urllib.request.urlopen('https://json-test.com/test').read()
jsonResponse = json.loads(response.decode('utf-8'))

# Open the output file once and write each link on its own line
with open("C:\\Users\\test\\Desktop\\test\\output.txt", "a") as out:
    for child in jsonResponse['results']:
        print(child['content'], file=out)
Now that there are 10 links to CSV files in output.txt, I'm trying to figure out how to download and save all 10 files. I tried something like this, but it isn't working:
urllib.request.urlretrieve(['content'], "C:\\Users\\test\\Desktop\\test\\test1.csv")
Even if I get the above working, it only handles one file, and there are 10 file links in output.txt. Any ideas?
Here is an exhaustive guide on how to download files over HTTP.
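For a single file, the basic pattern is `urlretrieve(url, local_path)`. A minimal sketch (the URL here is a `data:` URL, a stand-in so the example runs without network access; in your case it would be one of the CSV links):

```python
import urllib.request

# urlretrieve fetches a URL and writes the response body to a local path.
# A data: URL is used as a placeholder so this runs without network access.
url = "data:text/plain,hello"
path, headers = urllib.request.urlretrieve(url, "downloaded.txt")
print(open(path).read())  # → hello
```

`urlretrieve` returns the local filename and the response headers; for a real HTTP link you would pass something like `"http://example.com/data.csv"` instead.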
If the text file contains one link per line, you can iterate through the lines like this:
import urllib.request

index = 0
with open('path/to/file.ext', 'r') as url_file:
    for line in url_file:
        url = line.strip()  # drop the trailing newline, which would break the request
        # ... optionally check here (e.g. with a regex) that url is actually valid
        urllib.request.urlretrieve(url, 'path/to/file' + str(index) + '.ext')
        index += 1
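Since the URLs come out of the JSON response in the first place, you could also skip the intermediate output.txt and download directly. A minimal sketch, assuming the same 'results'/'content' structure from the question (the endpoint is not reachable here, so the demo uses a literal string shaped like that response; the destination directory is a placeholder):

```python
import json
import os
import urllib.request

def extract_urls(json_text):
    """Pull the 'content' field from each entry under 'results'
    (the structure the question's endpoint appears to return)."""
    data = json.loads(json_text)
    return [child['content'] for child in data['results']]

def download_all(urls, dest_dir):
    """Save each URL as test0.csv, test1.csv, ... under dest_dir."""
    for i, url in enumerate(urls):
        urllib.request.urlretrieve(url, os.path.join(dest_dir, f"test{i}.csv"))

# Demonstrate extraction with a literal response shaped like the question's:
sample = ('{"results": [{"content": "http://example.com/a.csv"},'
          ' {"content": "http://example.com/b.csv"}]}')
print(extract_urls(sample))
# → ['http://example.com/a.csv', 'http://example.com/b.csv']
# With a live endpoint you would then call:
# download_all(extract_urls(response_text), r"C:\Users\test\Desktop\test")
```

This keeps the parsing and downloading in one pass, and the counter comes for free from `enumerate`.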