
How to iterate through URLs in a CSV file and save each response (Python)


I have a file (urls.csv) with 1 Million+ Urls. Each row is a new url like:

  1. https://example.com/1
  2. https://example.com/2

And so on....

I would like to fetch the JSON that each of these URLs returns and save it as a separate .json file per URL, with the file names in sequential order: 1, 2, 3, ..., n.

Here's what I have so far:

import requests
import csv

url = []

with open('urls.csv') as csvfile:    
    csvReader = csv.reader(csvfile)    
    for row in csvReader:        
        url.append(row[0])

headers = {'Accept': 'application/json'}

response = requests.get(url, headers=headers)

with open('outputfile.json', 'wb') as outf:
    outf.write(response.content)

How should I go about fixing this?


Solution

  • Try this:

    import requests
    import csv
    
    urls = []
    
    with open('urls.csv') as csvfile:    
        csvReader = csv.reader(csvfile)    
        for row in csvReader:        
            urls.append(row[0])
    
    headers = {'Accept': 'application/json'}
    
    for url in urls:
        response = requests.get(url, headers=headers)
        filename = url.split('/')[-1]
        with open(f'{filename}.json', 'wb') as outf:
            outf.write(response.content)
    

    So if your 3rd URL is https://example.com/3, the code will save that response to a file named 3.json.
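
  • Note that the code above names each file after the last path segment of the URL, while the question asks for filenames in sequential order regardless of the URL. A variant using enumerate handles that, and with a million-plus URLs it also helps to stream the CSV rows instead of building a huge list first and to reuse one requests.Session so connections are pooled. This is a sketch, assuming requests is installed and each CSV row holds a single URL in its first column:

    ```python
    import csv
    import requests

    def download_all(csv_path, headers=None):
        """Fetch every URL in csv_path, saving response i as '{i}.json'."""
        headers = headers or {'Accept': 'application/json'}
        # One Session reuses TCP connections across all requests,
        # which matters at the scale of 1M+ URLs.
        with requests.Session() as session, open(csv_path, newline='') as csvfile:
            # enumerate(..., start=1) yields the sequential index 1, 2, 3, ...
            # that the question asks for, without loading all rows into memory.
            for i, row in enumerate(csv.reader(csvfile), start=1):
                response = session.get(row[0], headers=headers)
                with open(f'{i}.json', 'wb') as outf:
                    outf.write(response.content)
    ```

    You would likely also want to check `response.status_code` (or call `response.raise_for_status()`) before writing, so a failed request doesn't save an error page as a .json file.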