API: https://power.larc.nasa.gov/docs/tutorials/service-data-request/api/
This API provides a set of parameters for a single coordinate (latitude and longitude) point. I want to run this API call for every latitude and longitude in my dataset instead of just one. The latitude and longitude data are in my df dataframe like this:
station | long | lat |
---|---|---|
Station 1 | 50.80 | 60.80 |
Station 2 | 45 | 47 |
How do I get JSON or CSV files giving these parameters for each latitude and longitude?
A single location point is defined in the API command on the website, but I want to get the parameters for all location points in my df dataframe. I have 123 locations as lat/long pairs, so I can't write out the locations one by one; I need to take them from df inside a for loop.
I tried the code below, but I cannot define the locations in the for loop.
lat = df['lat'].values.tolist()
long = df['long'].values.tolist()

import os, json, requests

latitude = lat
longitude = long
output = r"C:/......."
base_url = r"https://power.larc.nasa.gov/api/temporal/daily/point?parameters=T2M,T2MDEW,T2MWET,TS,T2M_RANGE,T2M_MAX,T2M_MIN&community=RE&longitude={longitude}&latitude={latitude}&start=20150101&end=20150305&format=JSON"

for latitude, longitude in locations:
    api_request_url = base_url.format(longitude=longitude, latitude=latitude)
    response = requests.get(url=api_request_url, verify=True, timeout=30.00)
    content = json.loads(response.content.decode('utf-8'))
    filename = response.headers['content-disposition'].split('filename=')[1]
    filepath = os.path.join(output, filename)
    with open(filepath, 'w') as file_object:
        json.dump(content, file_object)
We can append the contents in the for loop and then save the whole list of contents (outside of the for loop) as a single JSON file. Also, the longitude and latitude coordinates must be correct (i.e. they can't be outside of the possible range of actual coordinate values), otherwise the request will return <Response [422]> (which I got from using their example in the documentation...), so if you get that error, double-check that the coordinates actually exist (on this planet).
Here I used a small sample dataset of lat/long coordinates as an example:
import pandas as pd
import os, json, requests

df = pd.DataFrame({'lat': [32.929, 20.029, 42.011], 'long': [5.770, 15.770, 10.970]})
lat = df['lat'].values.tolist()
long = df['long'].values.tolist()

output = "/content/drive/My Drive/Colab Notebooks/DATA_FOLDERS/JSON/"
base_url = r"https://power.larc.nasa.gov/api/temporal/daily/point?parameters=T2M,T2MDEW,T2MWET,TS,T2M_RANGE,T2M_MAX,T2M_MIN&community=RE&longitude={longitude}&latitude={latitude}&start=20150101&end=20150305&format=JSON"

contents = []
for latitude, longitude in zip(lat, long):
    api_request_url = base_url.format(longitude=longitude, latitude=latitude)
    response = requests.get(url=api_request_url, verify=True, timeout=30.00)
    content = json.loads(response.content.decode('utf-8'))
    contents.append(content)
    filename = response.headers['content-disposition'].split('filename=')[1]

# save the whole list once, outside the loop, under the last reported filename
filepath = os.path.join(output, filename)
with open(filepath, 'w') as file_object:
    json.dump(contents, file_object)
The saved file then contains the list of responses for all three coordinate pairs.
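Since an invalid coordinate pair comes back as a 422 rather than data (as noted above), it can also help to check the status code before parsing. A minimal sketch of that check, reusing lat, long and base_url from the snippet above:
import json, requests

contents = []
for latitude, longitude in zip(lat, long):
    api_request_url = base_url.format(longitude=longitude, latitude=latitude)
    response = requests.get(url=api_request_url, verify=True, timeout=30.00)
    # skip and report any coordinate pair the API rejects instead of crashing on json.loads
    if response.status_code != 200:
        print('Skipping ({}, {}): HTTP {}'.format(latitude, longitude, response.status_code))
        continue
    contents.append(json.loads(response.content.decode('utf-8')))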
If we want each set of longitude and latitude coordinates saved as an individual JSON file, then we can save the content inside the for loop:
for latitude, longitude in zip(lat, long):
    api_request_url = base_url.format(longitude=longitude, latitude=latitude)
    response = requests.get(url=api_request_url)
    content = json.loads(response.content.decode('utf-8'))
    filename = response.headers['content-disposition'].split('filename=')[1]
    filepath = os.path.join(output, filename)
    with open(filepath, 'w') as file_object:
        json.dump(content, file_object)
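The question also asks about CSV output. One option (a sketch, assuming the daily values are nested under properties -> parameter in the returned JSON, as in the POWER docs examples) is to flatten each response into a pandas DataFrame and write it out as CSV instead of dumping the raw JSON:
import os, json
import pandas as pd
import requests

for latitude, longitude in zip(lat, long):
    api_request_url = base_url.format(longitude=longitude, latitude=latitude)
    response = requests.get(url=api_request_url)
    content = json.loads(response.content.decode('utf-8'))
    # assumed structure: content['properties']['parameter'] maps each parameter
    # name (T2M, TS, ...) to a {date: value} dictionary
    parameters = content['properties']['parameter']
    df_out = pd.DataFrame(parameters)   # rows = dates, columns = parameters
    df_out.index.name = 'date'
    csv_name = 'power_{}_{}.csv'.format(latitude, longitude)   # hypothetical naming scheme
    df_out.to_csv(os.path.join(output, csv_name))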
If we want to save individual files but the filenames conflict (i.e. files with the same name are being overwritten), then we can append an index corresponding to the order in which the files are saved (e.g. '..._1.json', '..._2.json', ...):
index = 0
for latitude, longitude in zip(lat, long):
    index += 1
    api_request_url = base_url.format(longitude=longitude, latitude=latitude)
    response = requests.get(url=api_request_url)
    content = json.loads(response.content.decode('utf-8'))
    filename = response.headers['content-disposition'].split('filename=')[1]
    filename = filename[:filename.find('.json')] + '_' + str(index) + '.json'
    filepath = os.path.join(output, filename)   # rebuild the path with the indexed filename
    with open(filepath, 'w') as file_object:
        json.dump(content, file_object)
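As a small variation (a sketch, assuming the dataframe still has the station column from the question's table; the sample df above only has lat and long), enumerate can replace the manual counter and the station names can drive the filenames directly:
import os, json, requests

for index, row in enumerate(df.itertuples(), start=1):
    api_request_url = base_url.format(longitude=row.long, latitude=row.lat)
    response = requests.get(url=api_request_url)
    content = json.loads(response.content.decode('utf-8'))
    # e.g. 'Station 1_1.json', 'Station 2_2.json', ... ('station' column assumed from the question)
    filename = '{}_{}.json'.format(row.station, index)
    filepath = os.path.join(output, filename)
    with open(filepath, 'w') as file_object:
        json.dump(content, file_object)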