I'm working with JSON data from an API response. I'm using Python to send a GET request for each address and want to capture the JSON responses and convert them into a DataFrame.
Currently I capture the responses in a list. I can successfully use json_normalize on r.json() or on sample_list[0], but I cannot normalize the entire list. I'm trying to avoid creating and appending a DataFrame inside the loop, for performance.
import json
import requests  # the original snippet called this through an alias (re.get)

# sample_df and url are defined earlier in the script
sample_list = []
for index, row in sample_df.iterrows():
    # Build the single-record payload the geocoding endpoint expects
    sample_address = json.dumps(
        {
            "records": [
                {
                    "attributes": {
                        "OBJECTID": row['OBJECTID'],
                        "Address": row['Address'],
                        "City": row['City'],
                        "Region": row['Region'],
                        "Postal": row['Postal']
                    }
                }
            ]
        }
    )
    r = requests.get(url, params={'addresses': sample_address, 'f': 'pjson'}, verify=False)
    # Each response's 'locations' value is a list of candidate-match dicts
    sample_list.append(r.json()['locations'])
### The output of r.json() for one address
{'locations': [{'address': '2600 Benjamin Franklin Pkwy, Philadelphia, Pennsylvania, 19130',..., 'score': 100}], 'spatialReference': {'latestWkid': 4326, 'wkid': 4326}}
### sample_list after collecting multiple r.json() outputs
[[{'address': '520 Chestnut St, Philadelphia, Pennsylvania, 19106',
'attributes': {'AddNum': '520',
...},
'location': {'x': -75.14971142634045, 'y': 39.94905972672609},
'score': 100}],
[{'address': '2600 Benjamin Franklin Pkwy, Philadelphia, Pennsylvania, 19130',
'attributes': {'AddNum': '2600',
...},
'location': {'x': -75.17923104567541, 'y': 39.96474536190999},
'score': 100}]]
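Note that because each appended r.json()['locations'] is itself a list, sample_list ends up as a list of lists rather than a flat list of record dictionaries, which is why json_normalize works on sample_list[0] but not on the whole list. For reference, a minimal sketch of normalizing everything in one call after the loop, assuming the structure shown above (the flat_records name is just illustrative):

    import pandas as pd

    # Flatten the list of lists into a single list of location dicts,
    # then normalize once -- no per-iteration DataFrames.
    flat_records = [record for locations in sample_list for record in locations]
    df = pd.json_normalize(flat_records)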
In sample_list.append(r.json()['locations']), just try json_normalize instead of the raw r.json() output, i.e. append the normalized result of each response as you collect it.
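A minimal sketch of that suggestion (the pd.concat step at the end is an added assumption, not part of the answer above):

    import pandas as pd

    # Inside the loop, append the normalized response instead of the raw JSON:
    sample_list.append(pd.json_normalize(r.json()['locations']))

    # After the loop, combine the per-response frames into one DataFrame:
    result_df = pd.concat(sample_list, ignore_index=True)

This does build a small DataFrame on every iteration; if that overhead matters, the flatten-once sketch shown earlier keeps the loop unchanged and calls json_normalize only once at the end.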