I am new to Python, and for one of my projects I need to convert CSV to nested JSON. Searching the net, I found that pandas is helpful in this case.
I followed the approach given in Convert CSV Data to Nested JSON in Python, but I am getting a KeyError exception: KeyError: 'state'
df info
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 3 columns):
country 4 non-null object
state 4 non-null object
city 4 non-null object
dtypes: object(3)
memory usage: 176.0+ bytes
None
Traceback (most recent call last):
File "csvToJson.py", line 31, in <module>
grouped = df.groupby(['country', 'state'])
File "/home/simarpreet/Envs/j/lib/python3.7/site-packages/pandas/core/generic.py", line 7632, in groupby
observed=observed, **kwargs)
File "/home/simarpreet/Envs/j/lib/python3.7/site-packages/pandas/core/groupby/groupby.py", line 2110, in groupby
return klass(obj, by, **kwds)
File "/home/simarpreet/Envs/j/lib/python3.7/site-packages/pandas/core/groupby/groupby.py", line 360, in __init__
mutated=self.mutated)
File "/home/simarpreet/Envs/j/lib/python3.7/site-packages/pandas/core/groupby/grouper.py", line 578, in _get_grouper
raise KeyError(gpr)
KeyError: 'state'
Input csv:
country, state, city
India, Delhi, Tilak nagar
India, Mumbai, Bandra
Australia, Queensland, Gold Coast
US, California, Los Angeles
My Code:
import json
import pandas as pd

csvFilePath = "/home/simarpreet/sampleCsv.csv"
jsonFilePath = "/home/simarpreet/sampleJson.json"

df = pd.read_csv(csvFilePath, encoding='utf-8-sig')
print("df info")
print(df.info())

finalList = []
grouped = df.groupby(['country', 'state'])
for key, value in grouped:
    dictionary = {}
    j = grouped.get_group(key).reset_index(drop=True)
    dictionary['country'] = j.at[0, 'country']
    dictionary['state'] = j.at[0, 'state']
    dictList = []
    for i in j.index:
        # build a fresh dict per row, otherwise every entry points at the same object
        anotherDict = {'city': j.at[i, 'city']}
        dictList.append(anotherDict)
    dictionary['children'] = dictList
    finalList.append(dictionary)

with open(jsonFilePath, 'w') as jsonFile:
    json.dump(finalList, jsonFile)
The problem is with your CSV file: there are leading whitespaces in the column names, which is why the KeyError is raised.
As pointed out by @cs95, you can strip them:
df.columns = df.columns.str.strip()
Or you can let read_csv handle the spaces by treating the delimiter as a regex:
pd.read_csv(csvFilePath, encoding='utf-8-sig', sep=r'\s*,\s*', engine='python')
PS: a bad way to deal with it would be to include the space in the key itself:
grouped = df.groupby(['country', ' state'])
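A third option, as a minimal sketch (assuming the only stray whitespace sits right after each comma, as in the sample CSV), is read_csv's skipinitialspace parameter, which drops blanks immediately following the delimiter:

```python
import io

import pandas as pd

# Inline sample with spaces after the commas, as in the question's CSV
raw = (
    "country, state, city\n"
    "India, Delhi, Tilak nagar\n"
    "India, Mumbai, Bandra\n"
)

# skipinitialspace=True removes the blank after each delimiter,
# so the columns come out as 'country', 'state', 'city'
df = pd.read_csv(io.StringIO(raw), skipinitialspace=True)
print(list(df.columns))

grouped = df.groupby(['country', 'state'])  # no KeyError now
```

Note this only covers spaces that follow the separator; trailing whitespace before a comma would still need str.strip().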