Tags: pandas, dataframe, dictionary, json-normalize

Pandas Dataframe splitting a column with dict values into columns


I am attempting to split a column in a pandas DataFrame containing lists of dictionaries into new columns. Using Splitting dictionary/list inside a Pandas Column into Separate Columns as a reference, things fail because some of the values are NaN: when those rows are encountered I get a "can't iterate over float" error, and if I fillna with None the error changes to a str-related one.

I first tried:

df_new = df.explode('freshness_grades')

df_new = pd.concat([df_new.drop('freshness_grades', axis=1), pd.DataFrame(df_new['freshness_grades'].tolist())], axis=1)

The idea was that explode would leave one dictionary per row instead of a list of dictionaries.

    _id  freshness_grades
0   57ea8d0d9c624c035f96f45e    [{'creation_date': '2019-04-20T06:02:02.865000+00:00', 'end_date': '2015-07-23T18:43:00+00:00', 'grade': 'A', 'start_date': '2015-03-05T01:54:47+00:00'}, {'creation_date': '2019-04-20T06:02:02.865000+00:00', 'end_date': '2015-08-22T18:43:00+00:00', 'grade': 'B', 'start_date': '2015-07-23T18:43:00+00:00'}, {'creation_date': '2019-04-20T06:02:02.865000+00:00', 'end_date': '2015-10-21T18:43:00+00:00', 'grade': 'C', 'start_date': '2015-08-22T18:43:00+00:00'}, {'creation_date': '2019-04-20T06:02:02.865000+00:00', 'end_date': '2016-02-02T12:12:00+00:00', 'grade': 'D', 'start_date': '2015-10-21T18:43:00+00:00'}, {'creation_date': '2019-04-20T06:02:02.865000+00:00', 'end_date': '2016-07-22T18:43:00+00:00', 'grade': 'E', 'start_date': '2016-02-02T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:02.865000+00:00', 'grade': 'F', 'start_date': '2016-07-22T18:43:00+00:00'}]
1   57ea8d0e9c624c035f96f460    [{'creation_date': '2019-06-25T10:54:40.387000+00:00', 'end_date': '2015-07-20T14:04:00+00:00', 'grade': 'A', 'start_date': '2015-07-14T08:48:49+00:00'}, {'creation_date': '2019-06-25T10:54:40.387000+00:00', 'end_date': '2015-08-19T14:04:00+00:00', 'grade': 'B', 'start_date': '2015-07-20T14:04:00+00:00'}, {'creation_date': '2019-06-25T10:54:40.387000+00:00', 'end_date': '2015-10-18T14:04:00+00:00', 'grade': 'C', 'start_date': '2015-08-19T14:04:00+00:00'}, {'creation_date': '2019-06-25T10:54:40.387000+00:00', 'end_date': '2016-02-02T12:12:00+00:00', 'grade': 'D', 'start_date': '2015-10-18T14:04:00+00:00'}, {'creation_date': '2019-06-25T10:54:40.387000+00:00', 'end_date': '2016-07-19T14:04:00+00:00', 'grade': 'E', 'start_date': '2016-02-02T12:12:00+00:00'}, {'creation_date': '2019-06-25T10:54:40.387000+00:00', 'grade': 'F', 'start_date': '2016-07-19T14:04:00+00:00'}]
2   57ea8d0e9c624c035f96f462    [{'creation_date': '2019-04-20T06:02:03.600000+00:00', 'end_date': '2015-09-29T09:46:00+00:00', 'grade': 'A', 'start_date': '2015-07-27T15:21:32+00:00'}, {'creation_date': '2019-04-20T06:02:03.600000+00:00', 'end_date': '2015-10-29T09:46:00+00:00', 'grade': 'B', 'start_date': '2015-09-29T09:46:00+00:00'}, {'creation_date': '2019-04-20T06:02:03.600000+00:00', 'end_date': '2015-12-04T12:12:00+00:00', 'grade': 'C', 'start_date': '2015-10-29T09:46:00+00:00'}, {'creation_date': '2019-04-20T06:02:03.600000+00:00', 'end_date': '2016-02-02T12:12:00+00:00', 'grade': 'D', 'start_date': '2015-12-04T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:03.600000+00:00', 'end_date': '2016-09-28T09:46:00+00:00', 'grade': 'E', 'start_date': '2016-02-02T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:03.600000+00:00', 'grade': 'F', 'start_date': '2016-09-28T09:46:00+00:00'}]
3   57ea8d0f9c624c035f96f466    [{'creation_date': '2019-04-20T06:02:04.305000+00:00', 'end_date': '2015-09-29T09:46:00+00:00', 'grade': 'A', 'start_date': '2015-09-09T13:20:14+00:00'}, {'creation_date': '2019-04-20T06:02:04.305000+00:00', 'end_date': '2015-10-29T09:46:00+00:00', 'grade': 'B', 'start_date': '2015-09-29T09:46:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.305000+00:00', 'end_date': '2015-12-04T12:12:00+00:00', 'grade': 'C', 'start_date': '2015-10-29T09:46:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.305000+00:00', 'end_date': '2016-02-02T12:12:00+00:00', 'grade': 'D', 'start_date': '2015-12-04T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.305000+00:00', 'end_date': '2016-09-28T09:46:00+00:00', 'grade': 'E', 'start_date': '2016-02-02T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.305000+00:00', 'grade': 'F', 'start_date': '2016-09-28T09:46:00+00:00'}]
4   57ea8d109c624c035f96f468    [{'creation_date': '2019-04-20T06:02:04.673000+00:00', 'end_date': '2015-11-04T12:12:00+00:00', 'grade': 'A', 'start_date': '2015-10-30T07:43:46+00:00'}, {'creation_date': '2019-04-20T06:02:04.673000+00:00', 'end_date': '2015-11-11T12:12:00+00:00', 'grade': 'B', 'start_date': '2015-11-04T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.673000+00:00', 'end_date': '2015-12-04T12:12:00+00:00', 'grade': 'C', 'start_date': '2015-11-11T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.673000+00:00', 'end_date': '2016-02-02T12:12:00+00:00', 'grade': 'D', 'start_date': '2015-12-04T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.673000+00:00', 'end_date': '2016-11-03T12:12:00+00:00', 'grade': 'E', 'start_date': '2016-02-02T12:12:00+00:00'}, {'creation_date': '2019-04-20T06:02:04.673000+00:00', 'grade': 'F', 'start_date': '2016-11-03T12:12:00+00:00'}]
5   5f1eb63dbed8bd4f99e2a280    NaN

Using the first row as an example, I'm looking to achieve:

   _id                        creation_date                     end_date                   grade     start_date
0 57ea8d0d9c624c035f96f45e    2019-04-20T06:02:02.865000+00:00  2015-07-23T18:43:00+00:00   A        2015-03-05T01:54:47+00:00
0 57ea8d0d9c624c035f96f45e    2019-04-20T06:02:02.865000+00:00  2015-08-22T18:43:00+00:00   B        2015-07-23T18:43:00+00:00
...

I have begun with explode, and that step works perfectly.

However, I have not tried adding reset_index(). It's the pd.concat() that is failing, and I suspect it is related either to the NaN rows or to the fact that each list held multiple dictionaries before the explode(), i.e. [{}, {}, {}].


Solution

    # explode the list so each dictionary gets its own row
    df = df.explode('freshness_grades', ignore_index=True)

    # now fill the NaN rows with an empty dict
    df['freshness_grades'] = df['freshness_grades'].fillna({i: {} for i in df.index})

    # then normalize the column and join it back onto the frame
    df = df.join(pd.json_normalize(df.pop('freshness_grades')))


    Output

                            _id                     creation_date                   end_date grade                 start_date
    0  57ea8d0d9c624c035f96f45e  2019-04-20T06:02:02.865000+00:00  2015-07-23T18:43:00+00:00     A  2015-03-05T01:54:47+00:00
    1  57ea8d0d9c624c035f96f45e  2019-04-20T06:02:02.865000+00:00  2015-08-22T18:43:00+00:00     B  2015-07-23T18:43:00+00:00
    2  57ea8d0d9c624c035f96f45e  2019-04-20T06:02:02.865000+00:00  2015-10-21T18:43:00+00:00     C  2015-08-22T18:43:00+00:00
    3  57ea8d0d9c624c035f96f45e  2019-04-20T06:02:02.865000+00:00  2016-02-02T12:12:00+00:00     D  2015-10-21T18:43:00+00:00
    4  57ea8d0d9c624c035f96f45e  2019-04-20T06:02:02.865000+00:00  2016-07-22T18:43:00+00:00     E  2016-02-02T12:12:00+00:00
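
    Putting the three steps together, here is a minimal self-contained sketch of the approach on a tiny sample frame (the `_id` values and the reduced two-key dictionaries are placeholder data modelled on the question):

    ```python
    import pandas as pd

    # Sample frame mirroring the question: a list-of-dicts column plus one NaN row.
    df = pd.DataFrame({
        '_id': ['57ea8d0d9c624c035f96f45e', '5f1eb63dbed8bd4f99e2a280'],
        'freshness_grades': [
            [{'grade': 'A', 'start_date': '2015-03-05T01:54:47+00:00'},
             {'grade': 'B', 'start_date': '2015-07-23T18:43:00+00:00'}],
            float('nan'),
        ],
    })

    # 1. explode the list: each dict gets its own row, NaN rows stay NaN
    df = df.explode('freshness_grades', ignore_index=True)

    # 2. replace NaN with an empty dict so every row is normalizable;
    #    fillna interprets a dict argument as an index -> value mapping,
    #    which is why {} cannot be passed directly as the fill value
    df['freshness_grades'] = df['freshness_grades'].fillna({i: {} for i in df.index})

    # 3. flatten the dicts into columns and join them back onto the frame
    df = df.join(pd.json_normalize(df.pop('freshness_grades')))

    print(df)
    ```

    The empty-dict rows come out as all-NaN rows in the new columns, so the original NaN row (like `_id` 5f1eb63dbed8bd4f99e2a280 above) is preserved rather than dropped.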