My input is a pandas DataFrame new_res with more than 6 million rows. My objective is to get a count of all unique rows. Here is new_res.head():
start_hex_id_res8 start_hex_id_res9 end_hex_id_res9 end_hex_id_res9 date is_weekday is_holiday starthour
0 882a100d23fffff 892a100d23bffff 892a100d237ffff 892a100d237ffff 2020-07-01 True False 0
1 882a100d23fffff 892a100d23bffff 892a100d237ffff 892a100d237ffff 2020-07-01 True False 0
2 882a1072c7fffff 892a1072c6bffff 892a1072187ffff 892a1072187ffff 2020-07-01 True False 0
3 882a1072c7fffff 892a1072c6bffff 892a1072187ffff 892a1072187ffff 2020-07-01 True False 0
4 882a100d09fffff 892a100d097ffff 892a100d09bffff 892a100d09bffff 2020-07-01 True False 0
and new_res.dtypes:

start_hex_id_res8 object
start_hex_id_res9 object
end_hex_id_res9 object
end_hex_id_res9 object
date object
is_weekday bool
is_holiday bool
starthour int64
I have tried:
agg = new_res.groupby(['start_hex_id_res8', 'start_hex_id_res9', 'end_hex_id_res9', 'end_hex_id_res9', 'date','is_weekday', 'is_holiday', 'starthour']).size().groupby(level=0).size()
but this throws an error:
ValueError: Grouper for 'end_hex_id_res9' not 1-dimensional
How should I interpret this error, and what is the correct way in pandas to create a new DataFrame that is a condensed version of new_res? The output should simply be a DataFrame with the same column names, plus a count column at the end holding the number of occurrences of each unique row.
Let's try:

g = new_res.astype(str)                        # cast the entire dataframe to str
g.groupby(list(g.columns)).ngroup().nunique()  # group by every column, label each group, and count the unique groups

This returns the number of unique rows as a single integer, not the condensed frame you described. Be aware that it will also hit the same ValueError as your attempt while two columns share the name end_hex_id_res9, so fix the duplicated column name first (see the sketch below).
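As for interpreting the error: pandas raises "Grouper for '<name>' not 1-dimensional" when a groupby key does not resolve to a single column. Your frame has two columns both named end_hex_id_res9 (the name appears twice in the header and in the dtypes listing), so new_res['end_hex_id_res9'] is a two-column DataFrame rather than a Series, and passing that name as a grouper fails. Below is a minimal sketch of a fix, assuming the second occurrence is an accidental duplicate; end_hex_id_res9_dup is a placeholder name, so rename it to whatever the column was really meant to be:

import pandas as pd

# Rebuild a small piece of the sample data from the question;
# note the deliberately duplicated column name.
new_res = pd.DataFrame(
    [['882a100d23fffff', '892a100d23bffff', '892a100d237ffff', '892a100d237ffff', '2020-07-01', True, False, 0],
     ['882a100d23fffff', '892a100d23bffff', '892a100d237ffff', '892a100d237ffff', '2020-07-01', True, False, 0],
     ['882a1072c7fffff', '892a1072c6bffff', '892a1072187ffff', '892a1072187ffff', '2020-07-01', True, False, 0]],
    columns=['start_hex_id_res8', 'start_hex_id_res9', 'end_hex_id_res9',
             'end_hex_id_res9', 'date', 'is_weekday', 'is_holiday', 'starthour'])

# Selecting the duplicated name yields a 2-D object, which is exactly
# what the "not 1-dimensional" grouper error complains about:
print(new_res['end_hex_id_res9'].shape)  # (3, 2) -- two columns, not one

# Give the duplicate a distinct name ('end_hex_id_res9_dup' is a placeholder):
new_res.columns = ['start_hex_id_res8', 'start_hex_id_res9',
                   'end_hex_id_res9', 'end_hex_id_res9_dup',
                   'date', 'is_weekday', 'is_holiday', 'starthour']

# With unique column names, group by every column and count the rows in
# each group: the result keeps the original columns and appends a
# 'count' column, which is the condensed frame you asked for.
condensed = (new_res
             .groupby(list(new_res.columns))
             .size()
             .reset_index(name='count'))
print(condensed)

On the three sample rows this yields two output rows, with counts 2 and 1. On pandas >= 1.1 you could instead call new_res.value_counts() after the rename, which does the same group-and-count in one step (sorted by count descending). Also note that groupby drops groups whose keys contain NaN by default, so pass dropna=False to groupby if such rows must be counted too.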