Tags: python, pandas, dataframe, group-by, apply

Pandas: How to improve performance when comparing rows inside groups


I have written a Python program to compare rows inside groups, but its performance is poor. The data come from a Change Data Capture system. For every change there is a Sequence id and an Operation number. For an Update operation there are two rows: one with Operation=3 (previous value) and one with Operation=4 (new value). Columns with no change are set to NULL, but a value can also change from "Somevalue" to NULL, so I need to compare the Operation 3 and 4 rows to know whether a NULL means the value really became NULL or that there was no change.

This is an example of the source data:

[image: Source data]

This is the output required:

[image: Desired outcome]
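
(The screenshots from the original post are not reproduced here. As a reconstruction from the mock-up data used in the code below, print(df_update) shows the source layout, with one Operation 3 row followed by one Operation 4 row per change sequence; the desired outcome matches the table shown under Output in the Solution below.)

       _Change-Sequence  _Operation  Dossier_x IsCovidPositiv Status
    0                 1           3          1            Yes    NaN
    1                 1           4          1             No     KO
    2                 2           3          2             No    NaN
    3                 2           4          2            NaN    NaN
    4                 3           3          3            Yes    NaN
    5                 3           4          3            Yes    NaN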

Below is my code with the same mock-up data:

import pandas as pd
import numpy as np

# Mock-up of the CDC extract: one Operation 3 (previous value) row and one
# Operation 4 (new value) row per change sequence.
d = {'_Change-Sequence': [1, 1, 2, 2, 3, 3],
     '_Operation': [3, 4, 3, 4, 3, 4],
     'Dossier_x': [1, 1, 2, 2, 3, 3],
     'IsCovidPositiv': ['Yes', 'No', 'No', np.nan, 'Yes', 'Yes'],
     'Status': [np.nan, 'KO', np.nan, np.nan, np.nan, np.nan]
    }
df_update = pd.DataFrame(data=d)
print(df_update)

# For every data column (not a key, not a meta column starting with "_"),
# compare each row with the previous row of the same change sequence and
# keep the value only if it changed.
for column in [column for column in df_update.columns
               if column not in {'index', 'Dossier_x'} and not column.startswith('_')]:
    column_previous_name = column + "_Previous|"
    df_update[column_previous_name] = df_update.groupby('_Change-Sequence')[column].shift()
    df_update[column] = df_update.apply(
        lambda x: x[column] if x[column_previous_name] != x[column] else np.nan, axis=1)
    df_update.drop(column_previous_name, axis=1, inplace=True)

# Keep only the Operation 4 (new value) rows.
df_update = df_update[df_update['_Operation'] == 4]

df_update

Online version of the code

The output is as required: only one row per group (same Change Sequence), with, for each non-meta, non-key column (i.e. every column except those starting with "_", the index and "Dossier_x"), the new value if it changed and NaN if it did not. I need to do this for every column; I don't know the column names in advance.

Regards

Vincent

The program works (see the question), but the performance is bad.


Solution

  • If I understood your logic correctly, you could simplify your code to:

    cols = [column for column in df_update.columns
            if column not in {'index', 'Dossier_x'} and not column.startswith('_')]
    
    # get shifted values
    tmp = df_update.groupby('_Change-Sequence')[cols].shift()
    
    # mask equal values and slice
    out = df_update.mask(df_update.eq(tmp, axis=0)).loc[df_update['_Operation'].eq(4)]
    

    Output:

       _Change-Sequence  _Operation  Dossier_x IsCovidPositiv Status
    1                 1           4          1             No     KO
    3                 2           4          2            NaN    NaN
    5                 3           4          3            NaN    NaN
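
As a follow-up (not part of the original answer): the gain comes from doing one grouped shift for all columns and a vectorised element-wise comparison, instead of calling apply(axis=1), a Python-level loop over every row, once per column. Below is a minimal sketch to check that both approaches return the same values and to compare their run time; the tiled test frame and its size n are arbitrary choices for illustration, not part of the question or answer.

    import time

    import numpy as np
    import pandas as pd

    # Rebuild the question's mock-up and tile it into a larger test frame.
    # n (the number of copies) is an arbitrary choice for this comparison.
    n = 2_000
    base = pd.DataFrame({
        '_Change-Sequence': [1, 1, 2, 2, 3, 3],
        '_Operation': [3, 4, 3, 4, 3, 4],
        'Dossier_x': [1, 1, 2, 2, 3, 3],
        'IsCovidPositiv': ['Yes', 'No', 'No', np.nan, 'Yes', 'Yes'],
        'Status': [np.nan, 'KO', np.nan, np.nan, np.nan, np.nan],
    })
    big = pd.concat([base] * n, ignore_index=True)
    # Give every copy its own change-sequence ids so the groups stay separate.
    big['_Change-Sequence'] += 3 * (np.arange(len(big)) // len(base))

    def per_column_apply(df):
        """Approach from the question: grouped shift + row-wise apply, one column at a time."""
        df = df.copy()
        for column in [c for c in df.columns
                       if c not in {'index', 'Dossier_x'} and not c.startswith('_')]:
            previous = column + "_Previous|"
            df[previous] = df.groupby('_Change-Sequence')[column].shift()
            df[column] = df.apply(
                lambda x: x[column] if x[previous] != x[column] else np.nan, axis=1)
            df.drop(previous, axis=1, inplace=True)
        return df[df['_Operation'] == 4]

    def grouped_shift_mask(df):
        """Approach from the answer: one grouped shift for all columns, then mask equal values."""
        cols = [c for c in df.columns
                if c not in {'index', 'Dossier_x'} and not c.startswith('_')]
        tmp = df.groupby('_Change-Sequence')[cols].shift()
        return df.mask(df.eq(tmp)).loc[df['_Operation'].eq(4)]

    t0 = time.perf_counter()
    slow = per_column_apply(big)
    t1 = time.perf_counter()
    fast = grouped_shift_mask(big)
    t2 = time.perf_counter()

    # Same values (NaN wherever nothing changed); raises if they differ.
    pd.testing.assert_frame_equal(slow, fast, check_dtype=False)
    print(f"per-column apply: {t1 - t0:.2f}s   grouped shift + mask: {t2 - t1:.3f}s")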