Identify duplicated groups in pandas


Assume I have the following data

import pandas as pd

df = pd.DataFrame({
    'task_id': [1, 1, 1, 1, 2, 2, 2, 2],
    'job_id': [1, 1, 2, 2, 0, 0, 1, 1],
    'filename': ['filename1', 'filename2', 'filename33', 'filename2342',
                 'filename1', 'filename2', 'filename33', 'filename5']
})
   task_id  job_id      filename
0        1       1     filename1
1        1       1     filename2
2        1       2    filename33
3        1       2  filename2342
4        2       0     filename1
5        2       0     filename2
6        2       1    filename33
7        2       1     filename5

I want to identify the (task_id, job_id) pairs that contain exactly the same filenames (no more, no fewer). In my case the result should be [((1, 1), (2, 0))], since both groups contain exactly (filename1, filename2).

I guess I have to group and then use frozenset, like this:

df.groupby(['task_id', 'job_id'])['filename'].apply(lambda x: frozenset(x))
task_id  job_id
1        1              (filename1, filename2)
         2          (filename33, filename2342)
2        0              (filename1, filename2)
         1             (filename33, filename5)
Name: filename, dtype: object

Solution

# one frozenset of filenames per (task_id, job_id), keeping only sets that occur more than once
tmp = (df.groupby(['task_id', 'job_id'])['filename']
         .agg(frozenset)[lambda x: x.duplicated(keep=False)])

# group the (task_id, job_id) index tuples by their shared filename set
out = pd.DataFrame(tmp.index).groupby(tmp.values)[0].agg(tuple)

out

(filename2, filename1)    ((1, 1), (2, 0))
Name: 0, dtype: object
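
For reference, below is a minimal, self-contained sketch of the same idea that works directly on the grouped Series instead of building a DataFrame from the MultiIndex. The variable names (sets, dupes, result) and the printed result are illustrative and assume the example data from the question.

import pandas as pd

df = pd.DataFrame({
    'task_id': [1, 1, 1, 1, 2, 2, 2, 2],
    'job_id': [1, 1, 2, 2, 0, 0, 1, 1],
    'filename': ['filename1', 'filename2', 'filename33', 'filename2342',
                 'filename1', 'filename2', 'filename33', 'filename5']
})

# One frozenset of filenames per (task_id, job_id) group; frozensets are
# hashable and order-insensitive, so identical groups compare equal.
sets = df.groupby(['task_id', 'job_id'])['filename'].agg(frozenset)

# Keep only the groups whose filename set appears more than once
dupes = sets[sets.duplicated(keep=False)]

# Collect the (task_id, job_id) index tuples that share each identical set
result = dupes.index.to_series().groupby(dupes).agg(tuple).tolist()
print(result)  # expected: [((1, 1), (2, 0))]

Grouping by the frozenset values is what enforces the "exactly the same filenames" requirement: a group with extra or missing filenames produces a different frozenset and therefore lands under a different key.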