Tags: python, pandas, dataframe, timeline

How to calculate the Pearson correlation between two pandas timeline vectors


I have a database of users' posts from a social network. Using a Pandas DataFrame, I counted the monthly number of posts for each user, which resulted in a two-column table per user containing the month and the number of posts. I want to calculate the correlation between the monthly counts of different users, knowing that the monthly timelines differ between any two users (with some overlapping months).

This is the code for creating a monthly timeline table (agg)

import pandas as pd

# Create an empty dataframe
df = pd.DataFrame()
# Create a column from the datetime variable
df['datetime'] = date_list
# Convert that column into a datetime datatype
df['datetime'] = pd.to_datetime(df['datetime'])
# Add the post counts
df['score'] = count
# Set the datetime column as the index
df.index = df['datetime']
# This is the table containing the post count for each month
agg = df['score'].resample('M').sum().to_frame()
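
For completeness, here is a minimal sketch of the same aggregation starting directly from raw post timestamps (the list name post_dates is hypothetical; in my data, date_list and count already hold the dates and per-date counts):

import pandas as pd

# Hypothetical raw data: one timestamp per post for a single user
post_dates = pd.to_datetime(['2018-04-03', '2018-04-17', '2018-05-02', '2018-05-21'])

# Each post contributes 1; summing per month-end gives the monthly post count
posts = pd.Series(1, index=post_dates)
agg = posts.resample('M').sum().to_frame(name='score')
print(agg)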

So basically I have to apply a correlation function to two "agg" variables, but I couldn't find an intuitive way to do it. Here are two examples of the agg variable belonging to two different users:

First column: month, second column: number of posts.

User A:
2018-04-30     39
2018-05-31     41
2018-06-30     19
2018-07-31     46
2018-08-31     61
2018-09-30     57
2018-10-31     33
2018-11-30     18

User B:
2017-11-30      0
2017-12-31      3
2018-01-31      0
2018-02-28      0
2018-03-31      22
2018-04-30      3
2018-05-31      11

Solution

  • Here is a solution to calculate the Pearson correlation:

    import pandas as pd
    from io import StringIO
    data = """    
    datetime     score 
    2018-04-30     39
    2018-05-31     41
    2018-06-30     19
    2018-07-31     46
    2018-08-31     61
    2018-09-30     57
    2018-10-31     33
    2018-11-30     18
    """
    datb = """
    datetime      score 
    2017-11-30      0
    2017-12-31      3
    2018-01-31      0
    2018-02-28      0
    2018-03-31      22
    2018-04-30      3
    2018-05-31      11
    """
    dfa = pd.read_csv(StringIO(data), sep=r'\s+')
    dfb = pd.read_csv(StringIO(datb), sep=r'\s+')
    dfa['datetime'] = pd.to_datetime(dfa['datetime'])
    dfb['datetime'] = pd.to_datetime(dfb['datetime'])
    dfa.index = dfa['datetime']
    dfb.index = dfb['datetime']
    
    agga = dfa['score'].resample('M').sum().to_frame()
    aggb = dfb['score'].resample('M').sum().to_frame()
    print(agga,aggb)
    

    # Intersection of the two dataframes on the datetime index (inner merge keeps only the shared months)
    inter = agga.merge(aggb, on='datetime')
    print(inter)
    # Pearson correlation between the two users' monthly counts (merge suffixes the columns with _x/_y)
    result = inter['score_x'].corr(inter['score_y'])
    print(result)
    

     agga
               score
    datetime         
    2018-04-30     39
    2018-05-31     41
    2018-06-30     19
    2018-07-31     46
    2018-08-31     61
    2018-09-30     57
    2018-10-31     33
    2018-11-30     18
    
     aggb
                 score
    datetime         
    2017-11-30      0
    2017-12-31      3
    2018-01-31      0
    2018-02-28      0
    2018-03-31     22
    2018-04-30      3
    2018-05-31     11
    
     inter
                score_x  score_y
    datetime                    
    2018-04-30       39        3
    2018-05-31       41       11
    
     result
    0.9999999999999999
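
    Note that the intersection here contains only two shared months, and with only two points the Pearson correlation is essentially always ±1, so the value above says little about the users. As a side note, Series.corr aligns the two series on their index before computing, so the intersection-based value can also be obtained without an explicit merge; a small sketch reusing agga and aggb from above:

    # Series.corr aligns on the datetime index and drops months that
    # only one user has, so this matches the intersection-based result
    shortcut = agga['score'].corr(aggb['score'])
    print(shortcut)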
    

    If you want to use the union of the two timelines instead:

    # Outer merge keeps every month from either user; months missing for one user are filled with 0
    union = pd.merge(agga, aggb, on='datetime', how='outer').fillna(0)
    

    output of union:

                    score_x  score_y
    datetime                    
    2018-04-30     39.0      3.0
    2018-05-31     41.0     11.0
    2018-06-30     19.0      0.0
    2018-07-31     46.0      0.0
    2018-08-31     61.0      0.0
    2018-09-30     57.0      0.0
    2018-10-31     33.0      0.0
    2018-11-30     18.0      0.0
    2017-11-30      0.0      0.0
    2017-12-31      0.0      3.0
    2018-01-31      0.0      0.0
    2018-02-28      0.0      0.0
    2018-03-31      0.0     22.0
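
    The correlation over the union can be computed the same way; keep in mind that the zero-filled months tend to pull the result away from the intersection-based value:

    # Pearson correlation over every month in which either user appears;
    # months missing for one user count as 0 posts after fillna(0)
    union_result = union['score_x'].corr(union['score_y'])
    print(union_result)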
    

    The pandas documentation on merging is a good resource to understand merge.