Tags: python, group-by, pandas, weighted-average

pandas: groupby and variable weights


I have a dataset with a weight for each observation and I want to prepare weighted summaries using groupby, but I am rusty on how best to do this. I think it calls for a custom aggregation function. My issue is how to properly deal with quantities that are group-wise (such as each group's total weight) rather than item-wise. Perhaps it is best to do this in steps rather than in one go.

In pseudo-code, I am looking for

#first, calculate weighted value
for each row:
  weighted jobs = weight * jobs
#then, for each city, divide the sum of weighted jobs by the sum of weights
for each city:
  sum(weighted jobs)/sum(weight)

I am not sure how to work the "for each city" part into a custom aggregation function and get access to group-level sums such as the total weight.

Mock data:

import pandas as pd
import numpy as np
np.random.seed(43)

## prep mock data
N = 100
industry = ['utilities','sales','real estate','finance']
city = ['sf','san mateo','oakland']
weight = np.random.randint(low=5,high=40,size=N)
jobs = np.random.randint(low=1,high=20,size=N)
ind = np.random.choice(industry, N)
cty = np.random.choice(city, N)
df_city = pd.DataFrame({'industry': ind, 'city': cty, 'weight': weight, 'jobs': jobs})
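
For a sanity check: the figure I am after for each city is just an ordinary weighted average. Over the whole frame (ignoring city) it can be computed directly; this snippet is an illustrative addition using numpy's np.average, not part of the original question:

# overall weighted average of jobs across the whole frame -- sketch for illustration
np.average(df_city['jobs'], weights=df_city['weight'])

I want that same quantity, but computed per city.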

Solution

  • Simply multiply the two columns:

    In [11]: df_city['weighted_jobs'] = df_city['weight'] * df_city['jobs']
    

    Now you can groupby the city and take the sum (with recent pandas versions you may need sum(numeric_only=True) so that the non-numeric industry column is skipped):

    In [12]: df_city_sums = df_city.groupby('city').sum()
    
    In [13]: df_city_sums
    Out[13]: 
               jobs  weight  weighted_jobs
    city                                  
    oakland     362     690           7958
    san mateo   367    1017           9026
    sf          253     638           6209
    
    [3 rows x 3 columns]
    

    Finally, divide the sum of weighted jobs by the sum of weights to get the desired weighted average number of jobs per city:

    In [14]: df_city_sums['weighted_jobs'] / df_city_sums['weight']
    Out[14]: 
    city
    oakland      11.533333
    san mateo     8.875123
    sf            9.731975
    dtype: float64