Tags: python, pandas, dataframe, pyspark, luigi

Best way to optimize a complex loop that iterates a dataframe


I have a couple of methods here that are taking longer than I would like. I'm currently hitting a wall, since I don't see any obvious way to write these methods more efficiently.

For background, the code processes a sales dataset in order to find previous sales orders related to the same client. However, as you will see, there is a lot of business logic in the middle, which is probably what is slowing things down.

I was thinking about refactoring this into a PySpark job, but before I do so I would like to know whether that is even the best way to get this done.

I would highly appreciate any suggestions.

More context: Each loop is taking about 10 minutes to complete. There are about 24k rows in search_keys. These methods are part of a Luigi task.

import sys

import pandas as pd
from tqdm import tqdm


def previous_commits(self, df: pd.DataFrame):
    # Build some filters to slice data:
    search_keys = df.loc[:, ['accountid', 'opportunityid']].drop_duplicates()
    cols_a = ['opportunityid', 'opportunity_type', 'platform', 'closedate']
    cols_b = ['opportunityid', 'productid']

    # Build a list with the previous commit oppy_id:
    commits = [
        {
            'acc_id': acc,
            'current_oppy': oppy,
            'previous_commit': self.fetch_latest_commit(oppy, df.loc[df.accountid == acc, cols_a].drop_duplicates())
        }
        for oppy, acc in tqdm(
            zip(search_keys.opportunityid, search_keys.accountid),
            desc='Finding previous commits data',
            file=sys.stdout,
            total=search_keys.shape[0]
        )
    ]

    # Fetch products for the previous commit as well as the current row oppy:
    products = [
        {
            'current_oppy': x.get('current_oppy'),
            'current_products': self.fetch_products_id(
                [x.get('current_oppy')],
                df.loc[df.accountid == x.get('acc_id'), cols_b].drop_duplicates()
            ),
            'previous_products': self.fetch_products_id(
                x.get('previous_commit'),
                df.loc[df.accountid == x.get('acc_id'), cols_b].drop_duplicates()
            ),
            'previous_recurrent_products': self.fetch_products_id(
                x.get('previous_commit'),
                df.loc[(df.accountid == x.get('acc_id')) & (df.fee_type == 'Recurring'), cols_b].drop_duplicates()
            )
        }
        for x in tqdm(
            commits,
            desc='Finding previous commit products',
            file=sys.stdout
        )
    ]

    # Pick new calculated column and change its name for compatibility:
    df = pd.DataFrame(commits).join(pd.DataFrame(products).set_index('current_oppy'), on='current_oppy')
    df = df.loc[:, ['current_oppy', 'previous_commit', 'current_products', 'previous_recurrent_products']]
    df.columns = ['current_oppy', 'previous_commit', 'current_products', 'previous_products']
    return df


@staticmethod
def fetch_latest_commit(oppy_id: str, data: pd.DataFrame):
    # Build some filters and create a df copy to search against:
    data = data.set_index('opportunityid')
    current_closedate = data.loc[data.index == oppy_id, ['closedate']].iat[0, 0]
    current_platform = data.loc[data.index == oppy_id, ['platform']].iat[0, 0]
    date_filter = data.closedate < current_closedate
    platform_filter = data.platform == current_platform
    eb_filter = data.opportunity_type != 'EB'
    subset = data.loc[date_filter & eb_filter, :].sort_values('closedate', ascending=False)

    if current_platform in {'CL', 'WE'}:
        # Fetch latest commit closedate for the same platform:
        subset = data.loc[date_filter & platform_filter & eb_filter, :].sort_values('closedate', ascending=False)
        latest_commit_date = subset.loc[:, 'closedate'].max()
        latest_commit_filter = subset.closedate == latest_commit_date
    else:
        # Fetch latest commit closedate:
        latest_commit_date = subset.loc[:, 'closedate'].max()
        latest_commit_filter = subset.closedate == latest_commit_date

    # Now try to get the latest commit oppy_id (if available); otherwise just exit the function
    # and return the current oppy_id. If the latest commit is an NB or NBU
    # deal, then make another lookup to ensure that all the NB info is gathered, since they might
    # have different closedates.
    try:
        latest_commit_id = list(subset.loc[latest_commit_filter, :].index)
        latest_commitid_filter = subset.index.isin(latest_commit_id)
        latest_commit_type = subset.loc[latest_commitid_filter, 'opportunity_type'].unique()[0]
    except IndexError:
        return {oppy_id}

    if latest_commit_type == 'RN':
        return set(latest_commit_id)
    else:
        try:
            nb_before_latest_commit_filter = subset.closedate < latest_commit_date
            nb_only_filter = subset.opportunity_type == 'NB'
            nb_commit_id = list(subset.loc[nb_only_filter & nb_before_latest_commit_filter, :].index)
            return set(latest_commit_id + nb_commit_id)
        except IndexError:
            return set(latest_commit_id)

@staticmethod
def fetch_products_id(oppy_ids: list, data: pd.DataFrame):
    data = data.set_index('opportunityid')
    return set(data.loc[data.index.isin(oppy_ids), 'productid'])

Solution

  • "In very simple words Pandas run operations on a single machine whereas PySpark runs on multiple machines. If you are working on a Machine Learning application where you are dealing with larger datasets, PySpark is a best fit which could processes operations many times(100x) faster than Pandas."

    from https://sparkbyexamples.com/pyspark/pandas-vs-pyspark-dataframe-with-examples/

    You should also think of a window function approach to get the previous order. That would avoid looping over all the records.
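
    For illustration, here is a minimal sketch of that idea in PySpark. It assumes the column names from the question (accountid, opportunityid, closedate), a local SparkSession, and that "previous order" simply means the opportunity with the preceding closedate for the same account; the extra business rules (the EB/platform filters, the NB lookup) would still need to be layered on top.

    from pyspark.sql import SparkSession, Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName('previous_commits').getOrCreate()

    # df is the pandas DataFrame that previous_commits receives (assumption):
    sdf = spark.createDataFrame(df)

    # For each account, order the opportunities by close date and look one row
    # back to get the previous opportunity id, with no per-row Python loop:
    w = Window.partitionBy('accountid').orderBy('closedate')
    sdf = sdf.withColumn('previous_commit', F.lag('opportunityid').over(w))

    The same idea works in plain pandas with sort_values plus groupby(...).shift(1), so a full PySpark migration is only worth it if the data outgrows a single machine.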