Tags: machine-learning, data-science, churn

How can I add global and local explainability to predictions, to understand the reasons why a customer churns?


The main goal is to understand:

  1. How likely is a customer to churn?
  2. What are the churn reason(s) for each individual user?

For now, I'm using a random forest model. I can see the most important features across all users. Is there a way to get the important features per user? E.g., maybe one customer is leaving because they don't like the product, while another is leaving because the product is too expensive, etc.

Thanks in advance!


Solution

  • Individual SHAP values can be used for this kind of local interpretability. SHAP values compute the contribution of each predictor to a prediction, and can be aggregated globally, for particular groups (e.g., customers who churn), or inspected for individual customers.
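
    A minimal sketch of how this could look with a random forest, using `shap.TreeExplainer`. The dataset and feature names here are invented purely for illustration, and the handling of the SHAP output shape hedges against differences between `shap` versions:

    ```python
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical churn data -- feature names are assumptions for this sketch.
    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "monthly_price": rng.normal(50, 10, 200),
        "support_tickets": rng.poisson(2, 200),
        "tenure_months": rng.integers(1, 60, 200).astype(float),
    })
    # Synthetic label: churn driven by price and support friction.
    y = ((X["monthly_price"] + 5 * X["support_tickets"]) > 65).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Older shap versions return a list [class 0, class 1] for binary
    # classifiers; newer ones may return one (n_samples, n_features, n_classes)
    # array. Normalize to the churn-class contributions either way.
    if isinstance(shap_values, list):
        churn_shap = shap_values[1]
    elif shap_values.ndim == 3:
        churn_shap = shap_values[..., 1]
    else:
        churn_shap = shap_values

    # Global view: mean absolute SHAP value per feature across all customers.
    global_importance = dict(
        zip(X.columns, np.abs(churn_shap).mean(axis=0).round(3))
    )
    print("Global importance:", global_importance)

    # Local view: the top churn drivers for one specific customer.
    i = 0
    drivers = sorted(
        zip(X.columns, churn_shap[i]), key=lambda t: abs(t[1]), reverse=True
    )
    print(f"Top drivers for customer {i}:", drivers)
    ```

    The local `drivers` list is what answers the per-user question: a large positive SHAP value on `monthly_price` for one customer and on `support_tickets` for another would correspond exactly to the "too expensive" vs. "unhappy with the product" distinction asked about.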