The main goal is to understand why each individual customer churns.
For now, I'm using a random forest model, and I can see the most important features across all users. Is there a way to get the important features per user? E.g., maybe one customer is leaving because they don't like the product, another because the product is too expensive, etc.
Thanks in advance!
Individual SHAP values can be used for this kind of local interpretability. A SHAP value quantifies the contribution of each predictor to a single prediction, and these contributions can be aggregated globally, summarized for particular groups (e.g., customers who leave), or inspected one customer at a time.
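For instance, here is a minimal sketch with scikit-learn and the `shap` package. The dataset and feature names are made up for illustration, and the indexing assumes a recent `shap` version where calling the explainer returns an `Explanation` object of shape `(n_samples, n_features, n_classes)` for classifiers; older versions return a list of arrays instead, so adjust the slicing accordingly:

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a churn dataset; the feature names are hypothetical
X_raw, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X_raw, columns=["price_paid", "usage_days", "support_tickets",
                                 "tenure_months", "discount_rate"])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)  # Explanation object in recent shap versions

# Local explanation: feature contributions for one customer (class 1 = churn)
shap.plots.waterfall(shap_values[0, :, 1])

# Global explanation: distribution of per-customer contributions per feature
shap.plots.beeswarm(shap_values[:, :, 1])
```

For your use case, the per-row SHAP values are the answer to "why is *this* customer leaving": sorting a customer's row by absolute SHAP value surfaces their personal top churn drivers (price for one customer, low usage for another, and so on).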