Tags: python, machine-learning, scikit-learn, feature-selection

How to get feature importances with variable labels


I'm training a Decision Tree regressor, but when I get the feature importances, only the values come back, without the corresponding variable names.

Does anyone know how to get a DataFrame with the variable names as well?

Below is the main part of the code:

num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('std_scaler', StandardScaler()),
])

cat_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="most_frequent")),
    ('oneHot', OneHotEncoder(handle_unknown='ignore')),
])

num_attribs = x_train.select_dtypes(include=np.number).columns.tolist()
cat_attribs = x_train.select_dtypes(include='object').columns.tolist()

full_pipeline = ColumnTransformer([
    ("num", num_pipeline, num_attribs),
    ("cat", cat_pipeline, cat_attribs),
])

train_prepared = full_pipeline.fit_transform(x_train)

param_grid = {'max_leaf_nodes': list(range(2, 100)), 'min_samples_split': [2, 3, 4], 'max_depth': list(range(3, 20))}

dtr = DecisionTreeRegressor()
grid_search = GridSearchCV(dtr, param_grid, cv=5, scoring='neg_mean_squared_error', verbose=1, return_train_score=True, n_jobs=-1)
grid_search = grid_search.fit(train_prepared, y_train)

grid_search.best_estimator_.feature_importances_

Here is the output of feature_importances_:

array([2.59182901e-03, 5.08807106e-04, 1.46808641e-03, 2.20756886e-03,
       1.48878361e-01, 5.65411415e-03, 5.16351699e-03, 9.37444882e-03,
       0.00000000e+00, 7.19228983e-03, 1.00581364e-03, 1.05073934e-03,
       2.63424620e-03, 9.41587243e-03, 7.22742602e-02, 0.00000000e+00,
       2.41075666e-03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.12861715e-02,
       3.39987538e-03, 5.27924849e-04, 2.20562317e-03, 4.14808367e-03,
       5.82557008e-04, 1.40134963e-03, 0.00000000e+00, 0.00000000e+00,
       1.08351677e-03, 0.00000000e+00, 0.00000000e+00, 1.58022433e-03,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.79779634e-02,
       5.94436576e-01, 3.72725666e-02, 1.11665462e-03, 2.39049915e-03,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.15314788e-03,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
       0.00000000e+00,...])

Solution

  • While you can't call a method on the model itself to get the labels, the importances are indexed in the same order as the columns passed to the ColumnTransformer: the numeric columns first, followed by the expanded one-hot encoded categorical columns. The names of the numeric features are therefore:

    x_train.select_dtypes(include=np.number).columns
    

    Or you can pair the numeric names with their importances in a dictionary (zip stops at the shorter sequence, so this covers only the numeric features, which come first):

    num_cols = x_train.select_dtypes(include=np.number).columns
    feature_importances = dict(zip(num_cols, grid_search.best_estimator_.feature_importances_))