Tags: python, machine-learning, dataset, data-science, feature-selection

How to identify which features affect the prediction results?


I have a table with features that were used to build a model that predicts whether a user will buy new insurance or not. The same table also contains the probability of belonging to class 1 (will buy) and class 0 (will not buy) as predicted by this model. I don't know what algorithm was used to build the model; I only have its predicted probabilities.

Question: how can I identify which features affect these prediction results? Do I need to build a correlation matrix or run any statistical tests? (A quick correlation check is sketched after the table below.)

Table example:

+---------+-----+-----------+---------+--------+-----------+--------+---------+-------------+------------+
| user_id | age | car_price | car_age | income | education | gender | crashes | probability | true_label |
+---------+-----+-----------+---------+--------+-----------+--------+---------+-------------+------------+
| 1       | 29  | 15600     | 3       | 20000  | 3         | 1      | 1       | 0.23        | 0          |
| 2       | 41  | 43000     | 1       | 65000  | 2         | 0      | 1       | 0.1         | 0          |
| 3       | 39  | 23500     | 5       | 43000  | 3         | 1      | 0       | 0.46        | 1          |
| 4       | 19  | 12200     | 3       | 13000  | 1         | 1      | 0       | 0.34        | 1          |
| 5       | 68  | 21900     | 2       | 31300  | 3         | 0      | 1       | 0.85        | 1          |
+---------+-----+-----------+---------+--------+-----------+--------+---------+-------------+------------+

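For reference, a minimal sketch of the correlation check mentioned above, assuming the table is loaded into a pandas DataFrame from a hypothetical users.csv with the columns shown:

    import pandas as pd

    # Load the table (file name is hypothetical; adjust to your data source).
    df = pd.read_csv("users.csv")

    feature_cols = ["age", "car_price", "car_age", "income",
                    "education", "gender", "crashes"]

    # Pearson correlation of each feature with the predicted probability.
    # This only captures linear, one-feature-at-a-time relationships.
    corr_with_probability = df[feature_cols].corrwith(df["probability"])
    print(corr_with_probability.sort_values(key=abs, ascending=False))
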
Solution

  • You could build a model of your own on the same data.

    Use the features you have as X and true_label as y.

    From that model you can extract feature importances. If you want to go the extra mile, you can also bootstrap the training so that the feature importances are more stable statistically; see the sketches below.
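A minimal sketch of that approach, assuming scikit-learn is available and the table is loaded into a pandas DataFrame as above. The random forest here is just a convenient choice for obtaining feature importances; it is not necessarily the algorithm behind the original probabilities:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    df = pd.read_csv("users.csv")  # hypothetical file name, as above

    feature_cols = ["age", "car_price", "car_age", "income",
                    "education", "gender", "crashes"]
    X = df[feature_cols]
    y = df["true_label"]

    # Fit a surrogate model on the same features and the true labels.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)

    # Impurity-based importance, one value per feature.
    importances = pd.Series(model.feature_importances_, index=feature_cols)
    print(importances.sort_values(ascending=False))

Permutation importance (sklearn.inspection.permutation_importance) is a useful cross-check, since impurity-based importances tend to favour high-cardinality features.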
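To make the importances more stable, you can refit on bootstrap resamples and look at the mean and spread of each feature's importance. A sketch under the same assumptions:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.utils import resample

    df = pd.read_csv("users.csv")  # hypothetical file name, as above
    feature_cols = ["age", "car_price", "car_age", "income",
                    "education", "gender", "crashes"]

    n_rounds = 100
    rows = []
    for i in range(n_rounds):
        boot = resample(df, replace=True, random_state=i)  # bootstrap sample
        model = RandomForestClassifier(n_estimators=100, random_state=i)
        model.fit(boot[feature_cols], boot["true_label"])
        rows.append(model.feature_importances_)

    # Mean and standard deviation of each feature's importance across rounds.
    summary = pd.DataFrame(rows, columns=feature_cols).agg(["mean", "std"]).T
    print(summary.sort_values("mean", ascending=False))

Features whose importance is consistently high across bootstrap rounds are the ones most likely to be driving the predictions.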