Tags: python, scikit-learn, k-fold

How to get k-fold cross validation final model with sklearn


Once I have iterated over each training combination given the k-fold split, I can estimate the mean and standard deviation of the models' performance, but I actually end up with k different models (each with its own fitted parameters). How do I get the final, whole model? Is it a matter of averaging the parameters?

I'm not showing code because this is a general question, so I'll write down the logic only:

  1. dataset
  2. splitting the dataset according to the k-fold scheme (let's say k = 5)
  3. iterations: training the first through the fifth model
  4. getting 5 different models with, let's say, the following parameters:
   model_1 = [p10, p11, p12] \
   model_2 = [p20, p21, p22]  |
   model_3 = [p30, p31, p32]   > param_matrix 
   model_4 = [p40, p41, p42]  |
   model_5 = [p50, p51, p52] /

What about model_final: [pf0, pf1, pf2]?

Too trivial solution 1: model_final = mean(param_matrix, axis=0)

Too trivial solution 2: model_final = the one of the five that reaches the highest performance (it could be an overfit rather than the optimal one)
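
For concreteness, the logic above corresponds roughly to the following scikit-learn sketch (the LogisticRegression estimator and the generated data are placeholders, not part of the question):

# A minimal sketch of steps 1-4: k-fold split, k trainings, k parameter sets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores, param_matrix = [], []
for train_idx, test_idx in kf.split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
    param_matrix.append(model.coef_.ravel())   # one row [pk0, pk1, pk2]

print("mean accuracy:", np.mean(scores), "std:", np.std(scores))
print(np.array(param_matrix))                  # 5 rows, one per fold's model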


Solution

  • First of all, the purpose of cross-validation (K-fold) is model checking, not model building.

    In your question, you said that every fold of your program produces different parameters; maybe this is not the best way to work.

    One way to proceed is to evaluate every candidate model (each one with different hyperparameters) using K-fold cross-validation internally (e.g. with GridSearchCV); if you obtain similar values of accuracy or other metrics in each split, then you are not overfitting. Apply this methodology to every model you have, and choose the one that gives the best results. Of course, there is always a possibility of overfitting, but with K-fold you reduce it.
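
    The checking step could look like this (a sketch assuming an SVC and a small illustrative parameter grid, not the asker's actual model):

# Check per-split scores for the best candidate found by GridSearchCV.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)
grid.fit(X, y)

# Similar scores across the five splits suggest the chosen configuration
# is not overfitting to any particular fold.
best = grid.best_index_
for k in range(5):
    print(f"split{k}:", round(grid.cv_results_[f"split{k}_test_score"][best], 3))
print("best params:", grid.best_params_)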

    Finally, once you have checked with cross-validation that you obtain similar metrics for every split and you have chosen the model parameters, you train your model on all of your training data; you will then obtain one unique, final model.
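
    A sketch of that final step (again with an illustrative SVC and grid; GridSearchCV with refit=True, the default, already refits the best configuration on all the data passed to fit):

# Choose hyperparameters with K-fold on the training set, then keep the
# single model refitted on the whole training set.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5, refit=True)
grid.fit(X_train, y_train)

final_model = grid.best_estimator_   # one unique model, fitted on all of X_train
print("held-out accuracy:", final_model.score(X_test, y_test))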