I'm getting confused when it comes to model validation.
What I've done for 6 different algorithms:
--> split my dataset 75/25 (training/test); the test set I left untouched.
--> with the training set, I ran cross-validation on each algorithm to tune its hyperparameters, and compared the algorithms by their cross-validation scores (roughly as in the sketch below).
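Here is roughly what that looks like (a minimal sketch; I'm assuming scikit-learn, and the dataset, the two example estimators, and the parameter grids are just placeholders for my six algorithms):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Placeholder dataset standing in for my real data.
X, y = load_breast_cancer(return_X_y=True)

# 75/25 split; the test set stays untouched until the very end.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Two placeholder algorithms standing in for my six; each gets a
# cross-validated hyperparameter search on the training set only.
candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "forest": (RandomForestClassifier(random_state=42),
               {"n_estimators": [100, 300]}),
}
searches = {
    name: GridSearchCV(est, grid, cv=5).fit(X_train, y_train)
    for name, (est, grid) in candidates.items()
}

# Pick the winner by cross-validation score, never by the test set.
best_name = max(searches, key=lambda name: searches[name].best_score_)
```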
Now this is the problem:
I still have the untouched test set from the initial split. What should I do with it? Apply it directly to the best model and report that performance? Or retrain the best model with the best parameters on the whole training set and then apply the test set?
Or is everything wrong here?
You got it, and the second option is the right one. This is the general rule:
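Concretely, continuing the sketch from the question (and still assuming scikit-learn, where `GridSearchCV` with its default `refit=True` already refits the best model on the entire training set for you):

```python
# Retrain the winner with its best hyperparameters on the ENTIRE
# training set, then touch the test set exactly once.
best_search = searches[best_name]

# With refit=True (the GridSearchCV default), best_estimator_ has
# already been refit on the whole training set.
final_model = best_search.best_estimator_

# One and only one evaluation on the held-out test set: this is the
# unbiased estimate of how the chosen model generalizes.
test_score = final_model.score(X_test, y_test)
print(f"{best_name}: CV score = {best_search.best_score_:.3f}, "
      f"test score = {test_score:.3f}")
```

One caveat: once you have looked at the test score, that test set is spent. If you go back and tune anything in response to it, you need fresh held-out data for an honest estimate.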