I am trying to figure out the difference between the random_state in train_test_split and in MLPRegressor. If I change it in the MLPRegressor, all the trials I run give very good results. However, if I change it in train_test_split, I get a wide range of results. I read that both are random seeds, but I don't understand how they affect my MLP so differently depending on where I change them.
Thank you for the help!
I assume you have some code like the scikit-learn example here:
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
regr = MLPRegressor(random_state=1, max_iter=500).fit(X_train, y_train)
And first you changed the random state in train_test_split, then you changed it in MLPRegressor, and compared the results.
When you change the random_state in the train_test_split method, the data is shuffled differently than before, so your train and test sets contain different samples. (Documentation)
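For example, here is a quick sketch with made-up toy arrays (not your data) showing that this seed only decides which rows land in train versus test:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 toy samples with 2 features each
y = np.arange(10)

# The same seed reproduces the same split; a different seed shuffles differently.
X_train_a, _, _, _ = train_test_split(X, y, random_state=1)
X_train_b, _, _, _ = train_test_split(X, y, random_state=1)
X_train_c, _, _, _ = train_test_split(X, y, random_state=2)
print(np.array_equal(X_train_a, X_train_b))  # True: identical split
print(np.array_equal(X_train_a, X_train_c))  # False: different rows in train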
When you change the random_state of the MLPRegressor, the split is not affected at all. That seed instead determines the random initialization of the weights and biases, the batch sampling when solver='sgd' or 'adam', and the internal train/validation split if early stopping is enabled. (Documentation)
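To see the contrast directly, here is a minimal sketch that varies each seed in turn while holding the other fixed; make_regression is just stand-in data, your own X and y would go in its place:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, random_state=0)

# Fixed split, varying model seed: the same training data every time,
# only the weight/bias initialization and batch order change.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
for seed in range(3):
    regr = MLPRegressor(random_state=seed, max_iter=500).fit(X_train, y_train)
    print("model seed", seed, "R^2:", regr.score(X_test, y_test))

# Fixed model seed, varying split seed: the network is now trained and
# scored on different samples each run.
for seed in range(3):
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed)
    regr = MLPRegressor(random_state=1, max_iter=500).fit(X_train, y_train)
    print("split seed", seed, "R^2:", regr.score(X_test, y_test))

On a clean synthetic dataset both loops may score similarly, but on a small or noisy real dataset the second loop typically varies much more, because the model is fit and evaluated on different samples each run.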
So the two seeds control independent sources of randomness: the train_test_split seed decides which samples the network sees during training and scoring, while the MLPRegressor seed only decides where optimization starts on the same data. Training on different samples usually changes your results far more than a different initialization, which is why varying the split seed gives you the wider spread. Hope I understood your question correctly and could help you.