Tags: python, scikit-learn, pytorch, neuraxle

Is n_jobs > 1 possible inside Neuraxle when using sklearn and pytorch?


I built my own sklearn-like estimator that trains a pytorch model on the GPU (CUDA), and it works fine with RandomizedSearchCV when n_jobs == 1. When n_jobs > 1, I get the following error:

PicklingError: Can't pickle <class '__main__.LSTM'>: attribute lookup LSTM on __main__ failed

This is the piece of code giving me the error:

import torch
from numpy import arange, random
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.model_selection import RandomizedSearchCV, ShuffleSplit

device = torch.device("cuda")

# my_model is the custom sklearn-like estimator described above
model = my_model(input_size=1, hidden_layer_size=80, n_lstm_units=3, bidirectional=False,
                 output_size=1, training_batch_size=60, epochs=7500, device=device)
model.to(device)

hidden_layer_size = random.uniform(40, 200, 20).astype("int")
n_lstm_units = arange(1, 4)

parametros = {'hidden_layer_size': hidden_layer_size, 'n_lstm_units': n_lstm_units}

splitter = ShuffleSplit()

regressor = model
cv_search = \
    RandomizedSearchCV(estimator=regressor, cv=splitter,
                       param_distributions=parametros,
                       refit=True,
                       n_iter=4,
                       verbose=1,
                       n_jobs=2,
                       scoring=make_scorer(mean_squared_error,
                                           greater_is_better=False,
                                           needs_proba=False))

cv_search = MetaSKLearnWrapper(cv_search)
cv_search.fit(X, y)

Using the Neuraxle wrapper leads to exactly the same error; it changes nothing.

I found the closest solution here, but I still don't know how to use RandomizedSearchCV within Neuraxle. It is a brand new project, so I couldn't find an answer in their docs or community examples. If anyone can give me an example or a good pointer, it will save my life. Thank you.

PS: Any way to run RandomizedSearchCV with my pytorch model on the GPU without Neuraxle would also help; I just need n_jobs > 1.

PS2: My model has a fit() method that creates and moves tensors to the GPU; it already works and has been tested.


Solution

  • There are multiple criteria that must be respected here for your code to work:

    1. You need to use Neuraxle's RandomSearch instead of sklearn's random search for this to work. Use Neuraxle's base classes when possible.
    2. Make sure that you use a Neuraxle BaseStep for your PyTorch model, instead of an sklearn base class.
    3. Also, you should create your PyTorch code only in the setup() method or later; you can't create a PyTorch model in the __init__ of the BaseStep that contains PyTorch code (see the step sketch right after this list). You will want to read this page.
    4. You will probably have to create a Saver for your BaseStep that contains PyTorch code if you want to serialize and then load your trained pipeline again. You can see how we created our TensorFlow Saver for our TensorFlow BaseStep and do something similar. Your saver will probably be much simpler than ours due to the more eager nature of PyTorch. For instance, you could have self.model inside your extension of the BaseStep class. The role of the saver would be to save and strip away this simple variable from self, and to be able to reload it whenever needed.
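
    Here is a rough, untested sketch of what such a step could look like. To be clear about assumptions: the class names LSTMModel and LSTMStep, the hyperparameter names, the tensor shapes and the training loop are placeholders mirroring your question, and depending on your Neuraxle version, setup() may or may not receive an ExecutionContext.

    import torch
    import torch.nn as nn

    from neuraxle.base import BaseStep
    from neuraxle.hyperparams.space import HyperparameterSamples


    class LSTMModel(nn.Module):
        """Plain PyTorch module, defined at module level (not inside __main__ / a notebook cell)."""

        def __init__(self, input_size, hidden_layer_size, n_lstm_units, output_size):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_layer_size,
                                num_layers=n_lstm_units, batch_first=True)
            self.linear = nn.Linear(hidden_layer_size, output_size)

        def forward(self, x):
            out, _ = self.lstm(x)              # x: (batch, seq_len, input_size)
            return self.linear(out[:, -1, :])  # last timestep -> (batch, output_size)


    class LSTMStep(BaseStep):
        """Neuraxle step that builds its PyTorch model lazily, in setup()."""

        def __init__(self, input_size=1, output_size=1, epochs=10, device="cuda"):
            BaseStep.__init__(self, hyperparams=HyperparameterSamples({
                'hidden_layer_size': 80,
                'n_lstm_units': 3,
                'learning_rate': 1e-3,
            }))
            self.input_size = input_size
            self.output_size = output_size
            self.epochs = epochs
            self.device = device
            self.model = None  # no PyTorch object in __init__; created in setup()

        def setup(self, context=None):
            if self.model is None:
                hp = self.hyperparams  # flat, dict-like HyperparameterSamples
                self.model = LSTMModel(
                    input_size=self.input_size,
                    hidden_layer_size=int(hp['hidden_layer_size']),
                    n_lstm_units=int(hp['n_lstm_units']),
                    output_size=self.output_size,
                ).to(self.device)
            self.is_initialized = True
            return self

        def fit(self, data_inputs, expected_outputs=None) -> 'LSTMStep':
            self.setup()
            x = torch.as_tensor(data_inputs, dtype=torch.float32, device=self.device)
            y = torch.as_tensor(expected_outputs, dtype=torch.float32, device=self.device)
            optimizer = torch.optim.Adam(self.model.parameters(),
                                         lr=float(self.hyperparams['learning_rate']))
            loss_fn = nn.MSELoss()
            for _ in range(self.epochs):
                optimizer.zero_grad()
                loss = loss_fn(self.model(x), y)
                loss.backward()
                optimizer.step()
            return self

        def transform(self, data_inputs):
            self.setup()
            x = torch.as_tensor(data_inputs, dtype=torch.float32, device=self.device)
            with torch.no_grad():
                return self.model(x).cpu().numpy()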

    To sum up: you'd need to create two classes, and they should look very similar to our two TensorFlow step and saver classes here, except that your PyTorch model lives in a self.model variable of your step; a rough saver sketch follows below.
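
    For the saver, here is a minimal, untested sketch. It assumes your step keeps the network in self.model, exposes device, and can rebuild its architecture via setup(); TorchModelSaver is a placeholder name, and I'm assuming context.get_path() returns the step's saving directory (double-check the ExecutionContext API of your Neuraxle version).

    import os

    import torch

    from neuraxle.base import BaseSaver, BaseStep, ExecutionContext


    class TorchModelSaver(BaseSaver):
        """Strips the PyTorch model from the step before the step itself gets
        pickled, and reloads the weights when the step is loaded back."""

        def save_step(self, step: BaseStep, context: ExecutionContext) -> BaseStep:
            path = os.path.join(context.get_path(), 'model.pt')
            torch.save(step.model.state_dict(), path)
            step.model = None  # strip the torch object so the step pickles cleanly
            return step

        def can_load(self, step: BaseStep, context: ExecutionContext) -> bool:
            return os.path.exists(os.path.join(context.get_path(), 'model.pt'))

        def load_step(self, step: BaseStep, context: ExecutionContext) -> BaseStep:
            step.model = None
            step.setup()  # rebuild the architecture from the step's hyperparams
            state = torch.load(os.path.join(context.get_path(), 'model.pt'),
                               map_location=step.device)
            step.model.load_state_dict(state)
            return step

    You would then register this saver on your step (for instance by passing savers=[TorchModelSaver()] to BaseStep.__init__, or by appending it to the step's savers list) so it runs whenever the step is serialized.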

    I'd be glad to see your implementation of your PyTorch base step and of your PyTorch saver!

    You could then even use the AutoML class (see the AutoML example here) to save experiments in a Hyperparameter Repository, as shown in that example.
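
    For completeness, here is roughly how the search space from your parametros dict could be declared on such a step before handing it to Neuraxle's search classes (again an untested sketch, assuming RandomInt bounds are inclusive):

    from neuraxle.hyperparams.distributions import RandomInt
    from neuraxle.hyperparams.space import HyperparameterSpace
    from neuraxle.pipeline import Pipeline

    # Mirrors the question's parametros dict as a Neuraxle hyperparameter space.
    step = LSTMStep().set_hyperparams_space(HyperparameterSpace({
        'hidden_layer_size': RandomInt(40, 200),
        'n_lstm_units': RandomInt(1, 3),   # 1 to 3 inclusive, like arange(1, 4)
    }))

    pipeline = Pipeline([step])
    # This pipeline is what you would pass to Neuraxle's RandomSearch / AutoML
    # classes (see the linked examples) in place of sklearn's RandomizedSearchCV.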