Tags: java, r, h2o, ensemble-learning

Stacking of different models (including rf, glm) in h2o (R)


I have a question regarding h2o.stackedEnsemble in R. When I try to create an ensemble from GLM models (or any other models and GLM) I get the following error:

DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException
    at water.MRTask.getResult(MRTask.java:478)
    at water.MRTask.getResult(MRTask.java:486)
    at water.MRTask.doAll(MRTask.java:390)
    at water.MRTask.doAll(MRTask.java:396)
    at hex.StackedEnsembleModel.predictScoreImpl(StackedEnsembleModel.java:123)
    at hex.StackedEnsembleModel.doScoreMetricsOneFrame(StackedEnsembleModel.java:194)
    at hex.StackedEnsembleModel.doScoreOrCopyMetrics(StackedEnsembleModel.java:206)
    at hex.ensemble.StackedEnsemble$StackedEnsembleDriver.computeMetaLearner(StackedEnsemble.java:302)
    at hex.ensemble.StackedEnsemble$StackedEnsembleDriver.computeImpl(StackedEnsemble.java:231)
    at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:206)
    at water.H2O$H2OCountedCompleter.compute(H2O.java:1263)
    at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
    at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: java.lang.NullPointerException

Error: DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException

The error does not occur when I stack other model types; it appears only when GLM models are included. I am, of course, using the same folds for cross-validation.

Some sample code for training the models and the ensemble:

glm_grid <- h2o.grid(algorithm = "glm",
                     family = 'binomial',
                     grid_id = "glm_grid",
                     x = predictors,
                     y = response,
                     seed = 1,
                     fold_column = "fold_assignment",
                     training_frame = train_h2o,
                     keep_cross_validation_predictions = TRUE,
                     hyper_params = list(alpha = seq(0, 1, 0.05)),
                     lambda_search = TRUE,
                     search_criteria = search_criteria,
                     balance_classes = TRUE,
                     early_stopping = TRUE)

glm <- h2o.getGrid("glm_grid",
                   sort_by = "auc",
                   decreasing = TRUE)

ensemble <- h2o.stackedEnsemble(x = predictors,
                                y = response,
                                training_frame = train_h2o,
                                model_id = "ens_1",
                                base_models = glm@model_ids[1:5])
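
For comparison, the same stacking call reportedly succeeds when the base models are not GLMs. A minimal sketch (assuming the same `predictors`, `response`, `train_h2o`, and fold column as above; the GBM grid and its hyper-parameters are illustrative, not from the original question):

```r
# Assumed work-alike: a small GBM grid trained on the same folds,
# stacked the same way — this path does not hit the NPE.
gbm_grid <- h2o.grid(algorithm = "gbm",
                     grid_id = "gbm_grid",
                     x = predictors,
                     y = response,
                     seed = 1,
                     fold_column = "fold_assignment",
                     training_frame = train_h2o,
                     keep_cross_validation_predictions = TRUE,
                     hyper_params = list(max_depth = c(3, 5, 7)))

ensemble_gbm <- h2o.stackedEnsemble(x = predictors,
                                    y = response,
                                    training_frame = train_h2o,
                                    model_id = "ens_gbm",
                                    base_models = gbm_grid@model_ids[1:3])
```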

Solution

  • This is a known bug; you can track the progress of the fix here (it should be fixed in the next release, and it may land sooner in the nightly releases).

    I was going to suggest training the GLMs in a loop or an apply function (instead of using h2o.grid()) as a temporary work-around, but unfortunately the same error occurs.
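
The loop-based work-around mentioned above would look roughly like this (a hypothetical sketch using the same names as the question; as noted, it still triggers the same NullPointerException for GLM base models):

```r
# Hypothetical work-around: train GLMs individually instead of via h2o.grid().
# Per the answer above, stacking these still fails with the same NPE.
alphas <- seq(0, 1, 0.25)
glm_models <- lapply(alphas, function(a) {
  h2o.glm(x = predictors,
          y = response,
          training_frame = train_h2o,
          family = "binomial",
          alpha = a,
          lambda_search = TRUE,
          seed = 1,
          fold_column = "fold_assignment",
          keep_cross_validation_predictions = TRUE)
})

ensemble <- h2o.stackedEnsemble(x = predictors,
                                y = response,
                                training_frame = train_h2o,
                                base_models = glm_models)
```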