While training a job on a SageMaker instance using H2O AutoML, the message "This H2OFrame is empty" came up after running the code. What should I do to fix the problem?
/opt/ml/input/config/hyperparameters.json
All Parameters:
{'nfolds': '5', 'training': "{'classification': 'true', 'target': 'y'}", 'max_runtime_secs': '3600'}
/opt/ml/input/config/resourceconfig.json
All Resources:
{'current_host': 'algo-1', 'hosts': ['algo-1'], 'network_interface_name': 'eth0'}
Waiting until DNS resolves: 1
10.0.182.83
Starting up H2O-3
Creating Connection to H2O-3
Attempt 0: H2O-3 not running yet...
Connecting to H2O server at http://127.0.0.1:54321... successful.
Beginning Model Training
Parse progress: |█████████████████████████████████████████████████████████| 100%
Classification - If you want to do a regression instead, set "classification":"false" in "training" params, inhyperparamters.json
Converting specified columns to categorical values:
[]
AutoML progress: |████████████████████████████████████████████████████████| 100%
This H2OFrame is empty.
Exception during training: Argument `model` should be a ModelBase, got NoneType None
Traceback (most recent call last):
File "/opt/program/train", line 138, in _train_model
h2o.save_model(aml.leader, path=model_path)
File "/root/.local/lib/python3.7/site-packages/h2o/h2o.py", line 969, in save_model
assert_is_type(model, ModelBase)
File "/root/.local/lib/python3.7/site-packages/h2o/utils/typechecks.py", line 457, in assert_is_type
skip_frames=skip_frames)
h2o.exceptions.H2OTypeError: Argument `model` should be a ModelBase, got NoneType None
H2O session _sid_8aba closed.
I'm wondering if the problem is caused by max_runtime_secs; my data has around 500 rows and 250,000 columns.
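For context, the traceback shows that h2o.save_model was handed aml.leader == None, which is what you get when AutoML's time budget runs out before any model finishes, leaving the leaderboard empty ("This H2OFrame is empty."). Below is a minimal sketch of that situation and a guard against it; the h2o / H2OAutoML calls mirror the traceback, but train.csv, the column setup, and the error message are my own assumptions, not code from the actual /opt/program/train script.

import h2o
from h2o.automl import H2OAutoML

h2o.init()

train_df = h2o.import_file("train.csv")        # hypothetical path to the training data
y = "y"                                        # target column named in the 'training' params
x = [c for c in train_df.columns if c != y]
train_df[y] = train_df[y].asfactor()           # classification, so make the target categorical

aml = H2OAutoML(max_runtime_secs=3600, nfolds=5, seed=1)
aml.train(x=x, y=y, training_frame=train_df)

# If the time budget expires before any model completes, the leaderboard is
# empty ("This H2OFrame is empty.") and aml.leader is None, which is exactly
# what h2o.save_model trips over in the traceback above.
if aml.leader is None:
    raise RuntimeError("AutoML produced no models; increase max_runtime_secs "
                       "or reduce the number of columns before saving.")

h2o.save_model(aml.leader, path="/opt/ml/model")  # SageMaker's model artifact directory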
Thanks @Marcel Mendes Reis for following up with your solution in the comments. I'll repost it here so others can find it easily:
I realized the issue was due to max_runtime_secs. When I trained the model with more time, I didn't have the problem.
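In other words, the fix is to give AutoML a larger time budget so at least one model can finish on such a wide frame. A hedged sketch of how that might look when launching the job is below: only the hyperparameter names and the original value of 3600 come from the log above; the SageMaker Estimator call, image URI, instance type, and S3 path are assumptions about how the job is started, so adapt them to your own setup.

# Minimal sketch, assuming the job is launched with the SageMaker Python SDK (v2)
# and a custom H2O training container; the raised max_runtime_secs ends up in
# /opt/ml/input/config/hyperparameters.json, which the train script reads.
import sagemaker
from sagemaker.estimator import Estimator

role = sagemaker.get_execution_role()

estimator = Estimator(
    image_uri="<your-h2o-training-image>",    # hypothetical custom container
    role=role,
    instance_count=1,
    instance_type="ml.m5.4xlarge",            # hypothetical instance type
    hyperparameters={
        "nfolds": "5",
        "training": "{'classification': 'true', 'target': 'y'}",
        "max_runtime_secs": "14400",          # 4 hours instead of the original 3600
    },
)
# estimator.fit({"training": "s3://<bucket>/<prefix>/train.csv"})  # hypothetical S3 input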