Tags: python-3.x, tensorflow, tensorflow-estimator

Building my own tf.Estimator, how did model_params overwrite model_dir? RuntimeWarning?


Recently I built a customized deep neural net model using TFLearn, which claims to bring deep learning to the scikit-learn estimator API. I could train models and make predictions, but I couldn't get the scoring (evaluate) function to work, so I couldn't do cross-validation. I tried to ask questions about TFLearn in various places, but I got no responses.

It appears that TensorFlow itself has an Estimator class, so I am putting TFLearn aside and trying to follow the guide at https://www.tensorflow.org/extend/estimators. Somehow I'm ending up with values where they don't belong. Can anyone spot my problem? I will post my code and the output below.

Note: Of course, I can see the RuntimeWarning at the top of the output. I have found references to this warning online, but so far everyone claims it's harmless. Maybe it is not...

CODE:

import tensorflow as tf
from my_library import Database, l2_angle_distance


def my_model_function(topology, params):

    # This function will eventually be a function factory.  This should
    # allow easy exploration of hyperparameters.  For now, this just
    # returns a single, fixed model_fn.

    def model_fn(features, labels, mode):

        # Input layer
        net = tf.layers.conv1d(features["x"], topology[0], 3, activation=tf.nn.relu)
        net = tf.layers.dropout(net, 0.25)
        # The core of the network is here (convolutional layers only for now).
        for nodes in topology[1:]:
            net = tf.layers.conv1d(net, nodes, 3, activation=tf.nn.relu)
            net = tf.layers.dropout(net, 0.25)
        sh = tf.shape(features["x"])
        net = tf.reshape(net, [sh[0], sh[1], 3, 2])
        predictions = tf.nn.l2_normalize(net, dim=3)

        # PREDICT EstimatorSpec
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode=mode,
                    predictions={"vectors": predictions})

        # TRAIN or EVAL EstimatorSpec
        loss = l2_angle_distance(labels, predictions)
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=params["learning_rate"])
        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, predictions, loss, train_op)

    return model_fn

##===================================================================

window = "whole"
encoding = "one_hot"
db = Database("/home/bwllc/Documents/Files for ML/compact")

traindb, testdb = db.train_test_split()
train_features, train_labels = traindb.values(window, encoding)
test_features, test_labels = testdb.values(window, encoding)

# Create the model.
tf.logging.set_verbosity(tf.logging.INFO)
LEARNING_RATE = 0.01
topology = (60,40,20)
model_params = {"learning_rate": LEARNING_RATE}
model_fn = my_model_function(topology, model_params)
model = tf.estimator.Estimator(model_fn, model_params)
print("\nmodel_dir?  No?  Why not? ", model.model_dir, "\n")  # This documents the error

# Input function.
my_input_fn = tf.estimator.inputs.numpy_input_fn({"x" : train_features}, train_labels, shuffle=True)

# Train the model.
model.train(input_fn=my_input_fn, steps=20)

OUTPUT

/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_model_dir': {'learning_rate': 0.01}, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f0b55279048>, '_task_type': 'worker', '_task_id': 0, '_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

model_dir?  No?  Why not?  {'learning_rate': 0.01} 

INFO:tensorflow:Create CheckpointSaverHook.
Traceback (most recent call last):
  File "minimal_estimator_bug_example.py", line 81, in <module>
    model.train(input_fn=my_input_fn, steps=20)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 302, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 756, in _train_model
    scaffold=estimator_spec.scaffold)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/basic_session_run_hooks.py", line 411, in __init__
    self._save_path = os.path.join(checkpoint_dir, checkpoint_basename)
  File "/usr/lib/python3.6/posixpath.py", line 78, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not dict

------------------
(program exited with code: 1)
Press return to continue

I can see exactly what went wrong: model_dir (which I left at the default) somehow got bound to the value I intended for model_params. How did this happen in my code? I can't see it.

If anyone has advice or suggestions, I would greatly appreciate them. Thanks!


Solution

  • This happens simply because you're passing your model_params as the model_dir when you construct your Estimator.

    From the TensorFlow documentation:

    The Estimator __init__ signature:

    __init__(
        model_fn,
        model_dir=None,
        config=None,
        params=None
    )
    

    Notice that the second positional argument is model_dir. If you want to supply only params, you need to pass it as a keyword argument:

    model = tf.estimator.Estimator(model_fn, params=model_params)
    

    Or spell out all the preceding positional arguments:

    model = tf.estimator.Estimator(model_fn, None, None, model_params)
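
    For completeness, here is a minimal sketch of the corrected construction. The model_dir path is only a placeholder; if you leave model_dir unset, the Estimator should simply fall back to a temporary directory.

    model = tf.estimator.Estimator(
        model_fn=model_fn,
        model_dir="/tmp/my_estimator",  # placeholder path; optional
        params=model_params,
    )

    print(model.model_dir)  # now a directory path, not the params dict
    model.train(input_fn=my_input_fn, steps=20)

    With params passed as a keyword argument, the CheckpointSaverHook gets a real directory string and the TypeError in os.path.join goes away.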