The train function of tf.estimator.Estimator has the following signature:
train(
    input_fn,
    hooks=None,
    steps=None,
    max_steps=None,
    saving_listeners=None
)
I'm training a network where I need to manually set some variables every few steps, based on the result of a fairly complicated algorithm that can't be implemented in the graph. Is it possible to set the value of a variable in a hook? Does anyone know of any example code for this?
To avoid wasting resources, I don't need the hook to run at every training step. Is there a way to specify that my hook should only be called once every N steps? I can, of course, keep a counter myself in the hook and simply return when my algorithm shouldn't run, but it seems like this should be configurable.
Yes, that should be possible! I don't know exactly in which scope your variable lives or how you reference it, so I'll just assume you know its name. I'm basically borrowing code from my other answer here.
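If you don't know the exact name, you can print the names of all variables while building the model; this small sketch assumes a plain TF 1.x graph:

import tensorflow as tf

# List every variable in the current graph to find the full name,
# including the scope prefix and the ":0" output suffix.
for v in tf.global_variables():
    print(v.name)  # e.g. "parent/scope/some/path/variable_name:0"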
Simply create a hook before the training loop:
import tensorflow as tf

class VariableUpdaterHook(tf.train.SessionRunHook):
    def __init__(self, frequency, variable_name):
        # variable_name should be the full tensor name,
        # e.g. "parent/scope/some/path/variable_name:0"
        self._frequency = frequency
        self._variable_name = variable_name

    def begin(self):
        # Build all ops here: the MonitoredSession finalizes the graph
        # after begin(), so new ops can't be created inside after_run().
        # This assumes a classic (ref) variable, not a resource variable.
        self._global_step_tensor = tf.train.get_global_step()
        variable = tf.get_default_graph().get_tensor_by_name(self._variable_name)
        self._new_value = tf.placeholder(variable.dtype.base_dtype, variable.shape)
        self._assign_op = tf.assign(variable, self._new_value)

    def before_run(self, run_context):
        # Fetch the global step in the same run call as the training op,
        # which avoids an extra session.run() per iteration.
        return tf.train.SessionRunArgs(self._global_step_tensor)

    def after_run(self, run_context, run_values):
        global_step = run_values.results
        if global_step % self._frequency == 0:
            new_variable_value = complicated_algorithm(...)
            run_context.session.run(self._assign_op,
                                    feed_dict={self._new_value: new_variable_value})
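For completeness, here is roughly how you'd wire it into train(); the variable name "some_scope/my_var:0", the frequency, and my_input_fn are just placeholders for illustration:

# Update the variable every 100 global steps during training.
hook = VariableUpdaterHook(frequency=100,
                           variable_name="some_scope/my_var:0")
estimator.train(input_fn=my_input_fn, hooks=[hook], max_steps=10000)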
Regarding your second question: I don't think it's worth the effort to investigate another way of avoiding the call after each iteration, since the per-step check is very cheap. So the way to go is what you suggested: do the check inside the hook itself (here, against the global step).
Note: I didn't have time to debug this, as I currently don't have a use case, but I hope you get the idea.