
TensorFlow: How can I summarize two separate networks for TensorBoard?


I have a class with a method that builds the network.

import tensorflow as tf


class DQN:
    def __init__(self, session, input_size, output_size, name):
        .
        .
        .
        self._build_network()

    def _build_network(self, h_size=16, l_rate=0.01):
        with tf.variable_scope(self.net_name):
            self._X = tf.placeholder(tf.float32, [None, self.input_size], name="input_x")
            net = self._X
            # Leaky ReLU activation
            net = tf.layers.dense(net, h_size, activation=lambda x: tf.maximum(0.3 * x, x))
            net = tf.layers.dense(net, self.output_size) 
            self._Qpred = net

        self._Y = tf.placeholder(shape=[None, self.output_size], dtype=tf.float32)

        # Loss function
        with tf.name_scope("loss") as scope:
            self._loss = tf.reduce_mean(tf.square(self._Y - self._Qpred))
            self._loss_summary = tf.summary.scalar("loss", self._loss)

        # Learning
        self._train = tf.train.AdamOptimizer(learning_rate=l_rate).minimize(self._loss)

    def update(self, x_stack, y_stack, merged_summary):
        return self.session.run(
            [self._loss, self._train, merged_summary],
            feed_dict={
                self._X: x_stack,
                self._Y: y_stack,
            }
        )

And I have to create two DQN instances (separate networks).

import tensorflow as tf
import dqn  # module containing the DQN class above


def main():
    with tf.Session() as sess:
        mainDQN = dqn.DQN(sess, input_size, output_size, name="main")
        targetDQN = dqn.DQN(sess, input_size, output_size, name="target")

        merged_summary = tf.summary.merge_all()
        writer = tf.summary.FileWriter("./logs/dqn_log")
        writer.add_graph(sess.graph) 
        .
        .
        .
        loss, _, summary = mainDQN.update(x_stack, y_stack, merged_summary)
        writer.add_summary(summary, global_step=episode)

What I want to do is keep track of mainDQN's loss. But with the code above, an error occurs when update() is called:

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'target/input_x' with dtype float
     [[Node: target/input_x = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I think this error is related to targetDQN...

But I have no idea how to deal with it.

I need your advice. Thanks.


Solution

  • You're right, the issue is related to the targetDQN object. Basically, what happens is that your merged_summary is an op which depends on both your main loss and your target loss. Therefore, when you ask for its evaluation, it requires the inputs for both DQNs.

    I would suggest refactoring your update function this way:

    def update(self, x_stack, y_stack):
        return self.session.run(
            [self._loss, self._train, self._loss_summary],
            feed_dict={
                self._X: x_stack,
                self._Y: y_stack,
            }
        )
    

    This way, you only ask for the evaluation of the right summary.
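
    For completeness, here is a rough sketch of how the call site in main() would then look (everything else in your loop is assumed unchanged; x_stack, y_stack and episode come from your own training code):

    writer = tf.summary.FileWriter("./logs/dqn_log")
    writer.add_graph(sess.graph)
    ...
    # No merged_summary argument any more: update() fetches this network's own
    # self._loss_summary, so only mainDQN's placeholders need to be fed.
    loss, _, summary = mainDQN.update(x_stack, y_stack)
    writer.add_summary(summary, global_step=episode)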

    EDIT: If you want more summaries associated with one of your DQN objects, you could merge them using the tf.summary.merge method (see the API documentation) and ask for its evaluation.
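
    As a rough sketch of that (the extra max_q scalar is just a made-up example, not something from your code), inside _build_network you could write:

    with tf.name_scope("loss") as scope:
        self._loss = tf.reduce_mean(tf.square(self._Y - self._Qpred))
        loss_summary = tf.summary.scalar("loss", self._loss)
        qmax_summary = tf.summary.scalar("max_q", tf.reduce_max(self._Qpred))
        # Merge only this network's summaries; unlike tf.summary.merge_all(),
        # this does not pull in the other network's summary ops.
        self._merged_summary = tf.summary.merge([loss_summary, qmax_summary])

    and then fetch self._merged_summary in update() instead of self._loss_summary.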