python, tensorflow, logistic-regression, tensorboard, mnist

How to write summary log using tensorflow for logistic regression on MNIST data?


I am new to tensorflow and tensorboard. This is my first attempt at implementing logistic regression on the MNIST data using tensorflow. I have implemented the logistic regression successfully, and now I am trying to write summaries to a log file using tf.summary.FileWriter.

Here is the part of my code that defines the summaries

x = tf.placeholder(dtype=tf.float32, shape=(None, 784))
y = tf.placeholder(dtype=tf.float32, shape=(None, 10)) 

loss_op = tf.losses.mean_squared_error(y, pred)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

tf.summary.scalar("loss", loss_op)
tf.summary.scalar("training_accuracy", accuracy_op)
summary_op = tf.summary.merge_all()

And this is how I am training my model

with tf.Session() as sess:   
    sess.run(init)
    writer = tf.summary.FileWriter('./graphs', sess.graph)

    for iter in range(50):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        _, loss, tr_acc,summary = sess.run([optimizer_op, loss_op, accuracy_op, summary_op], feed_dict={x: batch_x, y: batch_y})
        summary = sess.run(summary_op, feed_dict={x: batch_x, y: batch_y})
        writer.add_summary(summary, iter)

After adding the line that fetches the merged summary, I get the error below


InvalidArgumentError (see above for traceback): 
You must feed a value for placeholder tensor 'Placeholder_37' 
with dtype float and shape [?,10]

This error points to the declaration of y

y = tf.placeholder(dtype=tf.float32, shape=(None, 10)) 

Can you please tell me what I am doing wrong?


Solution

  • From the error message it looks like you are running your code in some kind of Jupyter environment. Try restarting the kernel/runtime and running everything again. Running graph-mode code twice does not work well in Jupyter: each run of the cell adds new nodes to the same default graph, which is why the error refers to a tensor named 'Placeholder_37' rather than 'Placeholder'. If I run my code below once, it returns no errors; when I run it a second time (without restarting the kernel/runtime), it crashes the same way yours does.

    I was too lazy to check it on an actual model, so my pred = y. ;) But the code below does not crash, so you should be able to adapt it to your needs. I've tested it in Google Colab.

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
    
    x = tf.placeholder(dtype=tf.float32, shape=(None, 784), name='x-input')
    y = tf.placeholder(dtype=tf.float32, shape=(None, 10), name='y-input')
    
    pred = y
    loss_op = tf.losses.mean_squared_error(y, pred)
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    with tf.name_scope('summaries'):
      tf.summary.scalar("loss", loss_op, collections=["train_summary"])
      tf.summary.scalar("training_accuracy", accuracy_op, collections=["train_summary"])
    
    with tf.Session() as sess:   
      summary_op = tf.summary.merge_all(key='train_summary')
      train_writer = tf.summary.FileWriter('./graphs', sess.graph)
      sess.run([tf.global_variables_initializer(),tf.local_variables_initializer()])
    
      for iter in range(50):
        batch_x, batch_y = mnist.train.next_batch(1)
        loss, acc, summary = sess.run([loss_op, accuracy_op, summary_op], feed_dict={x:batch_x, y:batch_y})
        train_writer.add_summary(summary, iter)
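    As an alternative to restarting the kernel, you can clear the stale default graph before rebuilding the model. The sketch below (TensorFlow 1.x, as in the code above) shows how re-running placeholder definitions produces auto-renamed nodes such as 'Placeholder_1' — the same mechanism that produced 'Placeholder_37' in your error — and how tf.reset_default_graph() gives you a fresh graph without a restart:

    ```python
    import tensorflow as tf

    # Defining the same op twice in one graph: the second call gets a
    # uniquified name, just as repeated cell runs eventually produce
    # names like 'Placeholder_37'.
    a = tf.placeholder(tf.float32, name='Placeholder')
    b = tf.placeholder(tf.float32, name='Placeholder')
    print(a.name)  # Placeholder:0
    print(b.name)  # Placeholder_1:0

    # Clearing the default graph has the same effect as a kernel restart:
    # the next definition starts from a clean namespace.
    tf.reset_default_graph()
    c = tf.placeholder(tf.float32, name='Placeholder')
    print(c.name)  # Placeholder:0
    ```

    Put the reset at the top of the cell that builds the model, so every re-run starts from an empty graph instead of piling placeholders onto the old one.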