Tags: tensorflow, restore, deep-learning, checkpoint

TensorFlow CIFAR10: resume training from a checkpoint file


While using TensorFlow, I am trying to resume CIFAR10 training from a checkpoint file. Referencing some other articles, I tried tf.train.Saver().restore with no success. Can someone shed some light on how to proceed?

Code snippet, adapted from the TensorFlow CIFAR10 tutorial:

import time

import tensorflow as tf

import cifar10  # model code from the TensorFlow CIFAR10 tutorial

FLAGS = tf.app.flags.FLAGS  # flags (checkpoint_dir, train_dir, ...) defined as in the tutorial

def train():
  # Build the training graph, following cifar10_train.py.
  global_step = tf.Variable(0, trainable=False)
  images, labels = cifar10.distorted_inputs()
  logits = cifar10.inference(images)
  loss = cifar10.loss(logits, labels)
  train_op = cifar10.train(loss, global_step)
  saver = tf.train.Saver(tf.all_variables())
  summary_op = tf.merge_all_summaries()

  init = tf.initialize_all_variables()
  sess = tf.Session(config=tf.ConfigProto(log_device_placement=FLAGS.log_device_placement))
  sess.run(init)

  print("FLAGS.checkpoint_dir is %s" % FLAGS.checkpoint_dir)

  if FLAGS.checkpoint_dir is None:
    # Start the queue runners.
    tf.train.start_queue_runners(sess=sess)
    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)
  else:
    # restoring from the checkpoint file
    ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
    tf.train.Saver().restore(sess, ckpt.model_checkpoint_path)

  # cur_step prints correctly, showing the restored value of global_step
  cur_step = sess.run(global_step)
  print("current step is %s" % cur_step)

  for step in xrange(cur_step, FLAGS.max_steps):
    start_time = time.time()
    # ** It gets stuck at this call **
    _, loss_value = sess.run([train_op, loss])
    # below same as original

Solution

  • The problem seems to be that this line:

    tf.train.start_queue_runners(sess=sess)
    

    ...is only executed if FLAGS.checkpoint_dir is None. You will still need to start the queue runners if you are restoring from a checkpoint.

    Note that I'd recommend you start the queue runners after creating the tf.train.Saver (due to a race condition in the released version of the code), so a better structure would be:

    if FLAGS.checkpoint_dir is not None:
      # restoring from the checkpoint file
      ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
      tf.train.Saver().restore(sess, ckpt.model_checkpoint_path)
    
    # Start the queue runners.
    tf.train.start_queue_runners(sess=sess)
    
    # ...
    
    for step in xrange(cur_step, FLAGS.max_steps):
      start_time = time.time()
      _, loss_value = sess.run([train_op, loss])
      # ...
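
    One caveat, not covered in the original answer: tf.train.get_checkpoint_state returns None when it cannot find a checkpoint, so the ckpt.model_checkpoint_path access above raises an AttributeError if FLAGS.checkpoint_dir points at an empty or wrong directory. A minimal guard sketch, assuming the sess, saver, and FLAGS from the question (and reusing that saver rather than constructing a second tf.train.Saver()):

    ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
    if ckpt and ckpt.model_checkpoint_path:
      saver.restore(sess, ckpt.model_checkpoint_path)
    else:
      # No checkpoint found; fall back to the freshly initialized variables.
      print("No checkpoint found in %s" % FLAGS.checkpoint_dir)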