Tags: python, tensorflow, tf-slim

Unexpected behavior in model validation with tf.slim and inception_v1


I am trying to use the inception_v1 module written in tf.slim (provided here) to train the model on the CIFAR-10 dataset.

The code to train and evaluate the model on the dataset is below.

    # test_data = (data['images_test'], data['labels_test'])
    train_data = (train_x, train_y)
    val_data = (val_x, val_y)

    # create two datasets, one for training and one for validation
    train_dataset = tf.data.Dataset.from_tensor_slices(train_data).shuffle(buffer_size=10000).batch(BATCH_SIZE).map(preprocess)

    # train_dataset = train_dataset.shuffle(buffer_size=10000).batch(BATCH_SIZE).map(preprocess)
    val_dataset = tf.data.Dataset.from_tensor_slices(val_data).batch(BATCH_SIZE).map(preprocess)
    # test_dataset = tf.data.Dataset.from_tensor_slices(test_data).batch(BATCH_SIZE).map(preprocess)

    # create an iterator of the correct shape and type
    _iter = tf.data.Iterator.from_structure(
            train_dataset.output_types,
            train_dataset.output_shapes
            )
    features, labels = _iter.get_next()

    # create the initialization operations
    train_init_op = _iter.make_initializer(train_dataset)
    val_init_op = _iter.make_initializer(val_dataset)
    # test_init_op = _iter.make_initializer(test_dataset)

    # Placeholders which evaluate in the session
    training_mode = tf.placeholder(shape=None, dtype=tf.bool)
    dropout_prob = tf.placeholder_with_default(1.0, shape=())
    reuse_bool = tf.placeholder_with_default(True, shape=())

    # Init the saver Object which handles saves and restores of
    # model weights
    # saver = tf.train.Saver()

    # Initialize the model inside the arg_scope to define the batch
    # normalization layer and the appropriate parameters
    with slim.arg_scope(inception_v1_arg_scope(use_batch_norm=True)) as scope:
        logits, end_points = inception_v1(features,
                                          reuse=None,
                                          dropout_keep_prob=dropout_prob,
                                          is_training=training_mode)

    # Create the cross entropy loss function
    cross_entropy = tf.reduce_mean(
        tf.losses.softmax_cross_entropy(tf.one_hot(labels, 10), logits))

    train_op = tf.train.AdamOptimizer(1e-2).minimize(loss=cross_entropy)
    # train_op = slim.learning.create_train_op(cross_entropy, optimizer, global_step=)

    # Define the accuracy metric
    preds = tf.argmax(logits, axis=-1, output_type=tf.int64)
    acc = tf.reduce_mean(tf.cast(tf.equal(preds, labels), tf.float32))

    # Count the iterations for each set
    n_train_batches = train_y.shape[0] // BATCH_SIZE
    n_val_batches = val_y.shape[0] // BATCH_SIZE

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # saver = tf.train.Saver([v for v in tf.all_variables()][:-1])
        # for v in tf.all_variables():
        #     print(v.name)
        # saver.restore(sess, tf.train.latest_checkpoint('./', latest_filename='inception_v1.ckpt'))
        for i in range(EPOCHS):
            total_loss = 0
            total_acc = 0

            # Init train session
            sess.run(train_init_op)
            with tqdm(total=n_train_batches * BATCH_SIZE) as pbar:
                for batch in range(n_train_batches):
                    _, loss, train_acc = sess.run([train_op, cross_entropy, acc], feed_dict={training_mode: True, dropout_prob: 0.2})
                    total_loss += loss
                    total_acc += train_acc
                    pbar.update(BATCH_SIZE)
            print("Epoch: {} || Loss: {:.5f} || Acc: {:.5f} %".\
                    format(i+1, total_loss / n_train_batches, (total_acc / n_train_batches)*100))

            # Switch to validation
            total_val_loss = 0
            total_val_acc = 0
            sess.run(val_init_op)
            for batch in range(n_val_batches):
                val_loss, val_acc = sess.run([cross_entropy, acc], feed_dict={training_mode: False})
                total_val_loss += val_loss
                total_val_acc += val_acc
            print("Epoch: {} || Validation Loss: {:.5f} || Val Acc: {:.5f} %".\
                    format(i+1, total_val_loss / n_val_batches, (total_val_acc / n_val_batches) * 100))

The paradox is that, when training the model and evaluating it on the validation set, I get the following results:

    Epoch: 1 || Loss: 2.29436 || Acc: 23.61750 %
    Epoch: 1 || Validation Loss: 1158854431554614016.00000 || Val Acc: 10.03000 %
    100%|███████████████████████████████████████████████████| 40000/40000 [03:52<00:00, 173.21it/s]
    Epoch: 2 || Loss: 1.68389 || Acc: 36.49250 %
    Epoch: 2 || Validation Loss: 27997399226326712.00000 || Val Acc: 10.03000 %
    100%|██████████████████████████████████████████████████▋| 39800/40000 [03:51<00:01, 174.11it/s]

I set training_mode to True during training and False during validation. However, even though train_op is only run in the training phase, the model seems to fall apart on the validation set. My guess is that the is_training flag is not being handled well and that the batch normalization variables are not kept properly initialized during validation. Has anyone experienced a similar situation before?


Solution

  • I found the solution to my problem. Two things were involved. The first was to use a smaller batch norm decay: since CIFAR-10 is much smaller than ImageNet, I had to lower it to 0.99.

    batch_norm_decay=0.99
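
    For reference, the lowered decay is passed through the arg_scope that wraps the model. A minimal sketch, assuming the research/slim release of the model zoo, where inception_v1_arg_scope is an alias of inception_utils.inception_arg_scope and therefore accepts batch_norm_decay directly:

        # Hypothetical wiring of the lowered decay; argument names follow the
        # research/slim inception_utils.inception_arg_scope signature.
        with slim.arg_scope(inception_v1_arg_scope(use_batch_norm=True,
                                                   batch_norm_decay=0.99)):
            logits, end_points = inception_v1(features,
                                              dropout_keep_prob=dropout_prob,
                                              is_training=training_mode)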

    The other was to build the train op with the following line, so that the update ops of the batch normalization layers are tracked and run during training.

    train_op = slim.learning.create_train_op(cross_entropy, optimizer)
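
    Unlike the bare AdamOptimizer(...).minimize(...) call in the question, create_train_op collects the ops in tf.GraphKeys.UPDATE_OPS (where slim's batch norm layers register their moving mean/variance updates) and runs them alongside the gradient step. A rough hand-rolled equivalent, just as a sketch of what it does under the hood:

        # Sketch: run the batch-norm update ops together with the training step.
        optimizer = tf.train.AdamOptimizer(1e-2)
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(update_ops):
            train_op = optimizer.minimize(cross_entropy)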