Tags: python, performance, tensorflow, logistic-regression, mnist

Speed of Logistic Regression on MNIST with Tensorflow


I am taking the CS 20SI: Tensorflow for Deep Learning Research course from Stanford. I have a question regarding the following code:

import time
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Step 1: Read in data
# using TF Learn's built in function to load MNIST data to the folder data/mnist
MNIST = input_data.read_data_sets("/data/mnist", one_hot=True)

# Batched logistic regression
learning_rate = 0.01
batch_size = 128
n_epochs = 25

X = tf.placeholder(tf.float32, [batch_size, 784], name = 'image')
Y = tf.placeholder(tf.float32, [batch_size, 10], name = 'label')

#w = tf.Variable(tf.random_normal(shape = [int(X.shape[1]), int(Y.shape[1])], stddev = 0.01), name='weights')
#b = tf.Variable(tf.zeros(shape = [1, int(Y.shape[1])]), name='bias')

w = tf.Variable(tf.random_normal(shape=[784, 10], stddev=0.01), name="weights")
b = tf.Variable(tf.zeros([1, 10]), name="bias")

logits = tf.matmul(X,w) + b

entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y)
loss = tf.reduce_mean(entropy) #computes the mean over examples in the batch

optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    n_batches = int(MNIST.train.num_examples/batch_size)
    for i in range(n_epochs):
        start_time = time.time()
        for _ in range(n_batches):
            X_batch, Y_batch = MNIST.train.next_batch(batch_size)
            opt, loss_ = sess.run([optimizer, loss], feed_dict = {X: X_batch, Y: Y_batch})
        end_time = time.time() 
        print('Epoch %d took %f'%(i, end_time - start_time))

This code performs logistic regression on the MNIST dataset. The author states:

Running on my Mac, the batch version of the model with batch size 128 runs in 0.5 second

However, when I run it, each epoch takes around 2 seconds, giving a total execution time of around a minute. Is it reasonable for this example to take that long? My hardware is a Ryzen 1700 without OC (3.0 GHz) and a GTX 1080 GPU without OC.


Solution

  • I tried this code on a GTX Titan X (Maxwell) and got around 0.5 seconds per epoch. I would expect a GTX 1080 to achieve similar results.

    Try using the latest TensorFlow and CUDA/cuDNN versions. Make sure no limiting environment variables are set (which GPUs are visible, how much memory TensorFlow can use, etc.). You can try running a micro-benchmark to check that you can reach the stated FLOPS of your card, e.g. Testing GPU with tensorflow matrix multiplication; a rough sketch of such a benchmark is below.
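
    For reference, here is a minimal sketch of such a matrix-multiplication micro-benchmark in the same TF 1.x style as the code in the question. The matrix size N, the number of timed runs, and the 2*N^3 FLOP estimate are my own rough choices, not taken from the linked answer:

    import time
    import tensorflow as tf

    N = 8192  # matrix size; large enough that the multiply dominates overhead

    # Two random square matrices held as variables so they are generated once.
    a = tf.Variable(tf.random_normal([N, N]))
    b = tf.Variable(tf.random_normal([N, N]))
    prod = tf.matmul(a, b)

    init = tf.global_variables_initializer()
    # log_device_placement prints which device each op runs on,
    # so you can confirm the matmul actually lands on the GPU.
    config = tf.ConfigProto(log_device_placement=True)

    with tf.Session(config=config) as sess:
        sess.run(init)
        sess.run(prod.op)  # warm-up run (kernel selection, memory allocation)
        n_runs = 10
        start = time.time()
        for _ in range(n_runs):
            sess.run(prod.op)  # run the op only, to avoid copying the result back to host
        elapsed = (time.time() - start) / n_runs
        # An N x N by N x N matmul costs roughly 2*N^3 floating point operations.
        print('%.4f s per matmul, ~%.0f GFLOPS' % (elapsed, 2.0 * N**3 / elapsed / 1e9))

    Comparing the reported GFLOPS against your card's theoretical peak should quickly tell you whether the GPU is being used at all, or whether TensorFlow has silently fallen back to the CPU.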