Tags: python, tensorflow, deep-learning, neural-network, mnist

Why does normalizing MNIST images reduce accuracy?


I am using a basic NN to train on the MNIST dataset and test its accuracy.

System: i5 8th Gen, GPU: Nvidia 1050 Ti

Here is my code:

from __future__ import print_function,absolute_import,unicode_literals,division
import tensorflow as tf

mnist = tf.keras.datasets.mnist

(x_train,y_train) , (x_test,y_test) = mnist.load_data()
#x_train , y_train = x_train/255.0 , y_train/255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28,28)),
    tf.keras.layers.Dense(312,activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10,activation='softmax')
])

model.compile(
    optimizer='Adamax',
    loss="sparse_categorical_crossentropy",
    metrics=['accuracy']
)
model.fit(x_train,y_train,epochs=5)
model.evaluate(x_test,y_test)

When I normalize the images as in the commented-out line, the accuracy drops horribly:

loss: 10392.0626 - accuracy: 0.0980

However, when I don't normalize them, it gives:

loss: 0.2409 - accuracy: 0.9420

In general, normalizing the data helps gradient descent converge faster. Why is there such a huge difference? What am I missing?


Solution

  • Use this:

    (x_train, y_train) , (x_test,y_test) = mnist.load_data()
    x_train , x_test = x_train/255.0 , x_test/255.0
    

    You are dividing your labels (y_train) by 255 instead of the test images, so you are not normalizing properly: x_test stays on the 0–255 scale while x_train is scaled to 0–1, and y_train is no longer the set of integer class indices (0–9) that sparse_categorical_crossentropy expects. That is why the loss blows up and the accuracy falls to roughly chance level.
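
    To see concretely what that line does, you can inspect the arrays after it runs. This is just a quick diagnostic sketch using the same loader and variable names as in the question; the point is only that the labels become fractions and the train/test images end up on different scales:

        import numpy as np
        import tensorflow as tf

        (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

        # The line from the question: it scales the labels instead of the test images.
        x_train, y_train = x_train / 255.0, y_train / 255.0

        print(np.unique(y_train)[:5])        # [0. 0.0039 0.0078 ...] -- no longer integer classes 0-9
        print(x_train.max(), x_test.max())   # 1.0 vs 255 -- train and test inputs on different scales

        # The fix: scale both image arrays and leave the integer labels untouched.
        (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
        x_train, x_test = x_train / 255.0, x_test / 255.0

    With that change the labels stay as integers 0–9 and both x_train and x_test lie in [0, 1], so training and evaluation see inputs on the same scale.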