
TensorFlow: optimizer gives nan as output


I am running a very simple TensorFlow program:

import tensorflow as tf

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)

linear_model = W*x + b

y = tf.placeholder(tf.float32)

squared_error = tf.square(linear_model - y)

loss = tf.reduce_sum(squared_error)

optimizer = tf.train.GradientDescentOptimizer(0.1)

train = optimizer.minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as s:
    file_writer = tf.summary.FileWriter('../../tfLogs/graph',s.graph)
    s.run(init)
    for i in range(1000):
        s.run(train,{x:[1,2,3,4],y:[0,-1,-2,-3]})
    print(s.run([W,b]))

This gives me:

[array([ nan], dtype=float32), array([ nan], dtype=float32)]

What am I doing wrong?


Solution

  • You're using loss = tf.reduce_sum(squared_error) instead of tf.reduce_mean. With reduce_sum the loss grows with the number of data points, and even with these four points the gradient is large enough to make the model diverge: starting from W = 0.3, b = -0.3, the gradient of the summed loss with respect to W is 52, so the first update moves W by 0.1 × 52 = 5.2. That overshoots the optimum, each subsequent update is larger than the last, and the values eventually blow up to nan.

    Another common cause of this kind of problem is a learning rate that is too large. Here you could also fix the divergence by lowering the learning rate from 0.1 to 0.01, but as long as you keep reduce_sum it will break again as soon as you add more points. Using reduce_mean fixes both issues at once, as in the sketch below.
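
    For reference, here is a minimal corrected version of the script with tf.reduce_mean swapped in (everything else unchanged); it should converge to roughly W = -1, b = 1, the line that exactly fits the four training points:

    import tensorflow as tf

    W = tf.Variable([.3], dtype=tf.float32)
    b = tf.Variable([-.3], dtype=tf.float32)
    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)

    linear_model = W * x + b

    # reduce_mean keeps the loss (and hence the gradient) on the same
    # scale no matter how many points are in the batch
    loss = tf.reduce_mean(tf.square(linear_model - y))

    optimizer = tf.train.GradientDescentOptimizer(0.1)
    train = optimizer.minimize(loss)

    init = tf.global_variables_initializer()

    with tf.Session() as s:
        s.run(init)
        for i in range(1000):
            s.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
        print(s.run([W, b]))  # approximately [array([-1.]), array([1.])]

    Because the mean divides the gradient by the batch size, the effective step stays the same whether you train on 4 points or 4 million, so the 0.1 learning rate remains stable as your dataset grows.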