
TensorFlow reinforcement learning softmax layer


I have a problem with my TensorFlow code. Here is a snippet I used in my previous environment, the cart-pole problem:

initializer = tf.contrib.layers.variance_scaling_initializer()

X = tf.placeholder(tf.float32, shape=[None, n_inputs])

hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits)  # probability of moving left

# Build [p(left), p(right)] and sample an action from it
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)

# Treat the sampled action as the target label
y = 1. - tf.to_float(action)

cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)

There were two possible discrete actions (move left or right).

The sigmoid layer produced the probability of one action, and the action actually taken was then sampled randomly according to that probability.
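The two-action trick can be sketched in plain NumPy (the logit value here is hypothetical, not an actual network output):

```python
import numpy as np

# A sigmoid output p is the probability of going left;
# 1 - p is then the probability of going right.
logit = 0.4                                   # hypothetical network logit
p_left = 1.0 / (1.0 + np.exp(-logit))         # sigmoid
p_left_and_right = np.array([p_left, 1.0 - p_left])

# The two entries form a valid probability distribution,
# so an action can be sampled from it directly.
rng = np.random.default_rng(0)
action = rng.choice(2, p=p_left_and_right)    # 0 = left, 1 = right
```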

Now I have an environment with three possible discrete actions, so I tried a softmax layer, but it fails when I start the TensorFlow session. The code looks like this:

initializer = tf.contrib.layers.variance_scaling_initializer()

X = tf.placeholder(tf.float32, shape=[None, n_inputs])

hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)

logits = tf.layers.dense(hidden, n_outputs)

outputs = tf.nn.softmax(logits)  

p_left_and_right = tf.concat(axis=3, values=[outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)

y = 1. - tf.to_float(action)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)
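For what it's worth, a softmax over three logits already sums to one across the classes, which can be checked in plain NumPy (illustrative logits):

```python
import numpy as np

# Hypothetical logits for a 3-action environment.
logits = np.array([1.2, -0.3, 0.5])
probs = np.exp(logits) / np.exp(logits).sum()   # softmax

# Unlike the sigmoid case, the three probabilities already
# form a complete distribution over the actions.
```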

How should I change or improve it to get a working and more idiomatic TensorFlow implementation?


Solution

  • The easiest fix is to change the cross-entropy function. I changed it to sparse_softmax_cross_entropy_with_logits, which doesn't need the labels in one-hot format.

    initializer = tf.contrib.layers.variance_scaling_initializer()
    
    X = tf.placeholder(tf.float32, shape=[None, n_inputs])
    
    hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
    
    logits = tf.layers.dense(hidden, n_outputs)
    
    # tf.multinomial accepts unnormalized log-probabilities, so the
    # logits can be passed in directly -- no softmax or concat needed.
    action = tf.multinomial(logits, num_samples=1)
    
    # Squeeze the sampled action (shape [batch, 1]) to a vector of integer
    # class indices, which is what the sparse variant expects as labels.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=action[:, 0], logits=logits)
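The key difference is the label format: softmax_cross_entropy_with_logits_v2 expects a one-hot row per example, while the sparse variant takes a single integer class index. The two compute the same quantity, as a small NumPy check illustrates (hypothetical logits and label):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])  # hypothetical logits, 3 actions
label = 1                            # sparse form: integer class index
one_hot = np.eye(3)[label]           # dense form: [0., 1., 0.]

# Cross entropy with a one-hot label reduces to the negative
# log-probability of the chosen class.
dense_xent = -np.sum(one_hot * np.log(softmax(logits)))
sparse_xent = -np.log(softmax(logits)[label])
```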