I am running some experiments with a neural network D(x), where x is a batch of 64 input images. I want to compute the gradient of D(x) with respect to x. Should I do the computation as follows?
grad = tf.gradients(D(x), [x])
Thank you everybody!
Yes, you will need to use tf.gradients. For more details, see https://www.tensorflow.org/api_docs/python/tf/gradients.
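Here is a minimal sketch of how that could look in TensorFlow 1.x graph mode (where tf.gradients lives). The network D, the placeholder shape (64x64 RGB images), and the random input are all placeholder assumptions, since the actual model from the question is not shown:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x graph-mode API

# Hypothetical stand-in for the question's D(x); swap in your own network.
def D(x):
    flat = tf.layers.flatten(x)
    return tf.layers.dense(flat, 1)

# Batch of 64 images; the image size/channels here are an assumption.
x = tf.placeholder(tf.float32, shape=[64, 64, 64, 3])

y = D(x)                        # forward pass
grad = tf.gradients(y, [x])[0]  # dD(x)/dx; tf.gradients returns a list, one tensor per input

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grad, feed_dict={x: np.random.rand(64, 64, 64, 3)})
    print(g.shape)  # (64, 64, 64, 3), same shape as x

Note that tf.gradients differentiates the sum of its first argument, so the result has the same shape as x; as long as D treats each example in the batch independently (no batch norm or other cross-batch operations), each slice of grad is the gradient for the corresponding image.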