tensorflow · gradient · nonetype

tape.gradient returning None values


I am trying to set up integrated gradients on my multi-input (two-arm) LSTM classifier, but when I call TensorFlow's gradient it returns all None values. I'm not really sure what the problem is; it occurs on the second set of gradients, at the variable g, which the print statement just after it shows as None.

I'm using tensorflow version 2.10.0

Here is my code:

import numpy as np
import tensorflow as tf

baseline_var = np.zeros((1, 12, len(variables)))
baseline_static = np.zeros((1, len(statics)))

baseline = [baseline_static, baseline_var]
inputs = [churn_static_data.iloc[:1,:].values, churn_var_data[:1,:,:]]

model = lstm

steps = 10

inputs_1, inputs_2 = inputs
baseline_1, baseline_2 = baseline

inputs_1 = tf.convert_to_tensor(inputs_1)
inputs_2 = tf.convert_to_tensor(inputs_2)
baseline_1 = tf.convert_to_tensor(baseline_1)
baseline_2 = tf.convert_to_tensor(baseline_2)

with tf.GradientTape() as tape:
    tape.watch(inputs_1)
    tape.watch(inputs_2)
    predictions = model([inputs_1, inputs_2])

grads_1, grads_2 = tape.gradient(predictions, [inputs_1, inputs_2])

inputs_1 = tf.cast(inputs_1, tf.float32)
inputs_2 = tf.cast(inputs_2, tf.float32)
baseline_1 = tf.cast(baseline_1, tf.float32)
baseline_2 = tf.cast(baseline_2, tf.float32)

path_1 = [baseline_1 + (float(i) / steps) * (inputs_1 - baseline_1) for i in range(steps+1)]
path_1 = tf.stack(path_1)

path_gradients_1 = []
for i in range(steps+1):
    with tf.GradientTape() as tape:
        tape.watch(path_1[i])
        predictions = model([path_1[i], inputs_2])
        g = tape.gradient(predictions, path_1[i])
        print(g)
        path_gradients_1.append(g)

path_gradients_1 = tf.stack(path_gradients_1)
avg_gradients_1 = tf.reduce_mean(path_gradients_1, axis=0)
integrated_gradients_1 = (inputs_1 - baseline_1) * avg_gradients_1
integrated_gradients_1 = tf.reduce_sum(integrated_gradients_1, axis=-1)

path_2 = [baseline_2 + (float(i) / steps) * (inputs_2 - baseline_2) for i in range(steps+1)]
path_2 = tf.stack(path_2)

path_gradients_2 = []
for i in range(steps+1):
    with tf.GradientTape() as tape:
        tape.watch(path_2[i])
        predictions = model([inputs_1, path_2[i]])[0:1,0:1]
        path_gradients_2.append(tape.gradient(predictions, path_2[i]))

path_gradients_2 = tf.stack(path_gradients_2)
avg_gradients_2 = tf.reduce_mean(path_gradients_2, axis=0)
integrated_gradients_2 = (inputs_2 - baseline_2) * avg_gradients_1
integrated_gradients_2 = tf.reduce_sum(integrated_gradients_2, axis=-1)

Many thanks!

I've tried the code above, but the gradient it returns is still None.


Solution

  • So I managed to find a workaround, and I believe I've sussed out the issue.

    I think the problem is that not all of the variables change from one point on the path to the next (i.e. if the baseline for a variable is 0 and the input it moves towards is also 0, the step for that variable is 0, which leads to a division by 0).

    To resolve this, I've implemented a version of the gradient calculation that uses only NumPy (a sketch of the same recipe applied to the first input arm is included after the code below).

            path_2 = [baseline_2 + (float(i) / steps) * (inputs_2 - baseline_2) for i in range(steps+1)]
            path_2 = tf.stack(path_2)
    
            path_gradients_2 = []
            p = model([inputs_1, path_2[0]])[0:1,0]
            prediction_save = [p.numpy()]
            for i in range(1,steps+1):
                p = model([inputs_1, path_2[i]])[0:1,0]
                prediction_save.append(p.numpy())
                dy = prediction_save[i-1]-prediction_save[i]      # change in prediction between consecutive path points
                dx = path_2[i-1].numpy() - path_2[i].numpy()      # change in each input feature
                with np.errstate(divide='ignore', invalid='ignore'):
                    dydx = (dy/dx)
                dydx[~np.isfinite(dydx)] = 0                      # zero the inf/-inf/nan quotients from zero-length steps
                path_gradients_2.append(dydx)
    
            stacked_gradients_2 = np.stack(path_gradients_2)
            avg_gradients_2 = np.mean(stacked_gradients_2, axis=0)
            avg_gradients_2[(avg_gradients_2==np.inf)|(avg_gradients_2==-np.inf)] = 0
            integrated_gradients_2 = (inputs_2.numpy() - baseline_2.numpy()) * avg_gradients_2
            integrated_gradients_2 = np.sum(integrated_gradients_2, axis=1)
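
    For completeness, the same finite-difference recipe can be applied to the first (static) input arm. The snippet below is only a sketch, reusing the objects defined earlier (model, inputs_1, inputs_2, baseline_1, steps); path_1 mirrors the question's construction, and preds_1 is a local name introduced here just for illustration. Any step in which a feature does not change produces an inf or nan quotient, which is masked to zero exactly as above.

            # Straight-line path from the static baseline to the static input
            path_1 = [baseline_1 + (float(i) / steps) * (inputs_1 - baseline_1) for i in range(steps+1)]
            path_1 = tf.stack(path_1)

            path_gradients_1 = []
            preds_1 = [model([path_1[0], inputs_2])[0:1,0].numpy()]
            for i in range(1, steps+1):
                preds_1.append(model([path_1[i], inputs_2])[0:1,0].numpy())
                dy = preds_1[i-1] - preds_1[i]                 # change in prediction between consecutive path points
                dx = path_1[i-1].numpy() - path_1[i].numpy()   # change in each static feature
                with np.errstate(divide='ignore', invalid='ignore'):
                    dydx = dy/dx
                dydx[~np.isfinite(dydx)] = 0                   # mask the inf/nan quotients from zero-length steps
                path_gradients_1.append(dydx)

            avg_gradients_1 = np.mean(np.stack(path_gradients_1), axis=0)
            integrated_gradients_1 = (inputs_1.numpy() - baseline_1.numpy()) * avg_gradients_1
            integrated_gradients_1 = np.sum(integrated_gradients_1, axis=-1)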