I want to recursively assign values to slices in a TensorFlow (1.15) variable.
To illustrate, this works:
import tensorflow as tf  # TF 1.15

def test_loss():
    m = tf.Variable(1)
    n = 3
    A = tf.Variable(tf.zeros([10, 20, 30]))
    B = tf.Variable(tf.ones([10, 20, 30]))
    A = A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5])
    return 1

test_loss()
Out: 1
Then I tried:
def test_loss():
    m = tf.Variable(1)
    #n = 3
    A = tf.Variable(tf.zeros([10, 20, 30]))
    B = tf.Variable(tf.ones([10, 20, 30]))
    for n in range(5):
        A = A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5])
    return 1

test_loss()
But this returns an error message:
---> 10 A = A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5])
...
ValueError: Sliced assignment is only supported for variables
I understand that what assign() returns is not a Variable, so on the next pass through the loop A no longer refers to a Variable.
Then I tried:
def test_loss():
    m = tf.Variable(1)
    #n = 3
    A = tf.Variable(tf.zeros([10, 20, 30]))
    B = tf.Variable(tf.ones([10, 20, 30]))
    for n in range(5):
        A = tf.Variable(A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5]))
    return 1

test_loss()
And then I got:
InvalidArgumentError: Input 'ref' passed float expected ref type while building NodeDef...
Any idea how I could recursively assign values to TensorFlow variable slices?
Here are some insights into using tf.Variable and assign().
for n in range(5):
    A = A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5])
When you do A.assign(B), it in fact returns a tensor (i.e. not a tf.Variable). So the assignment works on the first iteration. From the next iteration onwards, you are trying to slice-assign into a tf.Tensor, which is not allowed, hence the ValueError above.
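You can see this with a quick type check (a minimal sketch; the exact class names printed depend on the TF version):

import tensorflow as tf  # TF 1.x graph mode

A = tf.Variable(tf.zeros([10, 20, 30]))
update = A[2:3, 10:12, 20:22].assign(tf.ones([1, 2, 2]))
print(type(A))       # a Variable
print(type(update))  # a plain Tensor; slice-assigning into it raises the ValueError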
for n in range(5):
    A = tf.Variable(A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5]))
This is again a pretty bad idea, because you're creating new variables inside a loop; do this enough times and you'll run out of memory. But it wouldn't even run, because you've ended up with a funky deadlock: you are trying to create a variable from a tensor that will only be computed when the graph executes, and to execute the graph you need the variables in the first place.
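Here is a minimal sketch of that kind of chicken-and-egg dependency (B and C below are illustrative names, not the tensors from the question):

import tensorflow as tf  # TF 1.x graph mode

B = tf.Variable(tf.ones([3]))
C = tf.Variable(B * 2.0)  # C's initial value depends on B's (uninitialized) value

with tf.Session() as sess:
    # global_variables_initializer gives no ordering guarantee between the
    # two initializers, so this can fail with FailedPreconditionError
    # ("Attempting to use uninitialized value ..."). TF 1.x's usual escape
    # hatch for this pattern is B.initialized_value(), but that doesn't
    # rescue the sliced-assign construction above.
    sess.run(tf.global_variables_initializer())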
The best way I can think of doing this would be for test_loss to return the update operation, and to make n a TensorFlow placeholder. Then, at each iteration, you run the update in a session and feed in the current value of n.
def test_loss(n):
    m = tf.Variable(1)
    A = tf.Variable(tf.zeros([10, 20, 30]))
    B = tf.Variable(tf.ones([10, 20, 30]))
    update = A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5])
    return update

with tf.Session() as sess:
    tf_n = tf.placeholder(shape=None, dtype=tf.int32, name='n')
    update_op = test_loss(tf_n)
    print(type(update_op))
    tf.global_variables_initializer().run()
    for n in range(5):
        sess.run(update_op, feed_dict={tf_n: n})
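If you also want to verify the result, one option (a sketch; test_loss_v2 and A_var are illustrative names, not part of the original code) is to return the variable alongside the update op and fetch it after the loop:

def test_loss_v2(n):
    # Same as above, but also returns A so it can be inspected afterwards.
    m = tf.Variable(1)
    A = tf.Variable(tf.zeros([10, 20, 30]))
    B = tf.Variable(tf.ones([10, 20, 30]))
    update = A[m+1:n+1, 10:12, 20:22].assign(B[m:n, 2:4, 3:5])
    return update, A

with tf.Session() as sess:
    tf_n = tf.placeholder(shape=None, dtype=tf.int32, name='n')
    update_op, A_var = test_loss_v2(tf_n)
    sess.run(tf.global_variables_initializer())
    for n in range(5):
        sess.run(update_op, feed_dict={tf_n: n})
    # With m == 1, updates fire for n = 2, 3, 4, filling A[2:5, 10:12, 20:22]
    # with the ones copied from B.
    print(sess.run(A_var)[2:5, 10:12, 20:22])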