I am trying to make a seq2seq model using tfa.seq2seq.BaseDecoder in TensorFlow 2.1. I have
tf.keras.layers.GRU(64)(inputs, [states])
where inputs has shape (batch_size, 1, embedding_dimension) and comes from
inputs = tf.keras.layers.Embedding(1000, 64, mask_zero=True)(tf.fill([batch_size, 1], value=1))
and states are the encoder hidden states for the batch.
I am implementing tfa.seq2seq.BaseDecoder's initialize, step and some properties, and the error is happening in step, which contains the line that I have copied out above.
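For context, step looks roughly like this (simplified sketch; apart from the target_rnn_cell call, the output projection, next-input lookup, and stopping condition below are placeholders, not my actual code):

def step(self, time, inputs, states, training=None):
    # inputs: embedded previous target characters, shape (batch_size, 1, embedding_dimension)
    # states: decoder state, initialized from the encoder states, shape (batch_size, 64)
    outputs, [states] = self.lemmatizer.target_rnn_cell(inputs, [states])  # <- the line in question
    outputs = self.lemmatizer.target_output_layer(outputs)                 # placeholder projection to logits
    next_inputs = self.lemmatizer.target_embedding(                        # placeholder: embed the next
        self.targets[:, time + 1 : time + 2])                              #   gold characters (teacher forcing)
    finished = time + 1 >= self.targets_length                             # placeholder stopping condition
    return outputs, states, next_inputs, finished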
However, it gives me the following error message (some function names are changed to make the question easier to explain and are slightly different in the actual code).
Traceback (most recent call last):
File "/home/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2659, in _set_inputs
outputs = self(inputs, **kwargs)
File "/home/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 773, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/home/.local/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in converted code:
/home/lemmatizer_noattn.py:155 call *
output_layer, _, output_lens, _ = self.DecoderTraining((source_states, target_charseqs), True)
/home/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:785 __call__
str(e) + '\n"""')
TypeError: You are attempting to use Python control flow in a layer that was not declared to be dynamic. Pass `dynamic=True` to the class constructor.
Encountered error:
"""
in converted code:
/home/.local/lib/python3.7/site-packages/tensorflow_addons/seq2seq/decoder.py:162 call *
return dynamic_decode(
/home/.local/lib/python3.7/site-packages/tensorflow_addons/seq2seq/decoder.py:405 body *
(next_outputs, decoder_state, next_inputs, decoder_finished) = decoder.step(
/home/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py:2478 while_loop_v2
return_same_structure=True)
/home/lemmatizer_noattn.py:79 step *
outputs, [states] = self.lemmatizer.target_rnn_cell(inputs, [states])
/home/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:539 __iter__
self._disallow_iteration()
/home/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:535 _disallow_iteration
self._disallow_in_graph_mode("iterating over `tf.Tensor`")
/home/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:515 _disallow_in_graph_mode
" this function with @tf.function.".format(task))
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
"""
I didn't manage to figure out from the documentation where the error might be coming from, nor did I find any advice on the internet. Any ideas on where the problem might be?
This line looks like an unpacking into a mixed tuple-and-list pattern, which AutoGraph doesn't rewrite (resulting in this confusing error):
/home/lemmatizer_noattn.py:79 step *
outputs, [states] = self.lemmatizer.target_rnn_cell(inputs, [states])
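To see why this produces that particular error, here is a minimal standalone sketch (the tensors are invented, nothing from your code): when the value that the [states] pattern is applied to turns out to be a plain tensor rather than a list, Python has to iterate over the tensor during unpacking, which graph mode forbids:

import tensorflow as tf

@tf.function
def unpack_demo():
    output = tf.zeros([4, 64])
    state = tf.zeros([4, 64])
    # The [states] pattern forces iteration over state, a plain tensor,
    # which graph mode disallows -> OperatorNotAllowedInGraphError.
    outputs, [states] = output, state
    return outputs, states

# unpack_demo()  # raises the same OperatorNotAllowedInGraphError as in the traceback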
What does self.lemmatizer.target_rnn_cell return? Try printing the value before expanding it, something like this:
retval = self.lemmatizer.target_rnn_cell(inputs, [states])
print(retval)            # inspect the actual structure before unpacking it
outputs = retval[0]
states = retval[1][0]
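If retval turns out not to be the (output, [state]) pair that line assumes, adjust either the cell or the unpacking. One possibility, assuming target_rnn_cell is the tf.keras.layers.GRU(64) layer from the question: that layer returns only the output tensor unless it is built with return_state=True, in which case the call gives back the output and the final state as two separate values and no nested-list pattern is needed:

# Assumes target_rnn_cell = tf.keras.layers.GRU(64, return_state=True),
# which returns (output, final_state) as two separate tensors.
outputs, states = self.lemmatizer.target_rnn_cell(inputs, initial_state=[states])

Alternatively, a per-step cell such as tf.keras.layers.GRUCell(64) is often a better fit inside step, since it consumes a single timestep of shape (batch_size, embedding_dimension) and returns the new state alongside the output.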