Tags: python, tensorflow, pytorch, huggingface-transformers, onnx

While exporting a T5 model to ONNX using fastT5, getting "RuntimeError: output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2]"


I'm trying to convert a T5 model to ONNX using the fastT5 library, but I get an error while running the following code:

from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model_name = 't5-small'
model = export_and_get_onnx_model(model_name)

tokenizer = AutoTokenizer.from_pretrained(model_name)
t_input = "translate English to French: The universe is a dark forest."
token = tokenizer(t_input, return_tensors='pt')

tokens = model.generate(input_ids=token['input_ids'],
                        attention_mask=token['attention_mask'],
                        num_beams=2)

output = tokenizer.decode(tokens.squeeze(), skip_special_tokens=True)
print(output)

The error:

/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py:244: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] < attention_mask.shape[1]:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-16-80094b7c4f6f> in <module>()
      7                     input_names=decoder_input_names,
      8                     output_names=decoder_output_names,
----> 9                     dynamic_axes=dyn_axis_params,
     10                     )

24 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions)
    497                 position_bias = position_bias + mask  # (batch_size, n_heads, seq_length, key_length)
    498 
--> 499         scores += position_bias
    500         attn_weights = F.softmax(scores.float(), dim=-1).type_as(
    501             scores

RuntimeError: output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2]

Can someone please help me solve this issue?
Thank you.


Solution

  • I've checked the repository; this looks like a known issue, as reported here: https://github.com/Ki6an/fastT5/issues/1

    The developer of the library has posted a solution and created a notebook here: https://colab.research.google.com/drive/1HuH1Ui3pCBS22hW4djIOyUBP5UW93705?usp=sharing

    The solution is to modify the modeling_t5.py file at lines 426 and 494:

    # Define this at line 426:
    int_seq_length = int(seq_length)
    
    # Change line 494 from this:
    # position_bias = position_bias[:, :, -seq_length:, :]
    # to this:
    position_bias = position_bias[:, :, -int_seq_length:, :]
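
    If you're not sure which copy of modeling_t5.py your environment actually loads, you can print its location from the installed package first (a minimal sketch; it only prints the path of the file you would patch):

    import transformers.models.t5.modeling_t5 as modeling_t5
    print(modeling_t5.__file__)  # path to the installed modeling_t5.py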
    

    If you don't want to modify the file yourself, you will need to wait until this pull request is merged into the Transformers library.
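
  • For context on where the mismatched shapes come from: with cached past key/values, the decoder attends with a query length of 1 while position_bias still spans the full key length, and the in-place addition scores += position_bias cannot write the broadcast result back into the smaller tensor. A minimal sketch that reproduces just that broadcast failure, with the shapes copied from the traceback (illustrative only, not fastT5 code):

    import torch
    
    scores = torch.zeros(5, 8, 1, 2)         # (batch, n_heads, query_len=1, key_len=2)
    position_bias = torch.zeros(5, 8, 2, 2)  # still covers the full key length
    
    scores += position_bias  # RuntimeError: output with shape [5, 8, 1, 2]
                             # doesn't match the broadcast shape [5, 8, 2, 2]

    As far as I can tell, the int(seq_length) cast in the fix works because it turns the slice bound into a plain Python value, which the tracer records as a constant (that is what the TracerWarning at the top of the traceback is about), so the exported graph keeps position_bias sliced down to the current query length.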