Tags: python, tensorflow, huggingface-transformers, onnx, huggingface-tokenizers

TypeError: an integer is required (got type NoneType)


Goal: Amend this Notebook to work with the distilbert-base-uncased model.

Error occurs in Section 1.3.

Kernel: conda_pytorch_p36. I did Restart & Run All and refreshed the file view in the working directory.


Section 1.3:

from transformers import AutoTokenizer

# define the tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    configs.output_dir, do_lower_case=configs.do_lower_case)

Traceback:

Evaluating PyTorch full precision accuracy and performance:
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/data/processors/glue.py:67: FutureWarning: This function will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py
  warnings.warn(DEPRECATION_WARNING.format("function"), FutureWarning)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-31-1f864e3046eb> in <module>
    144 # Evaluate the original FP32 BERT model
    145 print('Evaluating PyTorch full precision accuracy and performance:')
--> 146 time_model_evaluation(model, configs, tokenizer)
    147 
    148 # Evaluate the INT8 BERT model after the dynamic quantization

<ipython-input-31-1f864e3046eb> in time_model_evaluation(model, configs, tokenizer)
    132 def time_model_evaluation(model, configs, tokenizer):
    133     eval_start_time = time.time()
--> 134     result = evaluate(configs, model, tokenizer, prefix="")
    135     eval_end_time = time.time()
    136     eval_duration_time = eval_end_time - eval_start_time

<ipython-input-31-1f864e3046eb> in evaluate(args, model, tokenizer, prefix)
     22     results = {}
     23     for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs):
---> 24         eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True)
     25 
     26         if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]:

<ipython-input-31-1f864e3046eb> in load_and_cache_examples(args, task, tokenizer, evaluate)
    121     all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    122     all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
--> 123     all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
    124     if output_mode == "classification":
    125         all_labels = torch.tensor([f.label for f in features], dtype=torch.long)

TypeError: an integer is required (got type NoneType)

Please let me know if there's anything else I can add to the post.


Solution

  • A developer explains this behavior in this GitHub issue.

    The notebook was written for BERT, which uses token_type_ids (the segment IDs that distinguish the two sentences of a pair).

    DistilBERT does not use token_type_ids at all, so the feature-conversion step leaves each feature's token_type_ids set to None. That None is exactly what torch.tensor chokes on in the traceback above.

    So adapting the notebook means removing or conditionally guarding every use of token_type_ids for this model specifically; a sketch follows below.
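You can see the mismatch directly by comparing what the two tokenizers return (a quick check; it assumes the standard bert-base-uncased and distilbert-base-uncased checkpoints from the Hub):

from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
distil_tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# BERT produces token_type_ids; DistilBERT does not produce them at all
print(bert_tok("hello world").keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])
print(distil_tok("hello world").keys())
# dict_keys(['input_ids', 'attention_mask'])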
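One minimally invasive fix is to have load_and_cache_examples build the token_type_ids tensor only when the features actually carry one. This is a sketch against the notebook's own names (features, TensorDataset) and assumes the "classification" output mode shown in the traceback; it is not the dev's exact fix:

import torch
from torch.utils.data import TensorDataset

all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)

# DistilBERT features carry token_type_ids=None, so only build that tensor when present
if features[0].token_type_ids is not None:
    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
    dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
else:
    dataset = TensorDataset(all_input_ids, all_attention_mask, all_labels)

The evaluation loop needs the symmetric change: it unpacks the batch by position and passes token_type_ids into the model, but DistilBertForSequenceClassification.forward does not accept that argument, so it must be dropped from the model inputs as well.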