Tags: python, pytorch, nlp, huggingface-transformers, huggingface

HuggingFace Transformers Trainer._maybe_log_save_evaluate IndexError: invalid index to scalar variable


So, I'm working on fine-tuning a BART model for question generation, and training seems to go fine. Then, at the end of the first validation pass, it suddenly stops with the IndexError shown below. The error is raised from the Trainer._maybe_log_save_evaluate method.

IndexError: invalid index to scalar variable.

Here is my code for setting up the model, tokenizer, dataset, etc.:

from datasets import load_dataset
from evaluate import load
from accelerate import Accelerator
from transformers import BartForConditionalGeneration, BartConfig, BartTokenizer
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer 

dataset = load_dataset("squad")
metric = load("squad")
accelerator = Accelerator()

def model_init():
  config = BartConfig()
  return accelerator.prepare(BartForConditionalGeneration(config).from_pretrained("facebook/bart-base").cuda())

tokenizer = accelerator.prepare(BartTokenizer.from_pretrained("facebook/bart-base"))

def preprocess_function(data):
  inputs = tokenizer(data['context'], add_special_tokens=True, max_length=256, padding="max_length", truncation=True)
  targets = tokenizer(data['question'], add_special_tokens=True, max_length=32, padding="max_length", truncation=True)
  return {'input_ids': inputs['input_ids'], 'attention_mask': inputs['attention_mask'], 'labels': targets['input_ids']}

dataset = dataset.map(preprocess_function, batched=True).shuffle(seed=777)

training_args = Seq2SeqTrainingArguments(
  output_dir="./results",
  evaluation_strategy="steps",
  eval_steps=500,
  save_steps=50000,
  learning_rate=2e-5,
  per_device_train_batch_size=4,
  per_device_eval_batch_size=4,
  num_train_epochs=2,
  weight_decay=0.01,
  predict_with_generate=True,
)

def compute_metrics(eval_pred):
  predictions, labels = eval_pred
  predictions = predictions.argmax(axis=-1)
  return metric.compute(predictions=predictions, references=labels)

trainer = Seq2SeqTrainer(
  args=training_args,
  train_dataset=dataset["train"],
  eval_dataset=dataset["validation"],
  tokenizer=tokenizer,
  model_init=model_init,
  compute_metrics=compute_metrics,
)

trainer.train()

I can't seem to figure out why this is happening and nothing I've found online has helped.


Solution

  • Your issue comes from your compute_metrics function: you're using a question-answering metric ("squad") with a text-generation model.

    To fix it, replace metric = load("squad") with a text-generation metric such as BLEU: metric = load("bleu"). Then adapt your compute_metrics function accordingly:

    def compute_metrics(eval_pred):
        predictions, references = eval_pred
        # With predict_with_generate=True, predictions are generated token IDs,
        # so decode them (and the reference labels) back to text before scoring.
        predictions = tokenizer.batch_decode(predictions)
        references = tokenizer.batch_decode(references)
        # BLEU expects a list of reference texts per prediction.
        references = [[ref] for ref in references]
        return metric.compute(predictions=predictions, references=references)
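
    If decoding fails or the scores look off, one common cause is that the Trainer pads predictions and labels with -100 when gathering batches of different lengths, which cannot be decoded. Below is a minimal sketch of a more defensive variant; the use of numpy, skip_special_tokens=True, and tokenizer.pad_token_id here are assumptions on my part, not something your traceback confirms:

    import numpy as np

    def compute_metrics(eval_pred):
        predictions, references = eval_pred
        # Assumption: replace any -100 padding with the pad token ID so batch_decode works.
        predictions = np.where(predictions != -100, predictions, tokenizer.pad_token_id)
        references = np.where(references != -100, references, tokenizer.pad_token_id)
        # Drop special tokens (<s>, </s>, <pad>, ...) from the decoded text before scoring.
        predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)
        references = tokenizer.batch_decode(references, skip_special_tokens=True)
        references = [[ref] for ref in references]
        return metric.compute(predictions=predictions, references=references)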