Tags: python, nlp, gpu, huggingface-transformers

Understanding GPU usage in Hugging Face classification - Total optimization steps


I am training a Hugging Face Longformer for a classification problem and got the output below.

  1. I am confused about Total optimization steps. Since I have 7000 training data points, 5 epochs, and Total train batch size (w. parallel, distributed & accumulation) = 64, shouldn't I get 7000 * 5 / 64 steps? That comes to 546.875. Why does it show Total optimization steps = 545?

  2. Why, in the output below, does Input ids are automatically padded from 1500 to 1536 to be a multiple of config.attention_window: 512 appear 16 times before [ 23/545 14:24 < 5:58:16, 0.02 it/s, Epoch 0.20/5]? What are these steps?

==========================================================

***** Running training *****
  Num examples = 7000
  Num Epochs = 5
  Instantaneous batch size per device = 4
  Total train batch size (w. parallel, distributed & accumulation) = 64
  Gradient Accumulation steps = 16
  Total optimization steps = 545
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
Initializing global attention on CLS token...
Input ids are automatically padded from 1500 to 1536 to be a multiple of `config.attention_window`: 512
 [ 23/545 14:24 < 5:58:16, 0.02 it/s, Epoch 0.20/5]
Epoch   Training Loss   Validation Loss

Update

Adding the Trainer and TrainingArguments:

# class weights
import torch
from torch import nn
from transformers import Trainer

class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.get("labels")
        # forward pass
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # compute a custom weighted loss (here: two labels with different weights)
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 0.5243])).to(device)
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss

trainer = CustomTrainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_df_tuning_dataset_tokenized,
    eval_dataset=val_dataset_tokenized,
)
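
Note: device and compute_metrics are assumed to be defined elsewhere in the notebook. A minimal sketch of what those definitions could look like (the accuracy-only compute_metrics here is just an illustration, not necessarily the metric function that was actually used):

import numpy as np
import torch

# the device used inside compute_loss above
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# a minimal metric function for Trainer; eval_pred unpacks into (logits, labels)
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}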



# define the training arguments
training_args = TrainingArguments(
    num_train_epochs=5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    load_best_model_at_end=True,
    greater_is_better=False,
    disable_tqdm=False,
    weight_decay=0.01,
    optim="adamw_torch",
    run_name="longformer-classification-16March2022",
)

Solution

    1. Why 545 optimization steps?

    Looking at the implementation of the transformers package, we see that the Trainer uses a variable called max_steps when printing the Total optimization steps message in the train method:

    logger.info("***** Running training *****")
    logger.info(f"  Num examples = {num_examples}")
    logger.info(f"  Num Epochs = {num_train_epochs}")
    logger.info(f"  Instantaneous batch size per device = {args.per_device_train_batch_size}")
    logger.info(f"  Total train batch size (w. parallel, distributed & accumulation) = {total_train_batch_size}")
    logger.info(f"  Gradient Accumulation steps = {args.gradient_accumulation_steps}")
    logger.info(f"  Total optimization steps = {max_steps}")
    

    Permalink to the above snippet in the transformers repo

    The Trainer has the following bit of code earlier in the train method:

    class Trainer:
        [...]
        def train(self) -> None:
            [Some irrelevant code omitted here...]
    
            total_train_batch_size = args.train_batch_size * args.gradient_accumulation_steps * args.world_size
            if train_dataset_is_sized:
                num_update_steps_per_epoch = len(train_dataloader) // args.gradient_accumulation_steps
                num_update_steps_per_epoch = max(num_update_steps_per_epoch, 1)
                if args.max_steps > 0:
                    max_steps = args.max_steps
                    num_train_epochs = args.max_steps // num_update_steps_per_epoch + int(
                        args.max_steps % num_update_steps_per_epoch > 0
                    )
                    # May be slightly incorrect if the last batch in the training dataloader has a smaller size but it's
                    # the best we can do.
                    num_train_samples = args.max_steps * total_train_batch_size
                else:
                    max_steps = math.ceil(args.num_train_epochs * num_update_steps_per_epoch)
                    num_train_epochs = math.ceil(args.num_train_epochs)
                    num_train_samples = len(self.train_dataset) * args.num_train_epochs
    

    Permalink to the above snippet in the transformers repo

    In your example, total_train_batch_size = args.train_batch_size * args.gradient_accumulation_steps * args.world_size = 4 * 16 * 1 = 64, as expected.

    Then we have num_update_steps_per_epoch = len(train_dataloader) // args.gradient_accumulation_steps, which gives us num_update_steps_per_epoch = len(train_dataloader) // 16.

    Now the length of a DataLoader is equal to the number of batches in that DataLoader. Since you have 7000 samples and a per_device_train_batch_size of 4, this gives us 7000 / 4 = 1750 batches. Going back to num_update_steps_per_epoch, we now have num_update_steps_per_epoch = 1750 // 16 = 109 (Python integer division takes the floor).

    You haven't specified max_steps, so we fall through to max_steps = math.ceil(args.num_train_epochs * num_update_steps_per_epoch), which gives max_steps = math.ceil(5 * 109) = 545.
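
    To double-check this without reading the Trainer source, the same arithmetic can be reproduced in plain Python. This is a standalone sketch using only the numbers from the question, not the Trainer internals:

    import math

    num_examples = 7000
    per_device_train_batch_size = 4
    gradient_accumulation_steps = 16
    world_size = 1          # single GPU
    num_train_epochs = 5

    # length of the DataLoader: ceil(7000 / 4) = 1750 batches per epoch
    num_batches_per_epoch = math.ceil(num_examples / per_device_train_batch_size)

    # 1750 // 16 = 109 optimizer updates per epoch (floor division drops the remainder)
    num_update_steps_per_epoch = max(num_batches_per_epoch // gradient_accumulation_steps, 1)

    # ceil(5 * 109) = 545
    max_steps = math.ceil(num_train_epochs * num_update_steps_per_epoch)

    total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * world_size

    print(total_train_batch_size)  # 64
    print(max_steps)               # 545

    The naive estimate of 7000 * 5 / 64 = 546.875 differs because the 1750 batches in each epoch are first floored to 109 whole optimizer updates, and only then multiplied by the number of epochs.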

    2. Why does the padding operation get logged 16 times?

    In a Transformer architecture, you technically don't have to pad all your samples to the same length. What actually matters is that the samples within a batch are the same length; that length can differ from batch to batch. For Longformer specifically, the batch's sequence length is additionally rounded up to a multiple of config.attention_window, which is where the 1500 -> 1536 message comes from (a short sketch of this arithmetic follows the list below).

    This means that this message will appear for every batch that goes through a forward pass. As to why the message appeared only 16 times even though the progress bar already shows 23 optimization steps (each of which corresponds to 16 forwarded batches because of gradient accumulation), I can think of two possible reasons:

    1. The logging of the padding operation and the logging of the progress bar are happening on two different threads and the former is lagging behind a bit
    2. (Extremely unlikely) you had batches that did not need to be padded because all samples had the same length and that length was a multiple of 512 already.
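
    As for the padded length itself, the 1500 -> 1536 value in the log follows the rounding rule the message describes: the batch's sequence length (1500) is rounded up to the next multiple of config.attention_window (512). A standalone sketch of that arithmetic, not a call into the Longformer code:

    import math

    attention_window = 512   # config.attention_window for this Longformer checkpoint
    batch_seq_length = 1500  # sequence length of the batch before Longformer's extra padding

    # round up to the next multiple of the attention window: 1500 -> 1536
    padded_length = math.ceil(batch_seq_length / attention_window) * attention_window
    print(padded_length)  # 1536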