python, machine-learning, nlp, pytorch, huggingface-transformers

I fine-tuned a pre-trained BERT model for sentence classification, but I can't get it to predict labels for new sentences.


Below are the results of my fine-tuning.

Epoch   Training Loss   Valid. Loss   Valid. Accur.   Training Time   Validation Time
1       0.16            0.11          0.96            0:02:11         0:00:05
2       0.07            0.13          0.96            0:02:19         0:00:05
3       0.03            0.14          0.97            0:02:22         0:00:05
4       0.02            0.16          0.96            0:02:21         0:00:05

Next I tried to use the model to predict labels for sentences from a CSV file. I created a label column, set its type to int64, and ran the prediction:

print('Predicting labels for {:,} test sentences...'.format(len(input_ids)))
model.eval()
# Tracking variables 
predictions , true_labels = [], []
# Predict 
for batch in prediction_dataloader:
  # Add batch to GPU
  batch = tuple(t.to(device) for t in batch)

  # Unpack the inputs from our dataloader
  b_input_ids, b_input_mask, b_labels = batch

  # Telling the model not to compute or store gradients, saving memory and 
  # speeding up prediction
  with torch.no_grad():
      # Forward pass, calculate logit predictions
      outputs = model(b_input_ids, token_type_ids=None, 
                      attention_mask=b_input_mask)

  logits = outputs[0]

  # Move logits and labels to CPU
  logits = logits.detach().cpu().numpy()
  label_ids = b_labels.to('cpu').numpy()

  # Store predictions and true labels
  predictions.append(logits)
  true_labels.append(label_ids)


However, while I am able to print the predictions (e.g. [4.235, -4.805]) and the true labels (which come out as [NaN, NaN, ...]), I am unable to get the actual predicted labels (0 or 1). Am I missing something here?


Solution

  • The output of the model is logits, i.e., the raw scores for each class before they are normalized into a probability distribution with softmax.

    If you take your output [4.235, -4.805] and run softmax over it:

    In [1]: import torch
    In [2]: import torch.nn.functional as F
    In [3]: F.softmax(torch.tensor([4.235, -4.805]), dim=0)
    Out[3]: tensor([9.9988e-01, 1.1856e-04])
    

    You get a 99% probability score for label 0. When the logits are a 2D tensor of shape (batch_size, num_labels), you can get the predicted classes by taking the argmax along the label dimension:

    logits.argmax(dim=1)
    
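    Applied to your prediction loop, where predictions is a list of per-batch NumPy arrays of logits, a minimal sketch (reusing the variable names from your code, with two labels assumed) would be:

    import numpy as np

    # Stack the per-batch logit arrays into one (num_sentences, num_labels) array
    flat_logits = np.concatenate(predictions, axis=0)

    # Argmax along the label dimension gives a 0 or 1 prediction per sentence
    predicted_labels = np.argmax(flat_logits, axis=1)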

    The NaN values in your true_labels are probably a bug in how you load the data; they have nothing to do with the BERT model.
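
    As a quick sanity check, inspect the label column right after loading the CSV; a minimal sketch, assuming the file is read with pandas into a DataFrame df and the column is named label (both names hypothetical):

    import pandas as pd

    # Hypothetical file and column names, for illustration only
    df = pd.read_csv("new_sentences.csv")

    # A non-zero count here means the NaNs are introduced at load time,
    # before the data ever reaches the model
    print(df["label"].isna().sum(), "missing values in the label column")

    If these sentences are genuinely unlabeled, the NaNs are expected and true_labels can simply be ignored; only predictions matters.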