Tags: python-3.x, pytorch, loss-function, dnn

Custom loss function in PyTorch that uses DNN outputs and additional variables


(I am sorry if my English is not good)

I know how to create my own loss function in PyTorch when the function only requires the DNN output vector (predicted) and the ground-truth vector.

Now I want to use additional variables to calculate the loss.

I create my training and test data as follows:

DNN input:

  1. Data_A -> processing 1 -> Data_X

DNN output:

  1. Data_A -> processing 1 -> Data_X
  2. Data_B -> processing 1 -> Data_P
  3. Data_X , Data_P -> processing 2 -> Data_Y

Then I divide Data_X and Data_Y into training and test data:

    x_train, x_test, y_train, y_test = train_test_split(Data_X, Data_Y, test_size=0.2, random_state=0)
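Since I also want Data_A and Data_B in the loss, I keep them aligned with the same split. As far as I know, scikit-learn's train_test_split accepts any number of arrays and applies the same shuffle to all of them (the a_* and b_* names below are just my own):

    from sklearn.model_selection import train_test_split

    # One call splits every array with the same random permutation,
    # so row i of a_train still corresponds to row i of x_train.
    x_train, x_test, y_train, y_test, a_train, a_test, b_train, b_test = train_test_split(
        Data_X, Data_Y, Data_A, Data_B, test_size=0.2, random_state=0)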

I want to use Data_A, Data_B, Data_Y (predicted), and Data_Y (ground truth) to calculate the loss. Every example of a custom loss function I have seen uses only Data_Y (predicted) and Data_Y (ground truth), and I have been able to write such a loss function before. However, I don't know what to do when I want to use additional variables as well. Is there a good way? Thank you for your help!


Solution

  • You have no restrictions on the structure of your loss function (as long as the gradients make sense).
    For instance, you can have:

    import torch.nn as nn

    class MyLossLayer(nn.Module):
        def __init__(self):
            super(MyLossLayer, self).__init__()

        def forward(self, pred_a, pred_b, gt_target):
            # I'm just guessing at the formula here - compute whatever you
            # want, as long as you do not break the gradients.
            loss = pred_a * (pred_b - gt_target)
            return loss.mean()
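
  • The extra tensors reach forward the same way the predictions do: you simply pass them when you call the loss. Below is a minimal, self-contained sketch of a training loop; the model, the shapes, and names like a_batch are made up for illustration, and only the call pattern matters:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Dummy data: x is the DNN input, y the ground truth, a an extra variable.
    x = torch.randn(100, 8)
    y = torch.randn(100, 1)
    a = torch.randn(100, 1)

    model = nn.Linear(8, 1)  # stand-in for the real DNN
    criterion = MyLossLayer()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # The DataLoader yields the extra tensor right alongside x and y.
    loader = DataLoader(TensorDataset(x, y, a), batch_size=10)

    for x_batch, y_batch, a_batch in loader:
        pred = model(x_batch)
        # Pass as many tensors as the loss needs - forward() takes them all.
        loss = criterion(pred, a_batch, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    Autograd does not care how many arguments the loss takes; gradients flow back through every input that came out of the network.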