deep-learning · caffe · pycaffe · matcaffe

Tackling imbalanced class members in Caffe: weight contribution of each instance to loss value


I have highly imbalanced data. I know that some users suggest using the InfoGainLoss loss function; however, I ran into a few errors when I tried to add this layer to Caffe.
I have the following questions, and I would really appreciate it if someone could guide me:

  1. How can I add this layer to Caffe? Does anyone know of any sources/code for this layer?
  2. I want to apply it to image segmentation, where the class proportions vary. How can I create the H matrix (a stack of weights) for my images? And how can the InfogainLoss layer read the specific weight matrix (H) that belongs to a specific image?
  3. After adding the .cpp and .cu versions of the InfogainLoss layer to Caffe, should I remake Caffe?

I am sorry for asking so many questions, but they are all related to each other and all concern me. I would be thankful for any help and support. Thanks.


Solution

  1. If you copy from the current infogain_loss_layer.cpp, you can adapt it easily. For the forward pass, change lines 59-66 like:

    // assuming num = batch size, dim = number of classes,
    // image_dim = image height * width, and Caffe's standard
    // N x C x H x W layout, so sample i, class j, pixel k lives at
    // index (i * dim + j) * image_dim + k
    Dtype loss = 0;
    for (int i = 0; i < num; ++i) {
      for (int k = 0; k < image_dim; ++k) {
        const int label = static_cast<int>(bottom_label[i * image_dim + k]);
        for (int j = 0; j < dim; ++j) {
          Dtype prob = std::max(bottom_data[(i * dim + j) * image_dim + k],
                                Dtype(kLOG_THRESHOLD));
          loss -= infogain_mat[label * dim + j] * log(prob);
        }
      }
    }
    

    Similarly, for the backward pass you could change lines 95-101 like:

    // scale carries the sign and normalization from the original layer,
    // e.g. scale = -top[0]->cpu_diff()[0] / num
    for (int i = 0; i < num; ++i) {
      for (int k = 0; k < image_dim; ++k) {
        const int label = static_cast<int>(bottom_label[i * image_dim + k]);
        for (int j = 0; j < dim; ++j) {
          Dtype prob = std::max(bottom_data[(i * dim + j) * image_dim + k],
                                Dtype(kLOG_THRESHOLD));
          bottom_diff[(i * dim + j) * image_dim + k] =
              scale * infogain_mat[label * dim + j] / prob;
        }
      }
    }
    

    This is a fairly naive implementation and I don't see an obvious way to optimize it. You will also need to change some of the setup code in Reshape; a sketch follows.
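    To make the Reshape point concrete, here is a minimal sketch of what the adapted checks might look like, assuming bottom[0] holds N x C x H x W class probabilities and bottom[1] holds N x 1 x H x W labels. It mirrors the structure of the existing infogain_loss_layer.cpp but is an illustration, not a drop-in patch:

    template <typename Dtype>
    void InfogainLossLayer<Dtype>::Reshape(
        const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
      LossLayer<Dtype>::Reshape(bottom, top);
      // H can come from a file (infogain_) or from a third bottom blob.
      Blob<Dtype>* infogain = (bottom.size() < 3) ? &infogain_ : bottom[2];
      // Labels must match the spatial dimensions of the predictions
      // instead of being a single value per image.
      CHECK_EQ(bottom[1]->channels(), 1);
      CHECK_EQ(bottom[1]->height(), bottom[0]->height());
      CHECK_EQ(bottom[1]->width(), bottom[0]->width());
      // dim is now the number of classes (channels), not count / num.
      const int dim = bottom[0]->channels();
      CHECK_EQ(infogain->num(), 1);
      CHECK_EQ(infogain->channels(), 1);
      CHECK_EQ(infogain->height(), dim);
      CHECK_EQ(infogain->width(), dim);
    }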

  2. In this PR the suggestion is to put min_count/|i| in the diagonal entries of H, where |i| is the number of samples with label i, and to set everything else to 0. Also see this. As for loading it: the weight matrix H is fixed for all inputs, so you can load it from an lmdb file or in other ways; a sketch of building such an H is shown below.
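    As a minimal sketch (my own illustration, not code from the PR): count the pixels per class over your training labels, fill H as described, and serialize it as a BlobProto, which the InfogainLoss layer can read through the source field of infogain_loss_param. The helper name WriteInfogainMatrix is hypothetical:

    #include <algorithm>
    #include <string>
    #include <vector>

    #include "caffe/proto/caffe.pb.h"
    #include "caffe/util/io.hpp"

    // Hypothetical helper: class_counts[i] = number of pixels with label i.
    void WriteInfogainMatrix(const std::vector<int>& class_counts,
                             const std::string& path) {
      const int dim = static_cast<int>(class_counts.size());
      const int min_count =
          *std::min_element(class_counts.begin(), class_counts.end());
      caffe::BlobProto blob;
      blob.set_num(1);
      blob.set_channels(1);
      blob.set_height(dim);
      blob.set_width(dim);
      for (int i = 0; i < dim; ++i) {
        for (int j = 0; j < dim; ++j) {
          // Diagonal: min_count / |i|, so rare classes get larger weights;
          // all off-diagonal entries are 0.
          blob.add_data(i == j
              ? static_cast<float>(min_count) / class_counts[i]
              : 0.f);
        }
      }
      caffe::WriteProtoToBinaryFile(blob, path);
    }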

  3. Yes, you will need to rebuild Caffe.

    Update: as Shai pointed out, the infogain pull request for this has already been approved this week, so the current version of Caffe supports pixel-wise infogain loss.
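    For completeness, here is a hedged sketch of how the layer could then be wired into a net prototxt. The blob names and the source path are placeholders, and the axis field assumes the pixel-wise version just mentioned:

    layer {
      name: "loss"
      type: "InfogainLoss"
      bottom: "prob"   # N x C x H x W class probabilities (placeholder name)
      bottom: "label"  # N x 1 x H x W integer labels (placeholder name)
      top: "loss"
      infogain_loss_param {
        source: "/path/to/H.binaryproto"  # placeholder path to the H matrix
        axis: 1  # the class axis, assuming the pixel-wise version
      }
    }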