
What is the equivalent of the Keras NonNeg weight constraint?


Keras has an option to constrain the weights of a learned model to be non-negative:

tf.keras.constraints.NonNeg()

But I couldn't find an equivalent in PyTorch. Does anyone know how I can force my linear model's weights to all be non-negative?

I tried asking this on other forums, but the answers were not helpful.

Let's say I have a very simple linear model as shown below, how should I change it?

class Classifier(nn.Module):

    def __init__(self, input, n_classes):
        super(Classifier, self).__init__()

        self.classify = nn.Linear(input, n_classes)

    def forward(self, h):
        final = self.classify(h)
        return final

I want to do exactly what NonNeg() does, but in PyTorch; I don't want to change what it is doing.

This is the implementation of NonNeg in Keras:

class NonNeg(Constraint):
    """Constrains the weights to be non-negative.
    """

    def __call__(self, w):
        w *= K.cast(K.greater_equal(w, 0.), K.floatx())
        return w

Solution

  • The suggested answer is wrong. You cannot simply use torch.abs here, because the absolute value is not a monotonic mapping: a negative and a positive input of the same magnitude produce the same output. The correct way to approach this problem is as follows:

    import torch
    import torch.nn as nn

    class PosLinear(nn.Module):
        def __init__(self, in_dim, out_dim):
            super(PosLinear, self).__init__()
            # The raw parameter is unconstrained; the effective weight is exp(self.weight).
            self.weight = nn.Parameter(torch.randn((in_dim, out_dim)))
            self.bias = nn.Parameter(torch.zeros((out_dim,)))

        def forward(self, x):
            # exp(.) maps every real-valued entry to a strictly positive coefficient.
            return torch.matmul(x, torch.exp(self.weight)) + self.bias

    The idea is to use a monotonic mapping (here torch.exp) between the unconstrained parameter self.weight and the positive coefficients actually used by the linear layer.
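
    For example, assuming the imports and the PosLinear class above are in scope, you could drop it into the Classifier from the question as a sketch (input_dim is used here instead of input to avoid shadowing the Python built-in, and the concrete sizes are only illustrative):

    class Classifier(nn.Module):
        def __init__(self, input_dim, n_classes):
            super(Classifier, self).__init__()
            # Drop-in replacement for nn.Linear whose effective weights are always positive.
            self.classify = PosLinear(input_dim, n_classes)

        def forward(self, h):
            return self.classify(h)

    # Quick sanity check: the effective coefficients exp(weight) are strictly positive.
    model = Classifier(input_dim=4, n_classes=3)
    out = model(torch.randn(2, 4))  # shape: (2, 3)
    assert (torch.exp(model.classify.weight) > 0).all()

    Note that, unlike Keras's NonNeg (which zeroes out negative weights after each update), this parametrization keeps the effective weights strictly positive throughout training.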