Tags: python, neural-network, jupyter-notebook, pytorch, recurrent-neural-network

PyTorch nll_loss returning a constant loss during training loop


I have a binary image classification problem in which I want to classify whether an image is of an ant or a bee. I scraped the images and did all the cleaning, reshaping, and conversion to grayscale; the images are 200x200, single-channel grayscale. I wanted to solve this problem with a feed-forward NN first, before jumping to conv nets.
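
Roughly, the per-image preprocessing boils down to something like the sketch below (simplified and illustrative only; it assumes PIL and NumPy and leaves out the scraping and file handling):

import numpy as np
import torch
from PIL import Image

def load_image(path):
    # Assumed preprocessing sketch, not the exact pipeline:
    img = Image.open(path).convert("L")   # convert to single-channel grayscale
    img = img.resize((200, 200))          # reshape to 200x200
    return torch.from_numpy(np.asarray(img, dtype=np.float32))  # tensor of shape (200, 200)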

My problem is that I get a constant loss during the training loop. I am using the Adam optimizer, F.log_softmax on the last layer of the network, and the nll_loss function. My code so far looks as follows:

Feed-forward network:

import torch
import torch.nn as nn
import torch.nn.functional as F

in_features = 200 * 200  # number of pixels in a flattened 200x200 grayscale image

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(in_features, 64)
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 32)
        self.fc4 = nn.Linear(32, 2)
        
    def forward(self, X):
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = F.relu(self.fc3(X))
        X = F.log_softmax(self.fc4(X), dim=1)
        return X
    
net = Net()

Training loop.

optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
EPOCHS = 10
BATCH_SIZE = 5
for epoch in range(EPOCHS):
    print(f'Epochs: {epoch+1}/{EPOCHS}')
    for i in range(0, len(y_train), BATCH_SIZE):
        X_batch = X_train[i: i+BATCH_SIZE].view(-1,200 * 200)
        y_batch = y_train[i: i+BATCH_SIZE].type(torch.LongTensor)
        
        net.zero_grad()  # equivalently: optimizer.zero_grad()
        
        outputs = net(X_batch)
        loss = F.nll_loss(outputs, y_batch)
        loss.backward()
        optimizer.step()
    print("Loss", loss)

I suspect the problem may be with my batching or the loss function. I would appreciate any help. Note: the images are grayscale, of shape (200, 200).
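
For reference, a quick sanity check of the batch shapes and the very first loss value looks like this (a minimal sketch; it assumes X_train and y_train are already tensors as above and that the two classes are roughly balanced, in which case the initial loss should sit near ln(2) ≈ 0.693):

# Sanity-check sketch: shapes of one batch and the loss of the untrained net.
X_batch = X_train[:5].view(-1, 200 * 200)      # expected shape: (5, 40000)
y_batch = y_train[:5].type(torch.LongTensor)   # expected shape: (5,)
print(X_batch.shape, y_batch.shape)

with torch.no_grad():
    outputs = net(X_batch)                     # expected shape: (5, 2), log-probabilities
    print(F.nll_loss(outputs, y_batch))        # roughly 0.69 for an untrained 2-class net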


Solution

  • I was waiting for answers but didn't even get a comment, so I figured out the solution myself; maybe it can help someone in the future.

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(200 * 200, 64)  # in_features = 200 * 200, the flattened 200x200 grayscale image
            self.fc2 = nn.Linear(64, 64)
            self.fc3 = nn.Linear(64, 32)
            self.fc4 = nn.Linear(32, 2)
            
        def forward(self, X):
            X = F.relu(self.fc1(X))
            X = F.relu(self.fc2(X))
            X = F.relu(self.fc3(X))
            X = self.fc4(X)  # I removed the activation function here; CrossEntropyLoss expects raw logits
            return X
        
    net = Net()
    
    # I changed the loss function to CrossEntropyLoss() since I didn't apply an activation on the last layer
    
    loss_function = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
    
    EPOCHS = 10
    BATCH_SIZE = 5
    for epoch in range(EPOCHS):
        print(f'Epochs: {epoch+1}/{EPOCHS}')
        for i in range(0, len(y_train), BATCH_SIZE):
            X_batch = X_train[i: i+BATCH_SIZE].view(-1, 200 * 200)
            y_batch = y_train[i: i+BATCH_SIZE].type(torch.LongTensor)
            
            net.zero_grad()  # equivalently: optimizer.zero_grad()
            
            outputs = net(X_batch)
            loss = loss_function(outputs, y_batch)
            loss.backward()
            optimizer.step()
        print("Loss", loss)