Here's a simple LSTM model in Keras:
input = Input(shape=(max_len,))
model = Embedding(input_dim=input_dim, output_dim=embed_dim, input_length=max_len)(input)
model = Dropout(0.1)(model)
model = Bidirectional(LSTM(units=blstm_dim, return_sequences=True, recurrent_dropout=0.1))(model)
out = Dense(label_dim, activation="softmax")(model)
Here's my attempt to translate it to a PyTorch model:
class RNN(nn.Module):
    def __init__(self, input_dim, embed_dim, blstm_dim, label_dim):
        super(RNN, self).__init__()
        self.blstm_dim = blstm_dim
        self.embed = nn.Embedding(input_dim, embed_dim)
        self.blstm = nn.LSTM(embed_dim, blstm_dim, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * blstm_dim, label_dim)

    def forward(self, x):
        # 2 = num_layers * num_directions (one layer, bidirectional)
        h0 = torch.zeros(2, x.size(0), self.blstm_dim).to(device)
        c0 = torch.zeros(2, x.size(0), self.blstm_dim).to(device)
        x = self.embed(x)
        x = F.dropout(x, p=0.1, training=self.training)
        x, _ = self.blstm(x, (h0, c0))
        x = self.fc(x)
        return F.softmax(x, dim=1)
        # return x
Now running the Keras model gives the following:
Epoch 5/5
38846/38846 [==============================] - 87s 2ms/step - loss: 0.0374 - acc: 0.9889 - val_loss: 0.0473 - val_acc: 0.9859
But running the PyTorch model gives this:
Train Epoch: 10/10 [6400/34532 (19%)] Loss: 2.788933
Train Epoch: 10/10 [12800/34532 (37%)] Loss: 2.788880
Train Epoch: 10/10 [19200/34532 (56%)] Loss: 2.785547
Train Epoch: 10/10 [25600/34532 (74%)] Loss: 2.796180
Train Epoch: 10/10 [32000/34532 (93%)] Loss: 2.790446
Validation: Average loss: 0.0437, Accuracy: 308281/431600 (71%)
I've made sure the loss & optimiser are the same (cross entropy & RMSprop). Interestingly, if I remove the softmax from the PyTorch model (i.e. use the commented-out return in the code), I get what seems to be right:
Train Epoch: 10/10 [32000/34532 (93%)] Loss: 0.022118
Validation: Average loss: 0.0009, Accuracy: 424974/431600 (98%)
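For reference, the relevant part of my training setup looks roughly like this (a simplified sketch; train_loader and the dimension variables are placeholders):

import torch.nn as nn
import torch.optim as optim

model = RNN(input_dim, embed_dim, blstm_dim, label_dim).to(device)
criterion = nn.CrossEntropyLoss()              # cross entropy, as in the Keras model
optimizer = optim.RMSprop(model.parameters())  # same optimiser as in Keras

for xb, yb in train_loader:                    # xb: (batch, max_len), yb: (batch, max_len)
    optimizer.zero_grad()
    out = model(xb.to(device))                 # (batch, max_len, label_dim)
    loss = criterion(out.view(-1, label_dim), yb.to(device).view(-1))
    loss.backward()
    optimizer.step()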
So here are my questions:
1) Are the two models I've printed above equivalent (let's ignore the recurrent_dropout since I haven't figured out how to do that in PyTorch)?
2) What am I doing wrong with the softmax output layer in PyTorch?
Thanks a lot!
Besides the recurrent dropout I can see no difference, so they should be completely equivalent in terms of structure.
One note: you don't have to initialize the states yourself if you're not reusing them between calls. You can just forward through the LSTM with x, _ = self.blstm(x) and it will initialize the hidden and cell states with zeros automatically.
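For illustration, here is a minimal sketch of the forward pass written that way (reusing the attributes from your class):

def forward(self, x):
    x = self.embed(x)
    x = F.dropout(x, p=0.1, training=self.training)
    # No (h0, c0) passed: nn.LSTM initializes both to zeros on every call.
    x, _ = self.blstm(x)
    return self.fc(x)  # raw logits; see the note on the loss below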
PyTorch's torch.nn.CrossEntropyLoss already includes the softmax. From the documentation: "This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class." So it is effectively cross entropy with logits, which I guess also makes it more efficient. You can simply leave out the softmax activation at the end and return the raw output of the linear layer.
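A small sketch of what that means in practice (shapes are hypothetical placeholders, not your actual dimensions):

import torch
import torch.nn as nn

logits = torch.randn(4, 10, 8)           # e.g. (batch, max_len, label_dim) straight from self.fc
targets = torch.randint(0, 8, (4, 10))   # integer class labels per time step

criterion = nn.CrossEntropyLoss()
# CrossEntropyLoss applies log-softmax internally, so it expects raw logits.
# Flatten batch and time so every time step counts as one sample.
loss = criterion(logits.view(-1, 8), targets.view(-1))

# Equivalent decomposition: LogSoftmax over the class dimension followed by NLLLoss.
log_probs = nn.LogSoftmax(dim=-1)(logits)
same_loss = nn.NLLLoss()(log_probs.view(-1, 8), targets.view(-1))
assert torch.allclose(loss, same_loss)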