neural-network · torch · recurrent-neural-network

In Torch, how can I keep a pre-trained embedding fixed during training?


I am trying to train an RNN on pre-trained word embeddings. Suppose these pre-trained embeddings are kept in a matrix E, which I can use to initialize a LookupTable:

lookupTable = nn.LookupTable(n_words, d)  -- n_words x d embedding layer
lookupTable.weight = E                    -- initialize with the pre-trained matrix

How can I force the model to keep these embeddings fixed during training?


Solution

  • There are at least two possibilities:

    1. Force the weights of this layer to stay at their pre-trained values E at every mini-batch iteration, e.g. by copying E back into the layer after each parameter update (see the first sketch after this list).

    2. Implement your own lookup table by extending nn.LookupTable and overriding accGradParameters (the method that accumulates the weight gradients) so that the weights are never updated (see the second sketch after this list).
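
A minimal sketch of the first approach, assuming a plain nn-style training loop; next_batch, model, criterion, n_iterations and learning_rate are placeholders, not names from the question:

for i = 1, n_iterations do
   local input, target = next_batch()        -- hypothetical batch loader
   local output = model:forward(input)
   local loss = criterion:forward(output, target)
   model:zeroGradParameters()
   model:backward(input, criterion:backward(output, target))
   model:updateParameters(learning_rate)
   lookupTable.weight:copy(E)                -- undo whatever update touched the embeddings
end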
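
A sketch of the second approach, assuming you subclass nn.LookupTable with torch.class; the name FixedLookupTable is made up for illustration:

require 'nn'

local FixedLookupTable, parent = torch.class('nn.FixedLookupTable', 'nn.LookupTable')

-- No-op gradient accumulator: gradWeight is never filled in,
-- so updateParameters leaves the embeddings untouched.
function FixedLookupTable:accGradParameters(input, gradOutput, scale)
end

lookupTable = nn.FixedLookupTable(n_words, d)
lookupTable.weight:copy(E)

A lighter variant of the same idea is to patch the instance directly: lookupTable.accGradParameters = function() end.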