
Finetuning embeddings with torchtext - nn.Embedding vs. nn.Embedding.from_pretrained


I have been working with pretrained GloVe embeddings and would like to allow these to be fine-tuned. I currently use the embeddings like this:

word_embeddingsA = nn.Embedding(vocab_size, embedding_length)
word_embeddingsA.weight = nn.Parameter(TEXT.vocab.vectors, requires_grad=False)

Should I simply set requires_grad=True to allow the embeddings to be trained? Or should I do something like this instead:

word_embeddingsA = nn.Embedding.from_pretrained(TEXT.vocab.vectors, freeze=False)

Are these equivalent, and is there a way to check that the embeddings are actually being trained?


Solution

  • Yes, they are equivalent, as stated in the nn.Embedding documentation:

    freeze (boolean, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embedding.weight.requires_grad = False. Default: True

    If word_embeddingsA.weight.requires_grad is True, the embedding is being trained; otherwise it is not. A minimal check is sketched below.
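
To verify this yourself, you can set up both approaches side by side and take one optimizer step. The following is a minimal sketch that uses a random matrix in place of TEXT.vocab.vectors; the sizes, variable names, and learning rate are placeholders, not part of the original question:

import torch
import torch.nn as nn

# stand-in for TEXT.vocab.vectors: a (vocab_size, embedding_length) float tensor
vocab_size, embedding_length = 100, 50
pretrained = torch.randn(vocab_size, embedding_length)

# approach 1: assign the pretrained weights manually and leave them trainable
emb_a = nn.Embedding(vocab_size, embedding_length)
emb_a.weight = nn.Parameter(pretrained.clone(), requires_grad=True)

# approach 2: from_pretrained with freeze=False
emb_b = nn.Embedding.from_pretrained(pretrained.clone(), freeze=False)

print(torch.equal(emb_a.weight, emb_b.weight))                 # True  -> same initial weights
print(emb_a.weight.requires_grad, emb_b.weight.requires_grad)  # True True -> both trainable

# sanity check that the embedding actually updates after one optimizer step
opt = torch.optim.SGD(emb_b.parameters(), lr=0.1)
before = emb_b.weight.clone()
emb_b(torch.tensor([1, 2, 3])).sum().backward()
opt.step()
print(torch.equal(before, emb_b.weight))                       # False -> weights changed

If the weights are unchanged after the step (or weight.requires_grad is False), the embeddings are frozen rather than being fine-tuned.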