Tags: nlp, embedding

Fine-tune text embeddings using BERT?


Are the text embeddings also fine-tuned when fine-tuning BERT for a classification task? Or up to which layer are the encodings fine-tuned (e.g., only down to the second-to-last layer)?


Solution

  • If you are using the original BERT repository published by Google, all layers are trainable, meaning nothing is frozen: the embedding layer is updated along with every encoder layer during fine-tuning. You can verify this by printing `tf.trainable_variables()` and checking that the embedding variables appear in the list.