I have taken a pretrained MoViNet model and changed the last layer.
These are the last parameters of the pretrained model:
classifier.0.conv_1.conv2d.weight : torch.Size([2048, 640, 1, 1])
classifier.0.conv_1.conv2d.bias : torch.Size([2048])
classifier.3.conv_1.conv2d.weight : torch.Size([600, 2048, 1, 1])
classifier.3.conv_1.conv2d.bias : torch.Size([600])
These are the parameters I changed in the last layer:
clfr.0.multi_head.0.head2.0.conv_1.conv2d.weight : torch.Size([2048, 640, 1, 1])
clfr.0.multi_head.0.head2.0.conv_1.conv2d.bias : torch.Size([2048])
clfr.0.multi_head.0.head1.weight : torch.Size([600, 2048, 1, 1])
clfr.0.multi_head.0.head1.bias : torch.Size([600])
I want to train only the classifier (clfr) on top of the pretrained weights and freeze all previous layers in PyTorch. Can anyone tell me how I can do this?
When creating your optimizer, pass only the parameters that you want to update during training. In your example, it could look something like:
optimizer = torch.optim.Adam(clfr.parameters())
Note that this alone only keeps the earlier layers from being updated; gradients are still computed for them during the backward pass. To skip that work and save memory, also set requires_grad = False on the frozen parameters.
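Here is a minimal sketch of the whole pattern. The `Model` class below is a hypothetical stand-in for MoViNet (a small backbone plus a classifier head named `clfr`); the attribute names and layer shapes are assumptions for illustration, not the actual MoViNet module names:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model: a backbone to freeze and a
# classifier head (`clfr`) to train. Shapes are illustrative only.
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 640, 3), nn.ReLU())
        self.clfr = nn.Sequential(
            nn.Conv2d(640, 2048, 1),   # analogous to the head2 1x1 conv
            nn.ReLU(),
            nn.Conv2d(2048, 600, 1),   # analogous to the head1 1x1 conv
        )

    def forward(self, x):
        return self.clfr(self.backbone(x))

model = Model()

# 1) Freeze everything: no gradients are computed for these parameters.
for p in model.parameters():
    p.requires_grad = False

# 2) Unfreeze only the classifier head.
for p in model.clfr.parameters():
    p.requires_grad = True

# 3) Give the optimizer only the trainable parameters.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

frozen = sum(1 for p in model.parameters() if not p.requires_grad)
trainable = sum(1 for p in model.parameters() if p.requires_grad)
```

With this setup the backward pass stops at the classifier, so the backbone costs no gradient memory, and the optimizer cannot touch the frozen weights even by accident.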