With the release of the 1.5 stable version of the C++ API for PyTorch, some of the object interfaces have changed. For instance,
optimizer.options.learning_rate();
no longer works (the optimiser used here is Adam), since learning_rate has been renamed to lr (see https://github.com/pytorch/pytorch/releases). Moreover, the optimiser no longer has an options member at all (no member named 'options' in 'torch::optim::Adam'). So my question is: how would one now read the learning rate, i.e. the equivalent of
optimizer.options.learning_rate();
or update the learning rate
optimizer.options.learning_rate(updatedlearningrate);
with the new release? Any help will be appreciated! Thank you
The optimisers now behave like their Python counterparts, and the learning rate needs to be set per parameter group.
for (auto &param_group : optimizer.param_groups()) {
    // Static cast needed because options() returns OptimizerOptions (the base class).
    // Iterate by reference so the change applies to the optimizer's own groups.
    static_cast<torch::optim::AdamOptions &>(param_group.options()).lr(new_lr);
}
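For context, here is a minimal self-contained sketch of the above, assuming libtorch 1.5 or later; the torch::nn::Linear model, the initial learning rate of 1e-3 and the new_lr value are just placeholders:
#include <torch/torch.h>
#include <iostream>

int main() {
    // Placeholder model; any module's parameters() works the same way.
    torch::nn::Linear model(10, 1);
    torch::optim::Adam optimizer(model->parameters(), torch::optim::AdamOptions(1e-3));

    double new_lr = 1e-4;  // placeholder value
    for (auto &param_group : optimizer.param_groups()) {
        // options() returns the base class OptimizerOptions, hence the cast.
        static_cast<torch::optim::AdamOptions &>(param_group.options()).lr(new_lr);
    }

    // Confirm the update by reading the learning rate back from the first group.
    std::cout << static_cast<torch::optim::AdamOptions &>(
                     optimizer.param_groups()[0].options()).lr()
              << std::endl;
}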
If you didn't specify separate parameter groups, there will be only a single group, and you can set its learning rate directly, as suggested in Issue #35640 - How do you change Adam learning rate since the latest commits?:
static_cast<torch::optim::AdamOptions &>(optimizer.param_groups()[0].options()).lr(new_lr);
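To read the current learning rate back (the getter case in the question), the same cast can be used with the zero-argument lr() accessor, for example:
double current_lr = static_cast<torch::optim::AdamOptions &>(optimizer.param_groups()[0].options()).lr();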