Tags: keras, deep-learning, reinforcement-learning

Can I dynamically change the learning rate of a Neural Network in Keras?


I'm trying to implement a DQN agent, i.e. a Deep Reinforcement Learning solution.

I need to decrease the learning rate after some number of iterations, without changing the model weights or anything else. In RL problems, 'fit' is called after a certain number of new events have been collected, and each 'fit' runs for a single epoch, so the per-epoch decay schedules that Keras provides don't apply here.
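For context, a single training step in such a loop typically looks like the sketch below; the replay-buffer helper is a hypothetical placeholder, not code from my project:

# Each step fits on a freshly sampled batch for exactly one epoch,
# so Keras's per-epoch learning-rate schedules never take effect.
states, targets = sample_replay_buffer()  # hypothetical helper
mainQN.model.fit(states, targets, epochs=1, verbose=0)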

At the moment, the only solution I've found is the following:

if t % 1000 == 0:
    # Rebuild the whole network with a smaller learning rate,
    # then restore the weights from the latest checkpoint.
    learning_rate = learning_rate * 0.75
    mainQN_temp = QNetwork(hidden_size=hidden_size, learning_rate=learning_rate)
    mainQN_temp.model.load_weights("./save/dqn-angle3-" + str(t) + ".h5")
    mainQN = mainQN_temp



from keras.optimizers import Adam

class QNetwork:
    def __init__(self, learning_rate=0.01, state_size=4,
                 action_size=5, hidden_size=32):

        # some layers in here

        self.optimizer = Adam(lr=learning_rate)
        self.model.compile(loss='mse', optimizer=self.optimizer)

This is about the most inefficient approach possible. I have tried assigning to attributes such as mainQN.optimizer.lr directly, with no luck.


Solution

  • K.set_value(model.optimizer.lr, new_lr) will do (with K as in import keras.backend as K).

    If instead you'd like to reduce the lr after an arbitrary number of fitted batches (i.e. train iterations), you can define a custom callback:

    import keras
    import keras.backend as K

    class ReduceLR(keras.callbacks.Callback):
        def on_batch_end(self, batch, logs=None):
            # optimizer.iterations counts batches seen across every fit() call
            if K.eval(self.model.optimizer.iterations) >= 50:
                K.set_value(self.model.optimizer.lr, 1e-4)

    reduce_lr = ReduceLR()
    model.fit(x, y, callbacks=[reduce_lr])
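
    Applied to the DQN loop from the question, the direct K.set_value route replaces the rebuild-and-reload workaround entirely. A minimal sketch, reusing the mainQN, t, and learning_rate names from the question:

    import keras.backend as K

    if t % 1000 == 0:
        learning_rate = learning_rate * 0.75
        # Overwrite the optimizer's lr variable in place: the weights,
        # Adam's moment estimates, and the compiled model stay untouched.
        K.set_value(mainQN.model.optimizer.lr, learning_rate)

    Since only the backend variable holding the learning rate changes, the optimizer state survives, whereas rebuilding the network and reloading the weights resets Adam's accumulated moments to zero.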