Tags: tensorflow, machine-learning, keras, tensorflow2.0, learning-rate

Argument must be a string or a number, not 'ExponentialDecay'


I am on TensorFlow 2.4.0 and tried to apply exponential decay to the learning rate as follows:

learning_rate_scheduler = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.97, staircase=False)

and passed this schedule as the learning rate of my optimizer:

optimizer_to_use = Adam(learning_rate=learning_rate_scheduler)
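(For reference, ExponentialDecay is documented to compute initial_learning_rate * decay_rate ** (step / decay_steps); a minimal pure-Python sketch of that formula, no TensorFlow needed:)

```python
# Pure-Python sketch of the documented ExponentialDecay formula:
# lr(step) = initial_learning_rate * decay_rate ** (step / decay_steps)
def exponential_decay(step, initial_learning_rate=0.1,
                      decay_steps=1000, decay_rate=0.97, staircase=False):
    # staircase=True uses integer division, giving a piecewise-constant decay
    exponent = step // decay_steps if staircase else step / decay_steps
    return initial_learning_rate * decay_rate ** exponent

print(exponential_decay(0))     # initial rate at step 0
print(exponential_decay(1000))  # one full decay period: 0.1 * 0.97
```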

The model is compiled as follows:

model.compile(loss=metrics.contrastive_loss, optimizer=optimizer_to_use, metrics=[accuracy])

Training goes well until the third epoch, where the following error is shown:

File "train_contrastive_siamese_network_inception.py", line 163, in run_experiment
    history = model.fit([pairTrain[:, 0], pairTrain[:, 1]], labelTrain[:], validation_data=([pairTest[:, 0], pairTest[:, 1]], labelTest[:]), batch_size=config.BATCH_SIZE, epochs=config.EPOCHS, callbacks=callbacks)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1145, in fit
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 432, in on_epoch_end
    callback.on_epoch_end(epoch, numpy_logs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 2542, in on_epoch_end
    old_lr = float(K.get_value(self.model.optimizer.lr))
TypeError: float() argument must be a string or a number, not 'ExponentialDecay'
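The failing call can be reproduced in isolation; here is a minimal sketch using a stand-in class (not TensorFlow itself) to show why float() rejects a schedule object:

```python
# Stand-in for tf.keras.optimizers.schedules.ExponentialDecay: a callable
# schedule object that defines no __float__, so float() cannot convert it.
class ExponentialDecay:
    def __call__(self, step):
        return 0.1 * 0.97 ** (step / 1000)

try:
    # mirrors the callback's float(K.get_value(self.model.optimizer.lr))
    float(ExponentialDecay())
except TypeError as exc:
    print(exc)  # the TypeError from the traceback above
```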
 

I found this issue raised on the official Keras forum as well, but with no resolution there either. Moreover, the documentation clearly states that:

A LearningRateSchedule instance can be passed in as the learning_rate argument of any optimizer.

What could be the issue?


Solution

  • The arguments passed to model.compile() are not quite right: the loss parameter is set to a metrics function, loss=metrics.contrastive_loss, when it should be tfa.losses.ContrastiveLoss().

    If you are using TensorFlow 2.4, you need to install a compatible version of tensorflow_addons (between 0.10 and 0.14) to access the TensorFlow Addons APIs, including ContrastiveLoss.

    The fixed code is:

    model.compile(loss = tfa.losses.ContrastiveLoss(), 
                  optimizer = optimizer_to_use, 
                  metrics = ['accuracy'])
    

    (Attaching the replicated code gist here for your reference.)
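For completeness, a pure-Python sketch of the per-sample formula that ContrastiveLoss is documented to implement (y_pred here is the predicted distance between the pair, and margin defaults to 1.0):

```python
# Per-sample contrastive loss, as documented for tfa.losses.ContrastiveLoss:
# similar pairs (y_true = 1) are penalized by their squared distance,
# dissimilar pairs (y_true = 0) only when closer than `margin`.
def contrastive_loss(y_true, distance, margin=1.0):
    return (y_true * distance ** 2
            + (1 - y_true) * max(margin - distance, 0.0) ** 2)

print(contrastive_loss(1, 0.0))  # similar pair at zero distance -> no loss
print(contrastive_loss(0, 0.5))  # dissimilar pair inside the margin -> penalized
```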