python tensorflow machine-learning jupyter-notebook multi-gpu

Tensorflow resume training with MirroredStrategy()


I trained my model on Linux so that I could use MirroredStrategy() and train on 2 GPUs. Training stopped at epoch 610. I want to resume training, but when I load the model and evaluate it, the kernel dies. I am using Jupyter Notebook. If I shrink my training data set, the code runs, but only on 1 GPU. Is my distribution strategy saved in the model I am loading, or do I have to include it again?

UPDATE

I have tried to include MirroredStrategy():

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():

    new_model = load_model('\\models\\model_0610.h5',
                           custom_objects={'dice_coef_loss': dice_coef_loss,
                                           'dice_coef': dice_coef},
                           compile=True)
    new_model.evaluate(train_x, train_y, batch_size=2, verbose=1)

NEW ERROR

Error when I include MirroredStrategy():

ValueError: 'handle' is not available outside the replica context or a 'tf.distribute.Stragety.update()' call.

Source code:

smooth = 1  # smoothing term: avoids division by zero and stabilises the gradient

def dice_coef(y_true, y_pred):
    # Dice coefficient computed on the flattened masks
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1. - dice_coef(y_true, y_pred)
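As a quick sanity check, the same Dice formula can be reproduced with plain NumPy, independent of the Keras backend (a minimal sketch mirroring dice_coef; dice_coef_np is a hypothetical name for illustration):

```python
import numpy as np

smooth = 1  # same smoothing term as in the Keras version

def dice_coef_np(y_true, y_pred):
    # Flatten both masks and compute the smoothed Dice overlap
    y_true_f = np.ravel(y_true).astype(float)
    y_pred_f = np.ravel(y_pred).astype(float)
    intersection = np.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

mask = np.array([[1, 0], [0, 1]])
print(dice_coef_np(mask, mask))  # a perfect prediction gives 1.0
```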

new_model = load_model('\\models\\model_0610.h5',
                       custom_objects={'dice_coef_loss': dice_coef_loss,
                                       'dice_coef': dice_coef},
                       compile=True)
new_model.evaluate(train_x, train_y, batch_size=2, verbose=1)

observe_var = 'dice_coef'
strategy = 'max'  # greater dice_coef is better
model_resume_dir = '//models_resume//'

# mode=strategy only matters when save_best_only=True, but it keeps the intent
# explicit ('auto' would infer 'min' for a metric named 'dice_coef')
model_checkpoint = ModelCheckpoint(model_resume_dir + 'resume_{epoch:04}.h5',
                                   monitor=observe_var, mode=strategy,
                                   save_weights_only=False,
                                   save_best_only=False, period=2)
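The resume checkpoints are named through Python's str.format: the {epoch:04} field zero-pads the epoch to four digits, matching the model_0610.h5 naming of the original run. The pattern can be checked on its own:

```python
# {epoch:04} zero-pads the epoch number to four digits
pattern = 'resume_{epoch:04}.h5'
print(pattern.format(epoch=610))  # resume_0610.h5
print(pattern.format(epoch=2))    # resume_0002.h5
```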

new_model.fit(train_x, train_y, batch_size=2, epochs=5000, verbose=1, shuffle=True,
              validation_split=0.15, callbacks=[model_checkpoint])

new_model.save(model_resume_dir + 'final_resume.h5')

Solution

  • new_model.evaluate() combined with compile = True when loading the model was causing the problem. I set compile = False and re-compiled the model with the compile call from my original training script.

    mirrored_strategy = tf.distribute.MirroredStrategy()
    with mirrored_strategy.scope():

        new_model = load_model('\\models\\model_0610.h5',
                               custom_objects={'dice_coef_loss': dice_coef_loss,
                                               'dice_coef': dice_coef},
                               compile=False)
        # compile inside the strategy scope with the same settings
        # as the original training script
        new_model.compile(optimizer=Adam(learning_rate=1e-4),
                          loss=dice_coef_loss,
                          metrics=[dice_coef])