
Hyperas loss function for regression problem


I've built a model with Keras to solve a regression problem, and I want to perform hyperparameter optimization on it. As the metric I use val_mean_absolute_error. The example available at https://github.com/maxpumperla/hyperas covers only a classification problem:

validation_acc = np.amax(result.history['val_acc']) 
print('Best validation acc of epoch:', validation_acc)
return {'loss': -validation_acc, 'status': STATUS_OK, 'model': model}

How can I adapt this code for a regression problem (i.e., to use val_mean_absolute_error as the metric)?


Solution

  • For regression problems we usually do not define a separate metric; the loss itself is used to assess model performance (the lower the better). So, assuming you are using mae as your loss and you have compiled your model as

    model.compile(loss='mae', optimizer={{choice(['rmsprop', 'adam', 'sgd'])}})
    

    this is how you should modify the code from the linked example:

    # get the lowest validation loss across the training epochs
    validation_loss = np.amin(result.history['val_loss'])
    print('Best validation loss of epoch:', validation_loss)
    return {'loss': validation_loss, 'status': STATUS_OK, 'model': model}
    

    It's true that in similar cases some people also add a metrics=['mae'] argument when compiling, but since the loss here is already the MAE, this is redundant. A fuller sketch of how these pieces fit together in a Hyperas script is shown below.
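
    For completeness, here is a minimal end-to-end sketch (not taken from the linked example) of a Hyperas run for regression. The data() function, layer sizes, epochs, and batch sizes are hypothetical placeholders you would replace with your own:

    import numpy as np
    from hyperopt import STATUS_OK, Trials, tpe
    from hyperas import optim
    from hyperas.distributions import choice
    from keras.models import Sequential
    from keras.layers import Dense

    def data():
        # hypothetical synthetic regression data; replace with your own loading code
        x = np.random.rand(1000, 10)
        y = x.sum(axis=1)
        return x[:800], y[:800], x[800:], y[800:]

    def create_model(x_train, y_train, x_test, y_test):
        model = Sequential()
        model.add(Dense({{choice([32, 64, 128])}}, activation='relu', input_shape=(10,)))
        model.add(Dense(1))
        model.compile(loss='mae', optimizer={{choice(['rmsprop', 'adam', 'sgd'])}})

        result = model.fit(x_train, y_train,
                           batch_size={{choice([16, 32, 64])}},
                           epochs=20,
                           verbose=0,
                           validation_data=(x_test, y_test))

        # lowest validation MAE across epochs; hyperopt minimizes 'loss', so no sign flip is needed
        validation_loss = np.amin(result.history['val_loss'])
        print('Best validation loss of epoch:', validation_loss)
        return {'loss': validation_loss, 'status': STATUS_OK, 'model': model}

    if __name__ == '__main__':
        best_run, best_model = optim.minimize(model=create_model,
                                              data=data,
                                              algo=tpe.suggest,
                                              max_evals=10,
                                              trials=Trials())
        print('Best hyperparameters found:', best_run)

    The returned dictionary's 'loss' entry is what hyperopt minimizes, which is why the best (lowest) val_loss is returned as-is here, whereas the classification example negates val_acc.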