
How to use a custom loss function in GPflow 2?


I am new to GPflow and I am trying to figure out how to write a custom loss function for optimizing the model. For my purpose, I need to pass the predicted output of the GP through several data-treatment steps, and it is the output I get after these treatments that I want to optimize the GP model against. For that, I would like to use the root mean square error (RMSE) as the loss function.

Workflow: Input -> GP model -> GP_output -> Data treatment -> Predicted_output -> RMSE(Predicted_output, Observations)

I hope this makes sense.
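
For concreteness in the code below, data_treatment_func stands in for my actual post-processing; a trivial, purely illustrative placeholder would be:

def data_treatment_func(GP_output):
    # Purely illustrative stand-in for the real data treatment;
    # it just rescales the GP output so the snippets below run end to end.
    return 2.0 * GP_output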

Normally, models are optimised by doing something like this:

import gpflow as gf
import numpy as np

# GPflow expects (N, D) column vectors for both inputs and outputs
X = np.linspace(0, 100, num=100).reshape(-1, 1)
n = np.random.normal(scale=8, size=X.shape)
y_obs = 10 * np.sin(X) + n

model = gf.models.GPR(
    data=(X, y_obs),
    kernel=gf.kernels.SquaredExponential(),
)

optimizer_config = dict(maxiter=100)  # keyword arguments passed through to scipy.optimize.minimize
gf.optimizers.Scipy().minimize(
    model.training_loss, model.trainable_variables, options=optimizer_config
)
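
(For a GPR model, model.training_loss is essentially the negative log marginal likelihood, so this optimises the hyperparameters against the marginal likelihood rather than a loss of my choosing.)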

I have figured out a workaround using scipy's minimize function to optimise with RMSE, but I would like to stay within the GPflow framework, where I can simply pass model.trainable_variables as an argument and have a general function that also works when I have multiple input/output dimensions.

from scipy.optimize import minimize

def objective_func(params):
    # Write the candidate hyperparameters into the model
    model.kernel.lengthscales.assign(params[0])
    model.kernel.variance.assign(params[1])
    model.likelihood.variance.assign(params[2])

    GP_output = model.predict_y(X)[0]  # predictive mean
    GP_output = GP_output.numpy()

    Predicted_output = data_treatment_func(GP_output)

    # RMSE between the treated predictions and the observations
    return np.sqrt(np.square(Predicted_output - y_obs).mean())

res = minimize(objective_func, x0=(1.0, 1.0, 1.0))
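
One wrinkle with this workaround: the kernel and likelihood parameters are constrained to be positive, so an unconstrained optimiser can step outside the valid range. A minimal sketch (using the same objective_func as above) that keeps the proposals positive via bounds:

res = minimize(
    objective_func,
    x0=(1.0, 1.0, 1.0),
    bounds=[(1e-6, None)] * 3,  # keep lengthscale and variances strictly positive
    method="L-BFGS-B",          # a scipy method that supports box bounds
)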

Solution

  • I found the answer myself.

    If you write your objective_func using TensorFlow instead of NumPy (e.g. tf.sqrt, tf.reduce_mean), you can simply pass it to gf.optimizers.Scipy().minimize(...) in place of model.training_loss:

    import tensorflow as tf
    def objective_func():
        GP_output = model.predict_y(X)[0]
    
        Predicted_output = data_treatment_func(GP_output)
    
        return tf.sqrt(tf.reduce_mean(tf.square(Predicted_output - y_obs)))
    
    gf.optimizers.Scipy().minimize(
        objective_func, model.trainable_variables, options=optimizer_config
    )
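
    Note that data_treatment_func then also has to be written with TensorFlow operations, so that gradients can flow from the RMSE back to model.trainable_variables. By default gf.optimizers.Scipy().minimize compiles the closure with tf.function; if the treatment step cannot be traced into a graph, passing compile=False to minimize should (if I read the API correctly) fall back to eager execution.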