Tags: machine-learning, scikit-learn, regression, gridsearchcv, gaussian-process

Optimise a custom Gaussian process kernel in scikit-learn using GridSearchCV


I'm working with Gaussian processes, and when I use the scikit-learn GP modules I struggle to create and optimise custom kernels using GridSearchCV. The best way to describe this problem is with the classic Mauna Loa example, where the appropriate kernel is constructed from a combination of already defined kernels such as RBF and RationalQuadratic. In that example the parameters of the custom kernel are not optimised but treated as given. What if I wanted to run a more general case where I estimate those hyperparameters using cross-validation? How should I go about constructing the custom kernel and the corresponding param_grid object for the grid search?

In a very naive way I could construct a custom kernel using something like this:

from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, WhiteKernel

def custom_kernel(a, b, ls, l, alpha, nl):
    kernel = a * RBF(length_scale=ls) \
        + b * RationalQuadratic(length_scale=l, alpha=alpha) \
        + WhiteKernel(noise_level=nl)
    return kernel

However, this function can of course not be tuned by GridSearchCV using e.g. GaussianProcessRegressor(kernel=custom_kernel(a, b, ls, l, alpha, nl)), since GridSearchCV can only vary parameters that are exposed through the estimator's get_params/set_params interface, not the arguments of a helper function.
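
For reference, a composed kernel object does expose nested parameter names through get_params, so I suspect those are the names a grid search would have to target; a quick sketch of what I mean (the kernel here is just a toy example):

from sklearn.gaussian_process.kernels import RBF, WhiteKernel

k = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
# prints nested names such as 'k1__k1__constant_value',
# 'k1__k2__length_scale' and 'k2__noise_level'
print(sorted(k.get_params().keys()))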

One possible path forward is presented in this SO question, but I was wondering whether there's an easier way than coding the kernel from scratch (along with its hyperparameters), since I'm looking to work with a combination of standard kernels and may also want to mix and match them.


Solution

  • So this is how far I got. It answers the question, but it is really slow on the Mauna Loa example; then again, that's probably a difficult dataset to work with:

    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import GridSearchCV
    from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel, RationalQuadratic, ExpSineSquared
    import numpy as np
    from sklearn.datasets import fetch_openml
    
    # from https://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_co2.html
    def load_mauna_loa_atmospheric_co2():
        # as_frame=False keeps .data a NumPy array (newer sklearn returns a DataFrame by default)
        ml_data = fetch_openml(data_id=41187, as_frame=False)
        months = []
        ppmv_sums = []
        counts = []
    
        y = ml_data.data[:, 0]
        m = ml_data.data[:, 1]
        month_float = y + (m - 1) / 12
        ppmvs = ml_data.target
    
        for month, ppmv in zip(month_float, ppmvs):
            if not months or month != months[-1]:
                months.append(month)
                ppmv_sums.append(ppmv)
                counts.append(1)
            else:
                # aggregate monthly sum to produce average
                ppmv_sums[-1] += ppmv
                counts[-1] += 1
    
        months = np.asarray(months).reshape(-1, 1)
        avg_ppmvs = np.asarray(ppmv_sums) / counts
        return months, avg_ppmvs
    
    X, y = load_mauna_loa_atmospheric_co2()
    
    # Kernel with parameters given in GPML book
    k1 = ConstantKernel(constant_value=66.0**2) * RBF(length_scale=67.0)  # long term smooth rising trend
    k2 = ConstantKernel(constant_value=2.4**2) * RBF(length_scale=90.0) \
        * ExpSineSquared(length_scale=1.3, periodicity=1.0)  # seasonal component
    # medium term irregularity
    k3 = ConstantKernel(constant_value=0.66**2) \
        * RationalQuadratic(length_scale=1.2, alpha=0.78)
    k4 = ConstantKernel(constant_value=0.18**2) * RBF(length_scale=0.134) \
        + WhiteKernel(noise_level=0.19**2)  # noise terms
    kernel_gpml = k1 + k2 + k3 + k4
    gp = GaussianProcessRegressor(kernel=kernel_gpml)
    
    # list all tunable parameters, including the nested kernel hyperparameter names
    print(gp.get_params())
    
    param_grid = {'alpha': np.logspace(-2, 4, 5),
                  'kernel__k1__k1__k1__k1__constant_value': np.logspace(-2, 4, 5),
                  'kernel__k1__k1__k1__k2__length_scale': np.logspace(-2, 2, 5),
                  'kernel__k2__k2__noise_level': np.logspace(-2, 1, 5)
                  }
    grid_gp = GridSearchCV(gp, cv=5, param_grid=param_grid, n_jobs=4)
    grid_gp.fit(X, y)
    

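    Once the search has finished, the tuned values can be read back in the usual scikit-learn way; a minimal sketch:

    print(grid_gp.best_params_)        # best hyperparameter combination found
    print(grid_gp.best_score_)         # its mean cross-validated score
    gp_best = grid_gp.best_estimator_  # GP refitted on all data with those values
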
    What helped me was to initialise the model first as gp = GaussianProcessRegressor(kernel=kernel_gpml) and then use the get_params method to get a list of the model's hyperparameters.
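
    For instance, the kernel hyperparameter names used in param_grid above can be filtered out of that list; a small sketch (the bounds entries are skipped since they are not meant to be grid-searched):

    for name in gp.get_params():
        if name.startswith('kernel__') and not name.endswith('_bounds'):
            print(name)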

    Finally, I note that in their book Rasmussen and Williams appear to have used leave-one-out cross-validation to tune the hyperparameters.
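
    If one wanted to mimic that, GridSearchCV also accepts a CV splitter object instead of a fold count, so something along these lines should work (a sketch; the default R² scoring is undefined on single-sample test folds, hence the explicit scoring argument, and leave-one-out makes the search even slower):

    from sklearn.model_selection import LeaveOneOut

    loo_gp = GridSearchCV(gp, cv=LeaveOneOut(), param_grid=param_grid,
                          scoring='neg_mean_squared_error', n_jobs=4)
    loo_gp.fit(X, y)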