I've used GPyOpt to optimise a model with many dimensions:
opt = BayesianOptimization(f=my_eval_func, domain=domain, constraints=constraints)
opt.run_optimization(max_iter=20)
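(For context, the domain and constraints follow GPyOpt's usual list-of-dicts format; the variable names and bounds below are illustrative placeholders, not my real model.)

domain = [{'name': 'x1', 'type': 'continuous', 'domain': (0, 1)},
          {'name': 'x2', 'type': 'continuous', 'domain': (0, 1)}]
constraints = [{'name': 'constr_1', 'constraint': 'x[:,0] + x[:,1] - 1'}]  # satisfied when the expression is <= 0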
After doing so I can retrieve the optimal co-ordinates with opt.x_opt and the model cost with opt.fx_opt. However, I'm also interested in the variance of fx at this optimal location. How do I achieve this?
I solved this for myself by applying the internal GP model to the optimised x_opt variable, i.e., m.model.predict(m.x_opt). However, the results are, I think, in a normalised and offset coordinate space, so a linear transformation back to the original scale is needed, e.g.:
import numpy as np

def get_opt_est(m):
    # Map the model's normalised predictions back onto the scale of the observed Y values
    X = []
    pred_X = []
    for x, y in zip(m.X, m.Y):
        X.append(y[0])                           # observed objective values
        pred_X.append(m.model.predict(x)[0][0])  # model's (normalised) predicted means at the same points
    # Linear map from the model's output range to the observed range
    scale = (np.max(X) - np.min(X)) / (np.max(pred_X) - np.min(pred_X))
    offset = np.min(X) - np.min(pred_X) * scale
    pred = m.model.predict(m.x_opt)
    # pred[0] is the predictive mean, pred[1] the predictive uncertainty; rescale both
    return (pred[0][0] * scale + offset, pred[1][0] * scale)

print("Predicted loss and variance is", get_opt_est(opt))