I'm using hyperas to optimize a function, and it is not returning the best result. During the run the printout reads as follows:
100%|██████████| 100/100 [7:01:47<00:00, 411.15s/it, best loss: 5.1005506645909895e-05]
but afterwards when I print the results of the best model I get
5.8413380939757486e-05
This has happened a couple of times now and I don't understand why. I wrote a reproducible example, and it shows the same problem:
from hyperopt import Trials, STATUS_OK, tpe
from hyperas import optim
from hyperas.distributions import uniform

def test_function():
    x = {{uniform(-23, 23)}}
    function = x**2 + x
    return {'loss': function, 'status': STATUS_OK, 'model': function}
    # just a dummy function to get the optimization to run; my real function uses real data

def data_example():
    print('skip')
    return [0, 1, 2]
trials = Trials()
# trials = pickle.load(open(trials_file, "rb"))
print('started new set of optimization runs')

if __name__ == '__main__':
    best_run, best_model = optim.minimize(model=test_function,
                                          data=data_example,
                                          algo=tpe.suggest,
                                          trials=trials,
                                          max_evals=100)
    print(best_run)
Last time I ran this the status bar showed
100%|██████████| 100/100 [00:00<00:00, 498.77it/s, best loss: -0.24773021221244024]
and print(best_run) showed
{'x': -0.5476422899067598}
Why is my best_run result not lining up with the smallest loss from the optimization run?
Have you considered that best_run and best loss are not the same thing? best_run returns the argmin of your loss, which for f(x) = x**2 + x is indeed close to x = -1/2, while best loss reports the minimum value of the loss itself, which is f(-1/2) = -1/4. You are comparing a parameter value with the loss evaluated at that parameter value, so the two numbers are not supposed to match.
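You can check this quickly by plugging the returned argmin back into your dummy objective. This is just a sketch using the best_run value from your run above:

# Sanity check (sketch): evaluate the dummy objective at the argmin hyperas returned.
best_x = best_run['x']                 # -0.5476422899067598 from your output
loss_at_best_x = best_x**2 + best_x
print(loss_at_best_x)                  # ~ -0.24773..., the "best loss" shown in the progress bar

# The same value can also be read back from the trials object:
print(trials.best_trial['result']['loss'])

Both prints should agree with the "best loss" figure in the progress bar, confirming that best_run holds the hyperparameter(s) and best loss holds the objective value at those hyperparameters.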