
Hyperopt Exploration/Exploitation strategy


What kind of settings does Hyperopt provide to adjust the balance between exploration and exploitation? There is something like "bandit" and "bandit_algo" in the code, but no explanation.

Could someone provide a code sample?

Thanks a lot for any help!


Solution

  • I just found hyperopt's partial(), a magical wrapper function for the optimizer algo. It lets you mix different suggest strategies and thereby tune the exploration/exploitation balance:

    Partial returns the result of a randomly-chosen suggest function. For example to search by sometimes using random search, sometimes anneal, and sometimes tpe, type:

    from functools import partial
    from hyperopt import fmin, mix, rand, anneal, tpe

    fmin(...,
         algo=partial(mix.suggest,
                      p_suggest=[
                          (.1, rand.suggest),
                          (.2, anneal.suggest),
                          (.7, tpe.suggest),
                      ]),
         )
    
    

    Parameter "p_suggest": list of (probability, suggest) pairs. Make a suggestion from one of the suggest functions, in proportion to its corresponding probability. sum(probabilities) must be [close to] 1.0.

    If you want even sharper control over the algo's progression, you can use the fact that hyperopt's optimizer algos are stateless: all of their state lives in the trials object, and passing the same trials object to a new fmin call continues the process from where it left off. You can therefore handle the process in a loop yourself, calling fmin with max_evals raised by one each iteration, which lets you modify "trials" and the suggest algo between iterations.
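
    A rough sketch of that loop, again with a placeholder objective and space; the switch from anneal to tpe after 30 evaluations is an arbitrary schedule chosen for illustration:

    from hyperopt import fmin, hp, anneal, tpe, Trials

    space = hp.uniform('x', -10, 10)   # placeholder search space
    trials = Trials()                  # all optimizer state lives here

    for i in range(100):
        # Hand-rolled E/E schedule: anneal (more exploratory) for the first
        # 30 evaluations, then TPE (more exploitative) for the rest.
        algo = anneal.suggest if i < 30 else tpe.suggest
        best = fmin(fn=lambda x: x ** 2,               # placeholder objective
                    space=space,
                    algo=algo,
                    trials=trials,                     # reuse the same trials object
                    max_evals=len(trials.trials) + 1)  # run exactly one more evaluation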