Tags: machine-learning, hyperparameters

How to tune machine learning hyperparameters using MOE?


I am trying to use MOE, the "Metric Optimization Engine" created at Yelp, to tune hyperparameters for a machine learning algorithm. Their documentation is a bit limited and I'm having a hard time finding examples to follow.

Say that I would like to find the optimal values for C, Gamma, and kernel type for a Support Vector Machine, based on the following distributions:

SVC_PARAMS = [
    {
        "bounds": {
            "max": 10.0,
            "min": 0.01,
        },
        "name": "C",
        "type": "double",
        "transformation": "log",
    },
    {
        "bounds": {
            "max": 1.0,
            "min": 0.0001,
        },
        "name": "gamma",
        "type": "double",
        "transformation": "log",
    },
    {
        "type": "categorical",
        "name": "kernel",
        "categorical_values": [
            {"name": "rbf"},
            {"name": "poly"},
            {"name": "sigmoid"},
        ],
    },
]

The objective function that I'm trying to maximize is the accuracy score of my training set.

How would I accomplish this using MOE's API?


Solution

  • MOE doesn't support categorical variables; it only allows continuous hyperparameters. To achieve what you are looking for, you could treat each categorical value as a separate optimization problem and then use the flow outlined in the MOE examples for each one. At the end you could pick the best model from among the tuned models of each kernel type.

    Alternatively, you could use SigOpt, which was built by the team that created MOE. We build upon and extend a lot of the work that was started in MOE. It supports continuous, integer, and categorical parameters, along with many other features and enhancements not found in MOE. We outline this exact example, with code, in a blog post. You can run this example within our free trial tier or our free academic tier.
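    The per-kernel workaround described above can be sketched as follows. This is not MOE's actual API: the `accuracy` objective is a toy surrogate, and the random draw over log-scaled C and gamma is a stand-in for MOE's Bayesian suggestion step (in a real setup, each iteration would ask a running MOE server for the next point and report the observed score back). Only the overall flow — one continuous experiment per kernel, then pick the winner — reflects the suggestion:

    ```python
    import math
    import random

    random.seed(0)

    # Hypothetical stand-in for the real objective. In practice this would
    # train an SVC with the given kernel, C, and gamma, and return a
    # (cross-validated) accuracy; here it is a smooth toy function so the
    # flow runs end to end.
    def accuracy(kernel, C, gamma):
        peak = {"rbf": (1.0, 0.1), "poly": (5.0, 0.01), "sigmoid": (0.5, 0.001)}[kernel]
        return 1.0 / (1.0 + math.log10(C / peak[0]) ** 2
                          + math.log10(gamma / peak[1]) ** 2)

    def tune_continuous(kernel, n_iter=200):
        """One 'experiment' per kernel: optimize C and gamma on a log scale
        with a fixed kernel. MOE's next-point suggestion would replace the
        random draw inside the loop."""
        best = None
        for _ in range(n_iter):
            # Sample log-uniformly over the bounds from SVC_PARAMS.
            C = 10 ** random.uniform(math.log10(0.01), math.log10(10.0))
            gamma = 10 ** random.uniform(math.log10(0.0001), math.log10(1.0))
            score = accuracy(kernel, C, gamma)
            if best is None or score > best[0]:
                best = (score, C, gamma)
        return best

    # Run one continuous optimization per categorical value, then pick the
    # overall winner across kernels.
    results = {k: tune_continuous(k) for k in ["rbf", "poly", "sigmoid"]}
    best_kernel = max(results, key=lambda k: results[k][0])
    print(best_kernel, results[best_kernel])
    ```

    The key design point is that the categorical dimension never enters the optimizer: each kernel gets its own continuous (C, gamma) search space, and the comparison across kernels happens only after each search finishes.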