Tags: python, optimization, mathematical-optimization, genetic-algorithm, genetic-programming

Parameter optimization with weights


For this function:

import numpy as np

def my_function(param1, param2, param3, param4):
    return param1 + 3*param2 + 5*param3 + np.power(5, 3) + np.sqrt(param4)

print(my_function(1,2,3,4))

This prints 149.0

How can I make it return 100 instead of 149.0, or as close to 100 as possible, subject to the following constraints on my_function's parameters: param1 must be in range 10-20, param2 in range 20-30, param3 in range 30-40, and param4 in range 40-50?

I'm not asking for a specific solution to this problem, but what domain does it fall into? Reading https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html and "Parameter Optimization in Python" suggests this is possible with out-of-the-box solutions (in low dimensions). Can genetic programming be applied to this problem?


Solution

  • Basically, you want to minimize the error between the function's output and the target value.

    EDIT: The obvious choice here is to use gradient descent on the 4 parameters (a sketch of that route follows). If you do not want to do that and are asking for a more pragmatic solution, here it is.
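    A minimal sketch of that route, under two assumptions of mine: a squared-error objective, and scipy's L-BFGS-B (a gradient-based quasi-Newton method that respects the box constraints) in place of hand-rolled gradient descent:

    import numpy as np
    from scipy import optimize

    def my_function(param1, param2, param3, param4):
        return param1 + 3*param2 + 5*param3 + np.power(5, 3) + np.sqrt(param4)

    def loss(x):
        # squared distance between the function's output and the target of 100
        return (my_function(*x) - 100) ** 2

    bounds = [(10, 20), (20, 30), (30, 40), (40, 50)]
    x0 = [lo for lo, hi in bounds]  # start at the lower corner of the box
    res = optimize.minimize(loss, x0, method="L-BFGS-B", bounds=bounds)
    print(res.x, my_function(*res.x))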

    The main problem here is that there are 4 parameters. To deal with this, you can do the following:

    1. Fix any 3 parameters and leave the last one free; we will solve for its value.
    2. Find the inverse function and solve it, either explicitly or with a solver (a numerical method such as Newton-Raphson or Brent's method); both options are sketched below.

    I will describe a process to demonstrate this idea. We will use scipy's minimize_scalar, whose "bounded" method employs Brent's algorithm.

    For the sake of discussion, let's reduce your function to 2 parameters, and let's assume it is:

    def f(p1, p2):
        return p1 + np.sqrt(p2)
    

    You are basically asking how to find p1 and p2 values such that f(p1, p2) = 100. Assuming the ranges are the following:

    • range for p1: 10-20
    • range for p2: 20-30

    Let's fix p1 to 10 (you are free to fix it to anything in this range). Now the function becomes

    def g(p2):
        return 10 + np.sqrt(p2)
    
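    As an aside, both of step 2's options can be shown on g. Solving explicitly, g(p2) = 10 + sqrt(p2) inverts to p2 = (target - 10)**2, clipped to the allowed range when the target is out of reach. A minimal sketch (inverse_g is a hypothetical helper, not part of the original code):

    def inverse_g(target):
        # invert g(p2) = 10 + sqrt(p2)  =>  p2 = (target - 10)**2
        p2 = (target - 10) ** 2
        return min(max(p2, 20.0), 30.0)  # clip to p2's range, 20-30

    print(inverse_g(15))   # (15 - 10)**2 = 25, inside the range
    print(inverse_g(100))  # 8100 before clipping, so this prints 30.0

    The solver route works the same way whenever a root exists in the interval; scipy's brentq implements Brent's root-finding method. A target of 15 is used here purely because it is reachable:

    import numpy as np
    from scipy import optimize
    root = optimize.brentq(lambda p2: (10 + np.sqrt(p2)) - 15, 20, 30)
    print(root)  # ≈ 25.0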

    We want this to be as close to 100 as possible, so let's create an error function which measures how far our estimate is from 100.

    def error(p2):
        return abs(100 - (10 + np.sqrt(p2)))  # we want to minimize this
    

    You can find the value of p2 that minimizes this error, bringing the estimate as close to 100 as possible, through

    from scipy import optimize
    result = optimize.minimize_scalar(error, bounds=(20, 30), method="bounded")
    print(result.x)
    

    which gives x ≈ 30 as the value that minimizes the error. Since g tops out at 10 + sqrt(30) ≈ 15.5 on this interval, the error decreases monotonically in p2 and the optimizer pushes p2 to its upper bound, leaving a residual error of about 84.5.
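    The same recipe extends to your original four-parameter function: fix any three parameters, define the error as the absolute distance from the target, and minimize over the remaining one. A minimal sketch, fixing the first three parameters at their lower bounds (an arbitrary choice):

    import numpy as np
    from scipy import optimize

    def my_function(param1, param2, param3, param4):
        return param1 + 3*param2 + 5*param3 + np.power(5, 3) + np.sqrt(param4)

    def error(param4):
        # param1, param2, param3 fixed at their lower bounds
        return abs(100 - my_function(10, 20, 30, param4))

    result = optimize.minimize_scalar(error, bounds=(40, 50), method="bounded")
    print(result.x, my_function(10, 20, 30, result.x))

    Here the optimizer pushes param4 toward its lower bound (x ≈ 40): my_function increases in every parameter, and even at the lower corner of the box its value is 10 + 60 + 150 + 125 + sqrt(40) ≈ 351.3, already well above 100. Under these constraints 100 is unreachable, and the best you can do is (10, 20, 30, 40).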