Tags: python, scipy, scipy-optimize-minimize

Minimize function with scipy


I'm trying to do one task, but I just can't figure it out.

This is my function:

1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1

I want that sum to be as close to 1 as possible.

And these are my input variables (x,y,z):

test = np.array([1.42, 5.29, 7.75])

So n is the only decision variable.

To summarize:

I have a situation like this right now:

1/(1.42**(1/1)) + 1/(5.29**(1/1)) + 1/(7.75**(1/1)) = 1.02229

And I want to get the following:

1/(1.42^(1/0.972782944446024)) + 1/(5.29^(1/0.972782944446024)) + 1/(7.75^(1/0.972782944446024)) = 0.999625

So far I have roughly nothing, and any help is welcome.

import numpy as np
from scipy.optimize import minimize

def objectiv(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))

test = np.array([1.42, 5.29, 7.75])

print(objectiv(test))

OUTPUT: 1.0222935270013889

How to properly define a constraint?

def conconstraint(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1

And it is not at all clear to me what to do with n: how do I make it the variable being optimized?

EDIT

I managed to do the following:

def objective(n,*args):
    x = odds[0]
    y = odds[1]
    z = odds[2]
    return abs((1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))) - 1)

odds = [1.42,5.29,7.75]

solve = minimize(objective,1.0,args=(odds))

And my output:

fun: -0.9999999931706812
x: array([0.01864994])

And really when put in the formula:

(1/(1.42^(1/0.01864994)) + 1/(5.29^(1/0.01864994)) + 1/(7.75^(1/0.01864994))) -1 = -0.999999993171

Unfortunately I need a positive 1 and I have no idea what to change.


Solution

  • We want to find the n that gets our result, for fixed x, y, and z, as close as possible to 1. minimize tries to make the return value as low as possible, with no lower bound: -3 is better than -2, and so on, so it will not stop at the value you are aiming for.

    So what we actually want is least-squares optimization: a similar idea, but the quantity being minimized is the square of a residual, which cannot go below zero. The documentation for scipy.optimize.least_squares is a bit hard to understand, so I'll try to clarify:

    All these optimization functions have a common design where you pass in a callable that takes at least one parameter, the one you want to optimize for (in your case, n). Then you can have it take more parameters, whose values will be fixed according to what you pass in.

    In your case, you want to be able to solve the optimization problem for different values of x, y and z. So you make your callback accept n, x, y, and z, and pass the x, y, and z values to use when you call scipy.optimize.least_squares. You pass these using the args keyword argument (notice that it is not *args). We can also supply an initial guess of 1 for the n value, which the algorithm will refine.

    The rest is customization that is not relevant for our purposes.

    So, first let us make the callback:

    def objective(n, x, y, z):
        # Residual: how far the sum of reciprocals is from the target 1.
        return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
    

    Now our call looks like:

    best_n = least_squares(objective, 1.0, args=(1.42, 5.29, 7.75))
    
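    Putting the pieces together, here is a complete, runnable sketch. The residual subtracts 1, so least_squares drives the sum of the reciprocals towards 1, and the fitted n lands close to the value of roughly 0.97 the question arrived at by hand:

    ```python
    from scipy.optimize import least_squares

    def objective(n, x, y, z):
        # Residual: how far the sum of reciprocals is from the target 1.
        return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1

    # x, y, z are fixed via args; 1.0 is the initial guess for n.
    result = least_squares(objective, 1.0, args=(1.42, 5.29, 7.75))

    n = result.x[0]
    print(n)                               # close to 0.973
    print(objective(n, 1.42, 5.29, 7.75))  # residual is (almost) 0
    ```

    Because least_squares squares the residual, overshooting and undershooting 1 are penalised equally, which sidesteps the sign problem from the EDIT in the question.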

    (You can call minimize the same way, and it will instead look for an n value that makes the objective function return as low a value as possible. If I am thinking clearly, the guess for n should then trend towards zero: the exponent 1/n grows without bound, the powers x**(1/n) blow up, and the sum of the reciprocals goes towards zero, never negative. The search stops when the improvement becomes small enough, according to the default values of ftol, xtol and gtol. Understanding this part properly is beyond the scope of this answer; try math.stackexchange.com.)
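    That said, the plain-minimize route can work if you hand it a quantity whose smallest possible value sits exactly at the target. One option (my own variation on the abs(...) attempt from the question; squaring keeps the objective smooth) is the squared distance from 1:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def squared_residual(n, x, y, z):
        # minimize passes n as a length-1 array; unwrap it to a scalar.
        n = np.asarray(n).item()
        s = 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))
        # Smallest possible value is 0, reached exactly when the sum equals 1.
        return (s - 1)**2

    solve = minimize(squared_residual, 1.0, args=(1.42, 5.29, 7.75))
    print(solve.x[0])  # again close to 0.973
    print(solve.fun)   # essentially 0
    ```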