Tags: numpy, optimization, mathematical-optimization, expectation-maximization

Bounded Optimization using scipy Libraries


Suppose we have a log-likelihood function f(x, y, z) = prob(k0) * log((1-x)^(1-y)) + prob(k1) * log((1-x)^(1-z)), with constraints requiring that x, y, and z each lie between 0 and 1. The goal is to minimize the function and return the values of x, y, and z at that minimum.
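
For concreteness, here is a minimal sketch of how such an f might be written in the (params, data) form that scipy.optimize.minimize expects; how prob(k0) and prob(k1) are obtained from data is an assumption here:

    import numpy as np

    # Hypothetical sketch of the log-likelihood above; the unpacking of
    # `data` into the two probabilities is an assumption for illustration.
    def f(params, data):
        x, y, z = params
        prob_k0, prob_k1 = data  # assumed: data carries prob(k0) and prob(k1)
        # log((1-x)^(1-y)) == (1-y)*log(1-x), and similarly for the second term
        return prob_k0 * (1 - y) * np.log(1 - x) + prob_k1 * (1 - z) * np.log(1 - x)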

I tried the conjugate gradient (CG) method from the scipy library:

    params = sp.optimize.minimize(fun=f, x0=initial_params, args=(data,), method='CG',
                                  jac=df, options={'gtol': 1e-05, 'disp': True})

The method fails in the first iteration with:

Warning: Desired error not necessarily achieved due to precision loss.

Do I need to provide the Hessian matrix, since there are more than two variables?

I also tried the Nelder-Mead method, but it takes a lot of time.

    params = sp.optimize.minimize(fun=f, x0=initial_params, method='Nelder-Mead', args=(data,), options={'disp': True})

Also, more importantly, the method does not take the bounds on the variables into account and in some cases returns values of x, y, and z that are out of bounds.

Is there any other method in scipy, or another package, that handles this kind of bounded optimization? Please help.


Solution

  • According to the scipy.optimize.minimize documentation (see the "bounds" parameter), bounds are only supported by the L-BFGS-B, TNC, SLSQP and trust-constr methods. L-BFGS-B should probably be your first choice; it typically works very well on most problems (see the sketch below).
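
A minimal, self-contained sketch of calling L-BFGS-B with bounds follows. The quadratic objective and its gradient here are toy stand-ins for your f and df; the point is the bounds= argument, which takes one (min, max) pair per variable:

    import numpy as np
    from scipy.optimize import minimize

    # Toy objective and gradient standing in for the real f and df.
    def f(params):
        x, y, z = params
        return (x - 0.3) ** 2 + (y - 0.7) ** 2 + (z + 0.5) ** 2  # unconstrained minimum has z = -0.5

    def df(params):
        x, y, z = params
        return np.array([2 * (x - 0.3), 2 * (y - 0.7), 2 * (z + 0.5)])

    bounds = [(0, 1), (0, 1), (0, 1)]           # one (min, max) pair for each of x, y, z
    initial_params = np.array([0.5, 0.5, 0.5])

    result = minimize(fun=f, x0=initial_params, jac=df,
                      method='L-BFGS-B', bounds=bounds,
                      options={'disp': True})

    print(result.x)  # roughly [0.3, 0.7, 0.0]; z is clipped to its lower bound

The same call pattern works with your actual objective and gradient: keep args=(data,) as in your CG call and add bounds=bounds.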