
scipy.optimize.fsolve convergence bug?


Here's the code:

import numpy as np
import scipy.optimize as sco

def g(rho):
    return 0.5 * (rho**2 + rho) * np.exp(-rho)

p = 0.01017036
guess = 1.5879245860401234
sol = sco.fsolve(lambda rho: g(rho) - p, guess, full_output=True)

print(g(sol[0]) - p)
print(sol)

The output is

[ 0.40970908]
(array([ 1.58792459]), {'qtf': array([-0.40970908]), 'nfev': 4, 'r': array([  9.52007670e+26]), 'fjac': array([[-1.]]), 'fvec': array([ 0.40970908])}, 1, 'The solution converged.')

It says it converged, but it clearly didn't: g(sol[0]) - p should be much closer to zero than 0.4.

I think there's a problem with the convergence test used to flag failures. I know I can change the guess and obtain the correct root, but that isn't a workable fix here: I have to find thousands of roots, and I'm getting a lot of these false roots, so the error reporting is not reliable. Am I doing something wrong?

Thanks in advance.


Solution

  • One workaround is to recast the root-finding problem as a minimization: instead of calling fsolve on the residual, minimize the squared residual with optimize.fmin:

    import numpy as np
    import scipy.optimize as sco
    
    def g(rho):
        return 0.5 * (rho**2 + rho) * np.exp(-rho)
    
    p = 0.01017036
    guess = 1.5879245860401234
    sol = sco.fmin(lambda rho: (g(rho) - p)**2, guess)
    
    print(sol)
    Optimization terminated successfully.
             Current function value: 0.000000
             Iterations: 23
             Function evaluations: 46
    [ 8.2240227] 
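Note that fmin minimizes the squared residual, so it can also terminate "successfully" at a local minimum that is not a root. A quick sanity check (a sketch, not part of the original answer) is to evaluate the residual at the reported minimizer:

```python
import numpy as np
import scipy.optimize as sco

def g(rho):
    return 0.5 * (rho**2 + rho) * np.exp(-rho)

p = 0.01017036
guess = 1.5879245860401234

# Minimize the squared residual; fmin passes rho in as a length-1 array.
sol = sco.fmin(lambda rho: (g(rho[0]) - p)**2, [guess], disp=False)

# A small minimum of the squared residual does not by itself prove a root,
# so evaluate the residual at the reported minimizer explicitly.
residual = g(sol[0]) - p
print(sol[0], residual)
```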
    

    It might also have to do with a bad guess:

    # f and gp are not defined in the original answer; assume they are the
    # residual and its analytic derivative with respect to rho:
    def f(rho, p):
        return g(rho) - p
    
    def gp(rho, p):
        return 0.5 * (1 + rho - rho**2) * np.exp(-rho)
    
    p = 0.01017036
    guess = 1.5879245860401234
    print(sco.fsolve(f, 10, fprime=gp, args=(p,), full_output=1))
    print(sco.fsolve(lambda rho: g(rho) - p, 10, full_output=True))
    
    print('\n==================bad_guess=====================')
    print(sco.fsolve(f, guess, fprime=gp, args=(p,), full_output=1))
    print(sco.fsolve(lambda rho: g(rho) - p, guess, full_output=True))
    #(array([ 8.22399478]), {'fvec': array([ -1.90472638e-15]), 'qtf': array([  1.87372265e-10]), 'nfev': 10, 'r': array([ 0.00783117]), 'fjac': array([[-1.]]), 'njev': 1}, 1, 'The solution converged.')
    #(array([ 8.22399478]), {'qtf': array([  1.87372918e-10]), 'nfev': 11, 'r': array([ 0.00783117]), 'fjac': array([[-1.]]), 'fvec': array([ -1.90472638e-15])}, 1, 'The solution converged.')
    #
    #==================bad_guess=====================
    #(array([ 1.58792459]), {'fvec': array([ 0.40970908]), 'qtf': array([-0.40970908]), 'nfev': 3, 'r': array([  9.52013062e+26]), 'fjac': array([[-1.]]), 'njev': 1}, 1, 'The solution converged.')
    #(array([ 1.58792459]), {'qtf': array([-0.40970908]), 'nfev': 4, 'r': array([  9.52007670e+26]), 'fjac': array([[-1.]]), 'fvec': array([ 0.40970908])}, 1, 'The solution converged.')
    

    Convergence is probably judged from the Jacobian estimate, and this particular guess makes that estimate blow up (note r ≈ 9.52e+26 in the diagnostic output), so the computed step is essentially zero and the step-size test passes. Notice that fjac is the same ([[-1.]]) for rho=8.22399478 and rho=1.58792459, and 1.58792459 is exactly the starting guess: fsolve apparently never moved away from the initial point.
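Since the question involves scanning thousands of roots, one defensive pattern (a sketch under stated assumptions, not part of the original answer) is to accept fsolve's result only when the residual reported in full_output is actually small, and otherwise fall back to a bracketing method such as scipy.optimize.brentq, which is guaranteed to converge once a sign change has been bracketed. The search interval [0, 50] and the choice of the right-most crossing are assumptions tailored to this particular g and p:

```python
import numpy as np
import scipy.optimize as sco

def g(rho):
    return 0.5 * (rho**2 + rho) * np.exp(-rho)

def find_root(p, guess, tol=1e-8):
    """Try fsolve first; trust it only if the residual is actually small."""
    f = lambda rho: g(rho) - p
    root, info, ier, msg = sco.fsolve(f, guess, full_output=True)
    if ier == 1 and abs(info['fvec'][0]) < tol:
        return root[0]
    # Fallback: bracket a sign change on a coarse grid, then refine with brentq.
    grid = np.linspace(0.0, 50.0, 1001)
    vals = f(grid)
    crossings = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(crossings) == 0:
        raise ValueError(f"no sign change bracketed for p={p}")
    i = crossings[-1]  # right-most crossing; picks out the rho ~ 8.22 root here
    return sco.brentq(f, grid[i], grid[i + 1])

p = 0.01017036
root = find_root(p, 1.5879245860401234)  # the bad guess triggers the fallback
print(root, g(root) - p)
```

Here the bad guess makes fsolve report "converged" with a residual of 0.41, which fails the explicit check, so brentq takes over and recovers the root near 8.224.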