I'm trying to solve numerically a system of non-linear equations:
import math

def func(p):
    x, f = p
    # O, L, W are model parameters defined at module level
    return (math.exp(-x/O) - f,
            L - L*((1 - math.exp(-x/O))**W) - x*math.exp(-x/O))
and I'm currently using scipy.optimize.fsolve for it in the following way:
from scipy.optimize import fsolve
x, f = fsolve(func, (10, 0.2))
I'm sure my way of using fsolve is correct: it works perfectly for a certain range of parameters. However, it fails completely for other parameter sets (e.g., O=8, L=1.67, W=8) with the following warning:
RuntimeWarning: The number of calls to function has reached maxfev = 600.
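For reference, here is a minimal self-contained reproduction of the failing case (the full_output=True flag is only there to expose the solver's status; it is not part of my original code):

import math
from scipy.optimize import fsolve

O, L, W = 8, 1.67, 8   # the failing parameter set

def func(p):
    x, f = p
    return (math.exp(-x/O) - f,
            L - L*((1 - math.exp(-x/O))**W) - x*math.exp(-x/O))

# full_output=True returns diagnostics in addition to the root estimate
sol, info, ier, msg = fsolve(func, (10, 0.2), full_output=True)
print(ier, msg)   # ier != 1 means fsolve did not converge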
I'm sure the system is solvable: there are (at least) MATLAB tools that handle it. Is there anything I'm doing wrong, or is there another solver I could try?
Thanks in advance for any hints!
Nonlinear optimization and root-finding are unfortunately sensitive to the choice of the starting point. With a different initial guess, fsolve converges fine for your failing parameter set:
In [19]: def func(p):
   ....:     x, f = p
   ....:     return [np.exp(-x/O) - f, L - L*((1 - np.exp(-x/O))**W) - x*np.exp(-x/O)]
   ....:
In [20]: O, L, W = 8, 1.67, 8
In [21]: res = fsolve(func, [1, 1.2])
In [22]: res
Out[22]: array([ 2.19804447, 0.75975782])
In [23]: func(res)
Out[23]: [-2.2204460492503131e-16, -4.4408920985006262e-15]
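Since success hinges on the starting point, one pragmatic workaround is to launch fsolve from a small grid of initial guesses and keep only the runs that report convergence. A rough sketch, assuming your original parametrization (the particular grid of guesses below is arbitrary):

import itertools
import numpy as np
from scipy.optimize import fsolve

O, L, W = 8, 1.67, 8

def func(p):
    x, f = p
    return [np.exp(-x/O) - f,
            L - L*((1 - np.exp(-x/O))**W) - x*np.exp(-x/O)]

roots = []
for x0, f0 in itertools.product([0.5, 1, 2, 5, 10], [0.1, 0.5, 0.9]):
    sol, info, ier, msg = fsolve(func, [x0, f0], full_output=True)
    if ier == 1:                      # ier == 1 means fsolve reports convergence
        roots.append(tuple(np.round(sol, 6)))

print(set(roots))                     # distinct roots found across all starts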
Notice that your parametrization seems redundant: the position of the root does not depend on O and L separately, only on the ratio O/L, so you can parameterize everything in terms of O/L and exp(-x/O). That will likely make things much easier for the solver.
EDIT: Define y = exp(-x/O). Then your first equation just says f = y, so the root-finding is actually one-dimensional: substituting x = -O*log(y) into the second equation and dividing by L gives 1 - (1-y)**W + (O/L)*y*log(y) = 0, which involves only W and O/L. For a one-dimensional root you can use e.g. brentq, which is more robust (a sketch follows after the transcript below), but fsolve works as well:
In [43]: ry = fsolve(lambda y: 1 - (1-y)**W + (O/L)*y*np.log(y), 0.4)
In [44]: ry
Out[44]: array([ 0.75975782])
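For completeness, a sketch of the brentq route on the same one-dimensional residual. brentq needs a bracketing interval where the residual changes sign, so the snippet locates one with a crude scan; the scan grid and the back-substitution step are my own additions:

import numpy as np
from scipy.optimize import brentq

O, L, W = 8, 1.67, 8

def g(y):
    # one-dimensional residual in y = exp(-x/O)
    return 1 - (1 - y)**W + (O/L)*y*np.log(y)

# crude scan over (0, 1) to locate a sign change for brentq to bracket
ys = np.linspace(1e-3, 1 - 1e-3, 200)
vals = g(ys)
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]

y = brentq(g, ys[idx], ys[idx + 1])
x = -O*np.log(y)      # back-substitute y = exp(-x/O)
f = y                 # first equation: f = exp(-x/O)
print(x, f)           # should reproduce roughly 2.198 and 0.7598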