Consider a simple problem:
min log(x)
subject to x >= 1e-4
To solve the problem with scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize
from math import log

def func(x):
    return log(x[0])

def func_deriv(x):
    return np.array([1 / x[0]])

cons = {'type': 'ineq',
        'fun': lambda x: x[0] - 1e-4,
        'jac': lambda x: np.array([1.0])}

minimize(func, [1.0], jac=func_deriv, constraints=cons, method='SLSQP')
The script raises ValueError: math domain error, because log(x) is evaluated at a negative x. Apparently the objective is evaluated even at points where the constraint is not satisfied.
I understand that passing bounds to minimize() would avoid the problem, but this is only a simplification of my original problem. There, the constraint x >= 1e-4 cannot be expressed as a simple bound on x; it has the more general form g(x) >= C, so bounds wouldn't help.
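For concreteness, a general constraint g(x) >= C fits the same constraint-dict shape. A minimal sketch with a made-up g (product of two variables) and a made-up objective, neither taken from the original problem:

```python
import numpy as np
from scipy.optimize import minimize

C = 1e-4

def g(x):
    # hypothetical constraint function: g(x) = x0 * x1
    return x[0] * x[1]

# constraint g(x) >= C, written in the scipy form g(x) - C >= 0
cons = {'type': 'ineq', 'fun': lambda x: g(x) - C}

# illustrative objective: squared norm, which pushes x against the constraint
res = minimize(lambda x: x[0]**2 + x[1]**2, [1.0, 1.0],
               constraints=cons, method='SLSQP')
print(res.x)  # the constraint is active at the solution
```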
If we only care about the function value for x > ε, we can define a safe replacement that extends the domain. Take the log function as an example: we can continue log below ε with a cubic, chosen so the two pieces join smoothly at the junction point ε:
safe_log(x) = log(x) if x > ε else a * (x - b)**3
To calculate a and b, we match the value and the first derivative of the two pieces at ε:
log(ε) = a * (ε - b)**3
1 / ε = 3 * a * (ε - b)**2
Dividing the first equation by the second gives ε * log(ε) = (ε - b) / 3, hence b = ε * (1 - 3 * log(ε)); the derivative condition then gives a = 1 / (3 * ε * (3 * ε * log(ε))**2).
Hence the safe_log function:
from math import log

eps = 1e-3

def safe_log(x):
    if x > eps:
        return log(x)
    logeps = log(eps)
    a = 1 / (3 * eps * (3 * logeps * eps)**2)
    b = eps * (1 - 3 * logeps)
    return a * (x - b)**3
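A quick numerical check of the junction (the step h in the central difference below is an arbitrary choice):

```python
from math import log

eps = 1e-3

def safe_log(x):
    if x > eps:
        return log(x)
    logeps = log(eps)
    a = 1 / (3 * eps * (3 * logeps * eps)**2)
    b = eps * (1 - 3 * logeps)
    return a * (x - b)**3

# the two branches agree in value at eps
print(abs(safe_log(eps) - log(eps)))
# and in slope: a central difference across the junction should be close to 1/eps
h = 1e-8
slope = (safe_log(eps + h) - safe_log(eps - h)) / (2 * h)
print(abs(slope - 1 / eps))
# unlike log, safe_log is defined for any real x
print(safe_log(-1.0))
```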
And it looks like this: