The following is a self-contained example illustrating my problem.
using Optim
using Plots
χI = 3
ψI = 0.5
ϕI(z) = z^-ψI
λ = 1.0532733
V0 = 0.8522423425
zE = 0.5986
wRD = 0.72166623555
objective1(z) = -(z * χI * ϕI(z + zE) * (λ-1) * V0 - z * ( wRD ))
objective2(z) = -1 * objective1(z)
lower = 0.01
upper = Inf
plot(0:0.01:0.1,objective1,title = "objective1")
png("/home/nico/Desktop/objective1.png")
plot(0:0.01:0.1,objective2, title = "objective2")
png("/home/nico/Desktop/objective2.png")
results1 = optimize(objective1,lower,upper)
results2 = optimize(objective2,lower,upper)
The plots of objective1 and objective2 are shown in the attached images.
Both objective1(z) and objective2(z) return NaN at z = 0 and finite values everywhere else, with an optimum at some z > 0.
However, the output of results1 is
Results of Optimization Algorithm
* Algorithm: Brent's Method
* Search Interval: [0.010000, Inf]
* Minimizer: Inf
* Minimum: NaN
* Iterations: 1000
* Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): false
* Objective Function Calls: 1001
and the output of results2 is
Results of Optimization Algorithm
* Algorithm: Brent's Method
* Search Interval: [0.010000, Inf]
* Minimizer: Inf
* Minimum: NaN
* Iterations: 1000
* Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): false
* Objective Function Calls: 1001
I believe the problem is with upper = Inf. If I change that to upper = 100, for example, the output of results1 is
Results of Optimization Algorithm
* Algorithm: Brent's Method
* Search Interval: [0.010000, 100.000000]
* Minimizer: 1.000000e-02
* Minimum: 5.470728e-03
* Iterations: 55
* Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): true
* Objective Function Calls: 56
and results2 returns
Results of Optimization Algorithm
* Algorithm: Brent's Method
* Search Interval: [0.010000, 100.000000]
* Minimizer: 1.000000e+02
* Minimum: -7.080863e+01
* Iterations: 36
* Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): true
* Objective Function Calls: 37
as expected.
As you note in your question, you use a bounded optimization algorithm but pass an unbounded interval to it.
Citing the documentation (https://julianlsolvers.github.io/Optim.jl/latest/#user/minimization/), which is precise about this: the optimize function is for "minimizing a univariate function on a bounded interval".
To give more detail about the problem you encounter: the optimize method searches points inside your interval. Two algorithms are implemented: Brent's method (the default) and golden-section search. The first point they check is
new_minimizer = x_lower + golden_ratio*(x_upper-x_lower)
and you can see that new_minimizer will be Inf when x_upper is Inf, so the optimization routine is not even able to find a valid interior point. Then you see that your functions return NaN for an Inf argument:
julia> objective1(Inf)
NaN
julia> objective2(Inf)
NaN
Together, this explains why the minimizer found is Inf and the objective is NaN in the produced output.
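To make this concrete, here is a sketch that reproduces the failure by hand. It assumes the golden-ratio step constant 0.5*(3 - sqrt(5)) that golden-section-style bracketing typically uses; the exact constant inside Optim may differ, but any positive fraction of an infinite interval gives the same result:

```julia
# First trial point of the bracketing step: with x_upper = Inf the
# very first candidate is already Inf.
golden_ratio = 0.5 * (3.0 - sqrt(5.0))   # ≈ 0.381966 (assumed constant)
x_lower, x_upper = 0.01, Inf
new_minimizer = x_lower + golden_ratio * (x_upper - x_lower)  # Inf

# The objective is then NaN there: ϕI(Inf) == Inf^-0.5 == 0.0, so the
# product z * ϕI(z + zE) becomes Inf * 0.0, which is NaN under IEEE rules.
@show new_minimizer   # Inf
@show Inf * 0.0       # NaN
```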
The second point is that Float64 numbers have finite precision, so you should choose the interval so that the method can actually evaluate the objective accurately inside it. For example, even this fails:
julia> optimize(objective1, 0.0001, 1.0e308)
Results of Optimization Algorithm
* Algorithm: Brent's Method
* Search Interval: [0.000100, 100000000000000001097906362944045541740492309677311846336810682903157585404911491537163328978494688899061249669721172515611590283743140088328307009198146046031271664502933027185697489699588559043338384466165001178426897626212945177628091195786707458122783970171784415105291802893207873272974885715430223118336.000000]
* Minimizer: 1.000005e+308
* Minimum: -Inf
* Iterations: 1000
* Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): false
* Objective Function Calls: 1001
The reason is that objective1 starts to behave in a numerically unstable way for very large arguments (because Float64 has finite precision); see:
julia> objective1(1.0e307)
7.2166623555e306
julia> objective1(1.0e308)
-Inf
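The -Inf comes from overflow in an intermediate product: the expression evaluates z * χI first (left to right), and for z = 1.0e308 that product already exceeds the largest representable Float64:

```julia
# 3 * 1.0e308 ≈ 3e308 > floatmax(Float64) ≈ 1.8e308, so the product
# overflows to Inf; the rest of objective1 then propagates it to -Inf.
z = 1.0e308
@show z * 3               # Inf
@show floatmax(Float64)   # 1.7976931348623157e308
```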
The last point is that Optim actually tells you that something went wrong, and that you should not rely on the results for the initial specification of the problem (with Inf):
julia> results1.converged
false
julia> results2.converged
false
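Putting this together, a minimal safe pattern is to pass a finite bracket and check convergence before trusting the result. The upper bound 100.0 below is just an example choice, as in your second run; pick one appropriate for your model:

```julia
using Optim

# Model constants from the question.
χI, ψI = 3, 0.5
ϕI(z) = z^-ψI
λ, V0, zE, wRD = 1.0532733, 0.8522423425, 0.5986, 0.72166623555
objective1(z) = -(z * χI * ϕI(z + zE) * (λ - 1) * V0 - z * wRD)

# Finite bounded interval; check convergence before using the result.
results = optimize(objective1, 0.01, 100.0)
@show Optim.converged(results)   # true
@show Optim.minimizer(results)   # 0.01 (at the lower bound)
```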