
Julia JuMP making sure nonlinear objective function has correct function signatures so that autodifferentiate works properly?


So I wrote a minimal example to show what I'm trying to do. Basically I want to solve an optimization problem with multiple variables. When I try to do this in JuMP, I have issues with my function obj not being able to take a ForwardDiff.Dual object.

I looked at this question, which seemed to be about the function signature: Restricting function signatures while using ForwardDiff in Julia. I applied this to my obj function and, for good measure, to my sub-function as well, but I still get the error

 LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
  Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  Float64(::T) where T<:Number at boot.jl:715
  Float64(::Int8) at float.jl:60

This still does not work. I feel like I have the bulk of the code correct, and there is just some weird type issue I have to clear up so autodifferentiation works...

Any suggestions?

using JuMP
using Ipopt
using LinearAlgebra

function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2, 2] = var
    return eye
end

m = Model(Ipopt.Optimizer)

my_fun(x...) = obj(collect(x))

@variable(m, 0<=x[1:2]<=2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))

optimize!(m)

# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))

Solution

  • Use instead

    function obj(x::Vector{T}) where {T}
        println(x)
        x1 = x[1]
        x2 = x[2]
        eye = Matrix{T}(I, 4, 4)
        obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
        println(obj_val)
        return obj_val
    end

    function mat_fun(var::T) where {T}
        eye = Matrix{T}(I, 2, 2)
        eye[2, 2] = var
        return eye
    end
    

    Essentially, anywhere you see Float64, replace it with the type of the incoming argument. ForwardDiff differentiates your function by calling it with ForwardDiff.Dual numbers instead of Float64s, so every intermediate container must be able to hold whatever Real type comes in; a hard-coded Matrix{Float64} forces a conversion to Float64, which is exactly the MethodError in the question.
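
    The failure mode can be reproduced without JuMP or ForwardDiff at all. The sketch below uses a hypothetical Dualish type as a stand-in for ForwardDiff.Dual (any Real subtype with no conversion to Float64), so the snippet is dependency-free; the names mat_fun_bad and mat_fun_good are mine, not from the question:

    ```julia
    using LinearAlgebra

    # Stand-in for ForwardDiff.Dual: a Real subtype with no Float64 conversion.
    struct Dualish <: Real
        val::Float64
    end
    Base.zero(::Type{Dualish}) = Dualish(0.0)  # needed by Matrix{Dualish}(I, ...)
    Base.one(::Type{Dualish}) = Dualish(1.0)

    # Hard-coded element type: the assignment tries to convert `var` to Float64.
    function mat_fun_bad(var::T) where {T<:Real}
        eye = Matrix{Float64}(I, 2, 2)
        eye[2, 2] = var          # MethodError for Dualish, just like for Dual
        return eye
    end

    # Generic element type: the matrix adopts whatever Real type comes in.
    function mat_fun_good(var::T) where {T}
        eye = Matrix{T}(I, 2, 2)
        eye[2, 2] = var
        return eye
    end

    mat_fun_good(Dualish(0.5))   # works: returns a Matrix{Dualish}
    # mat_fun_bad(Dualish(0.5))  # MethodError: no method matching Float64(::Dualish)
    ```

    The same reasoning explains why the answer's `Matrix{T}` version works with JuMP's `autodiff = true` registration: the duals flow through `obj` and `mat_fun` untouched instead of being forced into Float64 storage.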