Tags: r · optimization · mle · log-likelihood

How to add a constraint to a log-likelihood function?


I have a time series model (INGARCH):

lambda_t = alpha0 + alpha1*(X_(t-1)) + beta1*(lambda_(t-1))

X_t ~ Poisson(lambda_t)

where t indexes the observations (t = 1, ..., n), and alpha0, alpha1 and beta1 are the parameters.

X_t is the observed count series and lambda_t is the series of conditional means.

This model requires the stationarity condition alpha1 + beta1 < 1.
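
For concreteness, the recursion can be simulated directly. A minimal sketch (the parameter values here are arbitrary illustrations, chosen so that alpha1 + beta1 < 1):

```r
set.seed(1)
n      <- 400
alpha0 <- 2
alpha1 <- 0.3
beta1  <- 0.4                               # alpha1 + beta1 = 0.7 < 1

x      <- numeric(n)
lambda <- numeric(n)
lambda[1] <- alpha0 / (1 - alpha1 - beta1)  # start at the stationary mean
x[1]      <- rpois(1, lambda[1])
for (t in 2:n) {
  lambda[t] <- alpha0 + alpha1 * x[t - 1] + beta1 * lambda[t - 1]
  x[t]      <- rpois(1, lambda[t])
}
```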

For the estimation, I would like to add the condition alpha1 + beta1 < 1 to my code. I added a while loop inside the log-likelihood function, but the loop never stops.

What can I do to solve this problem? Is there another way to add the constraint alpha1 + beta1 < 1 without using a while loop?

Below is my code:

ll <- function(par) {
  h.new <- rep(0, n)
  #par[1] is alpha0
  #par[2] is alpha1
  #par[3] is beta1
  while (par[2] + par[3] < 1) {
    for (i in 2:n) {
      h.new[i] <- par[1] + par[2] * dat[i-1] + par[3] * h.new[i-1]
    }
    -sum(dpois(dat, h.new, log=TRUE))
  }
}

#simply generate a dataset, as I have not found a suitable real dataset to fit
set.seed(77)
n=400
dat <- rpois(n,36)

nlminb(start = c(0.1,0.1,0.1), lower = 1e-6, ll)

Solution

  • You do not change par at all inside the while loop. If you printed par[2] and par[3] inside the loop, you would see the original values, 0.1, printed endlessly: the condition never changes, so you are stuck in the while loop forever.

    par is a single, unchanging object in each call from nlminb. You just have to make sure that, when par violates the constraint, you return a value that cannot be the minimum, so nlminb does not keep searching in that direction:

    ll <- function(par) {
        #If alpha1 + beta1 >= 1, the constraint is violated: return an
        #infinite score so nlminb never accepts this region as the minimum.
        #The NaN checks guard the comparison; you could also throw an
        #error there instead of powering through.
        if (is.nan(par[2]) || is.nan(par[3]) || par[2] + par[3] >= 1) return(Inf)
        h.new <- rep(0, n)
        #no while loop needed
        for (i in 2:n) {
            h.new[i] <- par[1] + par[2] * dat[i-1] + par[3] * h.new[i-1]
        }
        #drop t = 1: h.new[1] = 0 is not produced by the recursion, and
        #dpois(dat[1], 0) = 0 would make the log-likelihood -Inf
        -sum(dpois(dat[-1], h.new[-1], log = TRUE))
    }
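
With that change, the nlminb call from the question runs to completion. A minimal end-to-end check (the simulated dat is iid Poisson(36), so the true alpha1 and beta1 are zero; this is a smoke test, not a meaningful fit):

```r
set.seed(77)
n   <- 400
dat <- rpois(n, 36)

ll <- function(par) {
  # return Inf outside the constraint region so it is never the minimum
  if (is.nan(par[2]) || is.nan(par[3]) || par[2] + par[3] >= 1) return(Inf)
  h.new <- rep(0, n)
  for (i in 2:n) {
    h.new[i] <- par[1] + par[2] * dat[i - 1] + par[3] * h.new[i - 1]
  }
  # drop t = 1, whose conditional mean is not defined by the recursion
  -sum(dpois(dat[-1], h.new[-1], log = TRUE))
}

fit <- nlminb(start = c(0.1, 0.1, 0.1), objective = ll, lower = 1e-6)
fit$par                      # estimated alpha0, alpha1, beta1
fit$par[2] + fit$par[3] < 1  # the constraint holds at the optimum
```

Since every parameter vector with alpha1 + beta1 >= 1 scores Inf, any finite optimum necessarily satisfies the constraint.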
    

    The nlminb algorithm (or any minimization routine) very roughly goes:

    1. Set parameters to initial guess
    2. Send the parameters to the objective function
    3. Guess new parameters:

      a. if the score has stopped improving, return the best guess found

      b. if the score is good, keep searching in this direction

      c. else, search in some other direction

    4. Go back to (2) with new parameters

    Note that you return one score for each set of parameters; you do not iterate over parameters inside the objective function.
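
The loop above can be sketched as a toy minimizer. This is an illustrative greedy random search, not what nlminb actually does internally (nlminb uses the PORT quasi-Newton routines), but it shows the same evaluate/compare/update cycle:

```r
# Toy greedy random search: evaluate, compare, keep improvements, repeat.
toy_minimize <- function(objective, start, iters = 2000, step = 0.1) {
  par   <- start
  score <- objective(par)                  # step 2: score the current guess
  for (i in seq_len(iters)) {              # steps 3-4: propose and compare
    candidate  <- par + rnorm(length(par), sd = step)
    cand_score <- objective(candidate)
    if (cand_score < score) {              # 3b: improvement, keep this guess
      par   <- candidate
      score <- cand_score
    }                                      # 3c: otherwise try elsewhere
  }
  list(par = par, objective = score)       # 3a: return the best guess found
}

# Minimize a simple quadratic with its minimum at (1, 2)
set.seed(42)
fit <- toy_minimize(function(p) (p[1] - 1)^2 + (p[2] - 2)^2, start = c(0, 0))
```

Note how the objective function is called once per candidate and returns a single score; all the iteration lives in the minimizer, which is why the while loop inside ll never belonged there.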