I'm really struggling with understanding MLE calculations in R.
If I have a random sample of size 6 from the Exp(λ)
distribution that results in the observations:
x <- c(1.636, 0.374, 0.534, 3.015, 0.932, 0.179)
I calculated the MLE as follows:
mean(x)
and got 1.111667 (I'm not 100% certain I did this part right).
But when I try to do the numerical calculation in R, I either get errors or an answer that doesn't match.
lik <- function(lam) prod(dexp(x)) # likelihood function
nlik <- function(lam) -lik(lam) # negative-likelihood function
optimize(nlik, x)
gives me
#$minimum
#[1] 3.014928
#
#$objective
#[1] -0.001268399
Originally I had
lik <-function(lam) prod(dexp(x, lambda=lam)) # likelihood function
nlik <- function(lam) -lik(lam) # negative-likelihood function
optim(par=1, nlik) # minimize nlik with starting parameter value=1
But I kept getting
#Error in dexp(x, lambda = lam) :
# unused argument (lambda = lam)
#In addition: Warning message:
#In optim(par = 1, nlik) :
# one-dimensional optimization by Nelder-Mead is unreliable:
#use "Brent" or optimize() directly
So here is your observation vector
x <- c(1.636, 0.374, 0.534, 3.015, 0.932, 0.179)
First, the error: dexp() has no argument called lambda; its rate parameter is named rate, which is why you got "unused argument (lambda = lam)". Second, I'm not sure why you minimize the negative likelihood directly; it is numerically more stable to work with the negative log-likelihood:
nllik <- function(lambda, obs) -sum(dexp(obs, rate = lambda, log = TRUE))
When using optimize, set a lower and an upper bound:
optimize(nllik, lower = 0, upper = 10, obs = x)
#$minimum
#[1] 0.8995461
#
#$objective
#[1] 6.635162
This agrees with the closed-form answer: for the Exp(λ) distribution the MLE is λ̂ = 1/x̄ = 1/1.111667 ≈ 0.8996. So your mean(x) was the MLE of the mean 1/λ, not of λ itself.
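You can verify this without any optimizer at all; here is a minimal check that the analytical MLE matches the numerical minimum (variable name lambda_hat is my own):

```r
x <- c(1.636, 0.374, 0.534, 3.015, 0.932, 0.179)

# Closed-form MLE for Exp(lambda): lambda_hat = n / sum(x) = 1 / mean(x)
lambda_hat <- 1 / mean(x)
lambda_hat  # ~0.8996, matching optimize()'s $minimum above
```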
optimize is the right tool here, since this is univariate optimization. If you want to use optim, set method = "Brent", which also requires finite lower and upper bounds. You can read Error in optim(): searching for global minimum for a univariate function for more.
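For completeness, a sketch of the optim route with Brent's method (the bounds 1e-6 and 10 are my own arbitrary choices; any interval containing the MLE works):

```r
x <- c(1.636, 0.374, 0.534, 3.015, 0.932, 0.179)
nllik <- function(lambda, obs) -sum(dexp(obs, rate = lambda, log = TRUE))

# Brent's method needs finite lower/upper bounds; extra arguments
# (obs = x) are forwarded to nllik through optim's ...
fit <- optim(par = 1, fn = nllik, obs = x,
             method = "Brent", lower = 1e-6, upper = 10)
fit$par  # ~0.8996, same as optimize()
```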