I am working on MLE and I want to optimize my log-likelihood function. I am using the code from this post: Maximum Likelihood Estimate pseudocode
I have a very specific doubt: I have yObs and yPred, but I am confused about how I should include yObs and yPred in my likelihood function, as done here:
logLik = -np.sum( stats.norm.logpdf(yObs, loc=yPred, scale=sd) )
My likelihood function only has x as its sample space and two unknown parameters. They have used a function called stats.norm.logpdf, but I am not using a normal distribution.
Thanks in advance.
Regards
Expanding on the comments.
You have the pdf K(x, mu, nu). I guess you have a sample of observations yObs, which I'll assume is an array, and another array yPred (note that the example you took this from uses simple linear regression to obtain yPred and is actually trying to find the regression parameters rather than the distribution parameters, although that answer overall looks odd).
If you are just trying to find the parameters that best fit your sample, then yPred is useless and you can write your negative log-likelihood (as a function of the two parameters) as:
logLik = lambda mu, nu: - np.sum(np.log(K(yObs, mu, nu)))
and then minimize it over mu and nu.
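For concreteness, here is a minimal runnable sketch of this first approach. The K below is only a stand-in (a gamma pdf parameterised by its mean mu and shape nu), and the data and starting values are placeholders; substitute your actual K and sample:

import numpy as np
from scipy import stats, optimize

# Stand-in for your pdf K(x, mu, nu): a gamma pdf parameterised by its
# mean mu and shape nu, purely for illustration -- replace with your own K.
def K(x, mu, nu):
    return stats.gamma.pdf(x, a=nu, scale=mu / nu)

yObs = np.array([1.2, 0.8, 2.1, 1.5, 0.9])  # placeholder sample

# Negative log-likelihood as a function of the parameter vector.
def negLogLik(params):
    mu, nu = params
    return -np.sum(np.log(K(yObs, mu, nu)))

# Minimize the negative log-likelihood; the bounds keep both parameters positive.
result = optimize.minimize(negLogLik, x0=[1.0, 1.0],
                           bounds=[(1e-6, None), (1e-6, None)],
                           method="L-BFGS-B")
print(result.x)  # MLE of (mu, nu)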
If you want to use code like that found in the post you reference, you need to change the function like this:
def regressLL(params):
    # Unpack the regression intercept, slope, and the distribution parameter.
    b0, b1, nu = params
    yPred = b0 + b1 * x  # a different mean for each observation
    # Sum the log-densities instead of taking log(prod(...)),
    # which underflows for all but the smallest samples.
    logLik = -np.sum(np.log(K(yObs, mu=yPred, nu=nu)))
    return logLik
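If you do go this route, the optimization call would look something like the following sketch, reusing the placeholder K from above; x, yObs, and the starting values are again assumptions, not your real data:

# Placeholder data: x is the predictor, yObs the observed response.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
yObs = np.array([1.1, 1.9, 2.8, 4.2, 4.9])

initParams = [1.0, 1.0, 1.0]  # arbitrary starting values for b0, b1, nu
result = optimize.minimize(regressLL, initParams, method="Nelder-Mead")
print(result.x)  # estimated (b0, b1, nu)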
Remember that in the second case your function K must be able to take an array for mu. I wouldn't suggest the second approach, since it uses a different mean for each observation in the sample, and in general I don't understand what it is trying to accomplish (it looks like it is trying to predict the mean from the observations in some messy way), but it might be a valid approach that I have simply never seen.