Tags: machine-learning, normal-distribution, expectations

Meaning of the Log-Likelihood in the Expectation-Maximization Algorithm


I have implemented the expectation-maximization (EM) algorithm; it converges and returns the values of mu and sigma correctly, which I have checked against various examples.

I have tried to plot the log-likelihood, but I don't know what it should look like when it is correct. The quantity I am computing is

    log L = sum_{i=1}^{n} log( sum_{j=1}^{k} p_j * N(x_i | mu_j, Sigma_j) )

[plot: log-likelihood value (y) against iteration number (x)]

The negative values seem very strange to me. Should I normalize the likelihood? What does the log-likelihood mean in expectation maximization?

logLikelihood = 0;
for i = 1 : n
    % Mixture density at data point x(i,:): sum over the k components
    logTemp = 0;
    for j = 1 : k
        logTemp = logTemp + p(j) * mvnpdf(x(i,:), mu(j,:), sigma(:,:,j));
    end
    % Accumulate the log of the per-point mixture density
    logLikelihood = logLikelihood + log(logTemp);
end
plot(iteration, logLikelihood, 'r*');
hold on;
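For comparison, here is a hedged Python sketch of the same per-iteration log-likelihood computation (the names `p`, `mu`, `sigma` mirror the MATLAB code above, and `scipy.stats.multivariate_normal` plays the role of `mvnpdf`; this is an illustrative equivalent, not the original implementation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_log_likelihood(x, p, mu, sigma):
    """Log-likelihood of data x under a k-component Gaussian mixture.

    x: (n, d) data, p: (k,) mixture weights, mu: (k, d) means,
    sigma: (k, d, d) covariance matrices.
    """
    n = x.shape[0]
    k = len(p)
    ll = 0.0
    for i in range(n):
        # Mixture density at x[i]: sum_j p_j * N(x_i | mu_j, Sigma_j)
        density = sum(p[j] * multivariate_normal.pdf(x[i], mu[j], sigma[j])
                      for j in range(k))
        ll += np.log(density)
    return ll
```

For a correctly working EM implementation, this value should never decrease from one iteration to the next, and it will typically be negative, since most per-point densities are below 1.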

Solution

  • The log probability will always be negative, since it is the logarithm of a probability (p < 1). Your probabilities (not the log-probabilities) being on the order of p = 10^-1000 is normal. For example, even the most probable sequence of 10,000 biased dice rolls has a vanishingly small probability (the other sequences are just even more unlikely).

    The log-probability is a construct to avoid vanishing probabilities: without it, the program would round them off to zero, and once that happens it most likely breaks down, for example by dividing by a normalization constant that is zero.
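To see the scale of the numbers involved, here is a small Python demonstration of the underflow problem and of the standard log-space fix using `scipy.special.logsumexp` (the specific weight values are made up for illustration):

```python
import numpy as np
from scipy.special import logsumexp

# 10,000 i.i.d. events, each with probability 0.5.
log_p_single = np.log(0.5)
n = 10_000

# Naive product underflows: 0.5**10000 rounds to exactly 0.0,
# so any later division by it would fail.
naive = 0.5 ** n

# Summing logs keeps the value representable (about -6931.47).
log_prob = n * log_p_single

# The same idea rescues mixture responsibilities: normalize in log
# space with logsumexp instead of dividing raw probabilities. Here
# the second component is twice as likely as the first.
log_weights = np.array([log_prob, log_prob + np.log(2.0)])
responsibilities = np.exp(log_weights - logsumexp(log_weights))
```

Even though both raw weights would round to zero as plain floats, the log-space normalization still recovers the correct ratio of 1/3 to 2/3.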