machine-learning, mixture-model, expectation-maximization

Handling zero rows/columns in covariance matrix during em-algorithm


I tried to implement GMMs but I ran into a few problems during the EM algorithm.

Let's say I've got 3D samples (stat1, stat2, stat3) which I use to train the GMMs.

One of my training sets for one of the GMMs has a "0" for stat1 in nearly every sample. During training I get really small numbers (like "1.4456539880060609E-124") in the first row and column of the covariance matrix, which in the next iteration of the EM algorithm leads to 0.0 in the first row and column.

I get something like this:

0.0 0.0 0.0
0.0 5.0 6.0
0.0 2.0 1.0

I need the inverse of the covariance matrix to calculate the density, but since one row/column is zero I can't compute it.
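For concreteness, the failure can be reproduced with a small NumPy sketch (NumPy is an assumption here; the question does not name a language or library):

```python
import numpy as np

# Covariance matrix with a zero first row and column, as in the question
cov = np.array([
    [0.0, 0.0, 0.0],
    [0.0, 5.0, 6.0],
    [0.0, 2.0, 1.0],
])

# A zero row/column makes the determinant zero, so the matrix is singular
print(np.linalg.det(cov))  # 0.0

# Attempting to invert it raises LinAlgError("Singular matrix")
try:
    np.linalg.inv(cov)
except np.linalg.LinAlgError as err:
    print("inversion fails:", err)
```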

I thought about falling back to the old covariance matrix (and mean), or replacing every 0 with a really small number.

Or is there another simple solution to this problem?


Solution

  • Simply put, your data lies in a degenerate subspace of your actual input space, and a GMM in its most generic form is not well suited to such a setting. The problem is that the empirical covariance estimator you use simply fails for such data (as you said, you cannot invert it). What do you usually do? You change the covariance estimator to a constrained/regularized one, such as:

    • Constant-based shrinkage: instead of using Sigma = Cov(X) you use Sigma = Cov(X) + eps * I, where eps is a predefined small constant and I is the identity matrix. Consequently you never have zero values on the diagonal, and it is easy to prove that for a reasonable epsilon this matrix is invertible.
    • Data-fitted shrinkage, like the Oracle Approximating Shrinkage (OAS) or Ledoit-Wolf covariance estimators, which find the best amount of shrinkage based on the data itself.
    • Constraining your Gaussians to, for example, the spherical family, N(m, sigma * I), where sigma = avg_i(var(X[:, i])) is the mean variance per dimension. This limits you to spherical Gaussians, but also solves the above issue.

    There are many more possible solutions, but all are based on the same idea: change the covariance estimator in such a way that you have a guarantee of invertibility.
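    A minimal sketch of the three options above, assuming NumPy and scikit-learn (which provides `sklearn.covariance.LedoitWolf` and `OAS`); the synthetic data with a constant first dimension mimics the question's degenerate stat1:

    ```python
    import numpy as np
    from sklearn.covariance import OAS, LedoitWolf

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    X[:, 0] = 0.0  # degenerate first dimension, as in the question

    emp = np.cov(X, rowvar=False)  # singular: zero first row/column

    # 1) Constant shrinkage: Sigma = Cov(X) + eps * I
    eps = 1e-6
    shrunk = emp + eps * np.eye(3)
    np.linalg.inv(shrunk)          # now invertible

    # 2) Data-fitted shrinkage: Ledoit-Wolf / OAS pick the amount themselves
    lw = LedoitWolf().fit(X).covariance_
    oas = OAS().fit(X).covariance_
    np.linalg.inv(lw)
    np.linalg.inv(oas)

    # 3) Spherical constraint: one variance, the mean over dimensions
    sigma2 = np.mean(np.var(X, axis=0))
    spherical = sigma2 * np.eye(3)
    np.linalg.inv(spherical)       # invertible as long as sigma2 > 0
    ```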