Tags: python, math, determinants

Math error computing log(det(AA^T)+1) in python


I am trying to estimate the mean value of log(det(AA^T)+1) in Python. My simple code works fine until I get to 17×17 matrices, at which point it gives me a math error. Here is the code:

import math
import numpy as np

iter = 10000
for n in xrange(1,20):
    h = n
    dets = []
    for _ in xrange(iter):
        A = (np.random.randint(2, size=(h,n)))*2-1
        detA_Atranspose = np.linalg.det(np.dot(A, A.transpose()))
        try:
            logdetA_Atranspose = math.log(detA_Atranspose+1,2)
        except ValueError:
            print "Ooops!", n, detA_Atranspose
        dets.append(logdetA_Atranspose)
    print np.mean(dets)

A is supposed to be a matrix with elements that are either -1 or 1.

What am I doing wrong and how can it be fixed? What is special about 17?


Solution

  • For the formula in the title (previously log(det(AA^T)), without the +1):

    det(AA^T) can simply be 0 for some random As. math.log would then fail, because log(0) is undefined.

    Note that in theory det(AA^T) cannot be negative, since AA^T is a positive semi-definite matrix (all of its eigenvalues are non-negative, which implies det >= 0). Here, with h = n, A is square, so det(AA^T) = det(A)^2 is even a perfect square.

  • For the formula in the code ( log(det(AA^T)+1) ):

    The +1 does not protect you here, because of floating point. For a singular AA^T the exact determinant is 0, but np.linalg.det computes it via a factorization whose rounding error grows with the size of the matrix and of its entries. Around n = 17 the determinants involved are large enough that this error can be negative with magnitude greater than 1, so detA_Atranspose + 1 drops below zero and math.log raises the ValueError; that is what is special about 17 (see the short demonstration at the end of this answer).

    You should probably use numpy.linalg.slogdet() and compute the log-determinant of A.dot(A.T) directly; a sketch of the rewritten loop follows the quoted documentation below. (Note that the +1 in your formula is added to the determinant, not to the matrix: an expression such as 1 + A.dot(A.T) would add 1 to every entry of the matrix and compute a different quantity.)

    From its documentation:

    "Compute the sign and (natural) logarithm of the determinant of an array.

    If an array has a very small or very large determinant, then a call to det may overflow or underflow. This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself."