
Julia: Linking LAPACK 2.0 on Linux


I am using the eigs() function in Julia to compute eigenvalues and eigenvectors. The results are non-deterministic and often full of 0.0. A temporary workaround is supposed to be linking LAPACK 2.0.

Any idea how to do this on Ubuntu Linux? So far I have not been able to link it, and I do not have advanced Linux administration skills, so it would be great if someone could post a guide on how to link it correctly.

Thanks a lot.

Edit:

I wanted to add results, but I noticed one flaw in my code. I was using matrix = sparse(map(collect,zip([triple(e,"weight") for e in edges(g)]...))..., num_vertices(g), num_vertices(g)). It was an answer from you to one of my earlier questions. It works fine when the vertices are indexed from 1, but my vertices have arbitrary indexes because they are read from a file. So I set num_vertices to the largest index. What I had not noticed is that the computation then treats the graph as having, say, 1000 vertices whenever the maximum index is 1000, even though the whole graph might consist of only the 3 vertices 1, 10, and 1000. Any idea how to fix it?
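One way to fix this (a sketch, not from the answer below — the variable names and example IDs are illustrative) is to remap the arbitrary vertex IDs to consecutive indices 1:n with a Dict before building the sparse matrix:

```julia
using SparseArrays

# Example: the file contains only the vertices 1, 10 and 1000.
ids = [1, 10, 1000]
index = Dict(id => k for (k, id) in enumerate(sort(ids)))  # 1=>1, 10=>2, 1000=>3

# An edge (1, 1000) with weight 14.0 becomes entry (1, 3) of a 3x3 matrix,
# instead of entry (1, 1000) of a needlessly large 1000x1000 one.
n = length(ids)
matrix = sparse([index[1]], [index[1000]], [14.0], n, n)
```

The same Dict lookup would be applied to every edge's endpoints when building the full row/column/value arrays.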

Edit 2:

#Content of matrix = matrix+matrix'
[2, 1]  =  10.0
[3, 1]  =  14.0
[1, 2]  =  10.0
[3, 2]  =  10.0
[5, 2]  =  2.0
[1, 3]  =  14.0
[2, 3]  =  10.0
[4, 3]  =  20.0
[5, 3]  =  20.0
[3, 4]  =  20.0
[2, 5]  =  2.0
[3, 5]  =  20.0
[6, 5]  =  10.0
[5, 6]  =  10.0   

matrix = matrix+matrix'
(d, v) = eigs(matrix, nev=1, which=:LR, maxiter=1)

5 executions of code above:

[-0.3483956604402672
 -0.3084333257587648
 -0.6697046040724708
 -0.37450798643794125
 -0.4249810113292739
 -0.11882760090004019]

[0.3483956604402674
 0.308433325758765
 0.6697046040724703
 0.3745079864379416
 0.424981011329274
 0.11882760090004027]

[-0.3483956604402673
 -0.308433325758765
 -0.669704604072471
 -0.37450798643794114
 -0.4249810113292739
 -0.1188276009000403]

[0.34839566044026726
 0.30843332575876503
 0.6697046040724703
 0.37450798643794114
 0.4249810113292739
 0.11882760090004038]

[0.34839566044026715
 0.30843332575876503
 0.6697046040724708
 0.3745079864379412
 0.4249810113292738
 0.11882760090004038]

Solution

  • The algorithm is indeed non-deterministic (as the example in the question shows). But there are two kinds of non-determinism in the results:

    1. the complete sign reversals of the eigenvector.
    2. small accuracy errors.

    If a vector is an eigenvector, so is every non-zero scalar multiple of it (mathematically, an eigenvector belongs to a subspace of eigenvectors associated with an eigenvalue). Thus, if v is an eigenvector, so is cv for any non-zero scalar c. When c = -1 this is the sign reversal, but 2v is also an eigenvector. The eigs function normalizes the vectors to norm 1, so the only freedom left is this sign reversal. To remove this non-determinism, you can choose a sign for the first non-zero coordinate of the vector (say, positive) and multiply the eigenvector to make it so. In code:

    v = v*sign(v[findfirst(v)])             # on Julia >= 0.7: v*sign(v[findfirst(!iszero, v)])
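
    A self-contained sketch of the whole idea (an assumption on my part: it rebuilds the matrix from Edit 2 and uses the dense eigen from the standard library instead of eigs, so it runs without ARPACK; the largest eigenvalue plays the role of which=:LR):

    ```julia
    using LinearAlgebra, SparseArrays

    # Rebuild the symmetric matrix from Edit 2 of the question.
    rows = [2, 3, 1, 3, 5, 1, 2, 4, 5, 3, 2, 3, 6, 5]
    cols = [1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 5, 5, 5, 6]
    vals = [10.0, 14.0, 10.0, 10.0, 2.0, 14.0, 10.0, 20.0, 20.0, 20.0, 2.0, 20.0, 10.0, 10.0]
    A = Matrix(sparse(rows, cols, vals, 6, 6))

    # eigen on a Symmetric matrix returns real eigenvalues in ascending
    # order, so the eigenvector of the largest eigenvalue is the last column.
    F = eigen(Symmetric(A))
    v = F.vectors[:, end]

    # Fix the sign: make the first non-zero coordinate positive.
    v *= sign(v[findfirst(!iszero, v)])
    ```

    After the sign fix, repeated runs agree up to the small floating-point noise discussed next.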
    

    Regarding the second source of non-determinism (inaccuracies), it is important to note that the true eigenvalues and eigenvectors are often real numbers which cannot be represented exactly as Float64, so the returned values are always slightly off. If the required level of accuracy is low enough, rounding the values deterministically will make the resulting approximations identical. If this is not clear, consider an algorithm for calculating sqrt(2): it might non-deterministically return 1.4142135623730951 or 1.4142135623730949, but rounding to 5 decimal places always yields 1.41421.
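
    In code, on a recent Julia (the digits keyword of round exists from 0.7 on; the vector values are taken from the runs in the question), the rounding idea looks like this:

    ```julia
    # Two non-deterministic results differing only in the last bits:
    a = 1.4142135623730951
    b = 1.4142135623730949

    # Rounding deterministically to 5 decimal places collapses the difference.
    same = round(a, digits=5) == round(b, digits=5)

    # The same element-wise for an eigenvector (after the sign fix above):
    v = [0.3483956604402672, 0.6697046040724708]
    v_rounded = round.(v, digits=5)
    ```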

    The above should provide a guide to making the results more deterministic. But consider:

    1. If an eigenvalue is repeated, the corresponding subspace of eigenvectors has more than one dimension, so there is more freedom in choosing an eigenvector. This can make finding a deterministic vector (or set of vectors) to span this space more intricate.

    2. Does the application really require this determinism?

    (Thanks for the code bits - they do help, and even more so when they can be quickly cut-and-pasted.)