
scipy.sparse.linalg: what's the difference between splu and factorized?


What's the difference between using

 scipy.sparse.linalg.factorized(A)

and

 scipy.sparse.linalg.splu(A)

Both of them return objects with a .solve(rhs) method, and the documentation says both use LU decomposition. I'd like to know the difference in performance between them.

More specifically, I'm writing a Python/NumPy/SciPy app that implements a dynamic FEM model. I need to solve an equation Au = f on each timestep. A is sparse and rather large, but doesn't depend on the timestep, so I'd like to invest some time beforehand to make the iterations faster (there may be thousands of them). I tried using scipy.sparse.linalg.inv(A), but it threw memory exceptions when the matrix was large. Until recently I used scipy.sparse.linalg.spsolve on each step, and now I'm thinking of using some sort of decomposition for better performance. So if you have suggestions other than LU, feel free to propose them!


Solution

  • They should both work well for your problem, assuming that A does not change with each time step.

    scipy.sparse.linalg.inv(A) computes the full inverse, which for a sparse A is generally dense and the same size as A, so it's no wonder it's throwing memory exceptions.

    scipy.linalg.solve is also a dense linear solver, which isn't what you want.

    Assuming A is sparse, if you only need to solve Au = f once, you can use scipy.sparse.linalg.spsolve, which works directly on the sparse matrix. For example

    from scipy.sparse.linalg import spsolve

    u = spsolve(A, f)
    

    If you want to speed things up dramatically for subsequent solves, you would instead use scipy.sparse.linalg.factorized or scipy.sparse.linalg.splu. For example

    from scipy.sparse.linalg import splu

    lu = splu(A)                # factor A once (A should be in CSC format)
    for t in range(iterations):
        u_t = lu.solve(f_t)
    

    or

    from scipy.sparse.linalg import factorized

    solve = factorized(A)       # factor A once; returns a callable
    for t in range(iterations):
        u_t = solve(f_t)
    

    They should both be comparable in speed, and much faster than the previous options.
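    To see the whole pattern end to end, here is a self-contained sketch; the 3×3 matrix and right-hand side are made up purely for illustration (your FEM matrix would go in their place):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import factorized, splu, spsolve

# Toy sparse system standing in for the FEM matrix A and load vector f.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 4.0, 1.0],
                         [0.0, 1.0, 4.0]]))
f = np.array([1.0, 2.0, 3.0])

lu = splu(A)            # SuperLU factorization, reusable via .solve
solve = factorized(A)   # callable wrapping a factorization of A

u1 = lu.solve(f)
u2 = solve(f)
u3 = spsolve(A, f)      # one-shot solve: refactors A on every call

# All three agree to floating-point precision.
assert np.allclose(u1, u2) and np.allclose(u1, u3)
assert np.allclose(A @ u1, f)
```

    With thousands of timesteps, the factor-once approaches avoid redoing the expensive LU decomposition inside the loop; only the cheap triangular solves are repeated.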

    As @sascha said, you will need to dig into the documentation to see the differences between splu and factorized. But you can use UMFPACK instead of the default SuperLU if you have it installed and set up correctly (e.g. via the scikit-umfpack package); I think UMFPACK will be faster in most cases. Keep in mind that if your matrix A is too large or has too many non-zeros, an LU decomposition / direct solver may take too much memory on your system. In that case, you might be stuck with an iterative solver such as those in scipy.sparse.linalg (e.g. cg, gmres, or bicgstab). Unfortunately, you won't be able to reuse a factorization of A at each time step, but you might be able to find a good preconditioner for A (an approximation to inv(A)) to feed the solver and speed it up.
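    A common way to build such a preconditioner is an incomplete LU factorization, which is much cheaper and sparser than a full splu. A minimal sketch (same made-up 3×3 matrix as above; cg assumes A is symmetric positive definite, otherwise use gmres or bicgstab):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, cg, spilu

# Toy symmetric positive definite system for illustration.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 4.0, 1.0],
                         [0.0, 1.0, 4.0]]))
f = np.array([1.0, 2.0, 3.0])

# Incomplete LU: a cheap approximation to inv(A), applied via .solve.
ilu = spilu(A)
M = LinearOperator(A.shape, matvec=ilu.solve)

u, info = cg(A, f, M=M)   # info == 0 means the solver converged

assert info == 0
assert np.allclose(A @ u, f)
```

    The ILU factorization, like a full one, is computed once and reused; the iterative solver then only needs a handful of iterations per timestep instead of starting from scratch.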