I need some help with this problem. I want to solve Ax = b, where A is an n x n (square) matrix and b is an n x 1 vector.
But the matrix A has these properties:

+ Ill-conditioned (κ ≫ 1, maybe larger than 10^8)
+ Symmetric positive definite (because it is a covariance matrix)
I already tried the Jacobi method, but unfortunately it converges very slowly. I want to avoid Cholesky decomposition.
I also tried Conjugate Gradient, but unfortunately when the condition number of A is too large, it fails to converge.
Update: I need a method that can run in a parallel framework (like MPI), so I can't use Gauss–Seidel, which needs x[i] from the current iteration.
What kind of method can I use for this kind of problem? Thanks :)
I'm going to take a guess that your troubles arise from an inaccurate computation of a matrix-vector product. (I've never seen conjugate gradient completely fail to reduce the residual unless the matrix-vector product was poor. The first iteration after a restart just does steepest descent.)
You might try running conjugate gradient again, but using extended precision or Kahan summation or something when computing the matrix-vector product.
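As a minimal sketch of the Kahan-summation idea (the function name and the use of NumPy here are my own assumptions, not part of your setup), each row's dot product can be accumulated with a compensation term that recovers the low-order bits lost to roundoff:

```python
import numpy as np

def kahan_matvec(A, x):
    """Compute A @ x, accumulating each row's dot product with
    Kahan (compensated) summation to reduce roundoff error."""
    n = A.shape[0]
    y = np.zeros(n)
    for i in range(n):
        s = 0.0  # running sum for row i
        c = 0.0  # running compensation for lost low-order bits
        for a, xj in zip(A[i], x):
            term = a * xj - c
            t = s + term
            c = (t - s) - term  # what was lost when adding term to s
            s = t
        y[i] = s
    return y
```

If you end up using a library CG such as `scipy.sparse.linalg.cg`, you can plug a custom product like this in through a `scipy.sparse.linalg.LinearOperator` rather than passing the matrix directly.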
Alternatively, if your matrix has some known structure, you might try to find a different way of writing the matrix-vector product that reduces roundoff in the computed result. If I could see your matrix, I might be able to give more concrete suggestions here.
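One hypothetical instance of this, assuming your covariance matrix is built from an m x n data matrix X (you haven't said how A is formed, so this is a guess): instead of forming A = X_cᵀX_c/(m-1) explicitly and multiplying by it, apply the two skinny factors in sequence. Since the condition number of XᵀX is the square of that of X, working with X directly tends to be much better behaved numerically:

```python
import numpy as np

def cov_matvec(X, v):
    """Apply the sample covariance A = X_c.T @ X_c / (m - 1) to v
    without ever forming A, where X_c is the column-centered data.
    (X and the function name are hypothetical placeholders.)"""
    m = X.shape[0]
    Xc = X - X.mean(axis=0)          # center each column of the data
    return Xc.T @ (Xc @ v) / (m - 1) # two well-conditioned matvecs
```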