Tags: matlab, symbolic-math

Calculate roots of characteristic equation of symbolic matrix


I have three data matrices A, B, C (all 3x3), and I use the following code to find the values of p for which D(p)X = 0 has a non-trivial solution, i.e. the roots of det(D(p)) = 0:

syms p
D = A + B*p + C*(p^2)
solP = double(solve(det(D)))

From this I get 6 values in solP. But when I substitute them back into the symbolic matrix D, as follows, det(D) is sometimes far from zero:

for i = 1:6
  p = solP(i);
  det(double(subs(D)))  % should be zero, since we are substituting roots
end

Please help me understand this behaviour.

EDIT: Example:

A =

   1.0e+11 *

    4.8976    7.0936    6.7970
    4.4559    7.5469    6.5510
    6.4631    2.7603    1.6261

B =

   1.0e+11 *

    3.9223    7.0605    0.4617
    6.5548    0.3183    0.9713
    1.7119    2.7692    8.2346

C =

   1.0e+11 *

    6.9483    0.3445    7.6552
    3.1710    4.3874    7.9520
    9.5022    3.8156    1.8687

solP =

    0.1061 + 0.0000i
    1.5311 + 0.0000i
   -0.3432 + 0.9356i
   -0.3432 - 0.9356i
    0.4228 - 0.5465i
    0.4228 + 0.5465i

det(D) =

   2.2143e+19
  -5.4911e+20
  -8.6415e+19 + 4.5024e+19i
  -8.6415e+19 - 4.5024e+19i
  -1.4547e+19 + 9.1135e+19i
  -1.4547e+19 - 9.1135e+19i

Solution

  • The problem is the relative accuracy of double-precision floating-point values, which is about 1e-16 (machine epsilon is 2.2204e-16).

    The input matrices are on the order of 1e+11 to 1e+12 and the solutions are on the order of 1e+0, so the elements of D are also on the order of 1e+11 to 1e+12. The determinant of a 3x3 matrix is a sum and difference of products of three matrix elements, so each term is on the order of 1e+33 to 1e+36. When you add and subtract values of that magnitude, the best absolute accuracy you can expect is on the order of 1e+17 to 1e+20. This matches the det(D) values you get: relative to the size of the terms involved, they are as close to zero as double precision allows.
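    You can see this accuracy floor directly with MATLAB's eps function, which returns the spacing between adjacent doubles at a given magnitude (a minimal sketch; the 1e+36 figure comes from the term-size estimate above):

    eps(1)      % relative accuracy of doubles: 2.2204e-16
    eps(1e36)   % spacing of doubles near 1e+36: on the order of 1e+20

    So any result computed by cancelling terms of size ~1e+36 cannot be distinguished from zero below roughly 1e+20.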

    Note that if you scale your input matrices, i.e. divide them by 1e+11, the solutions are the same, but the determinants come out much closer to the zero you expect.
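    A minimal sketch of that scaling, assuming the matrices A, B, C from the question (the factor 1e11 is chosen to match their magnitude):

    s = 1e11;                 % scale factor matching the magnitude of A, B, C
    syms p
    Ds = (A/s) + (B/s)*p + (C/s)*(p^2);
    solPs = double(solve(det(Ds)));          % same roots as before
    for i = 1:6
      det(double(subs(Ds, p, solPs(i))))     % residuals now near machine precision
    end

    Scaling D by 1/s scales det(D) by 1/s^3, so the same absolute error of ~1e+19 shrinks to ~1e-14. Alternatively, polyeig(A, B, C) solves the quadratic eigenvalue problem (A + B*p + C*p^2)x = 0 numerically and avoids the symbolic determinant entirely.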