As I was bored, I checked the theorem on the stationary distribution of a Markov chain via its transition matrix. So I defined a simple one, e.g.:
>> T=[0.5 0.5 0; 0.5 0 0.5; 0.2 0.4 0.4];
The theorem says that if you raise the transition matrix to a very high power, you converge to the stationary matrix, which has the stationary distribution in every row. So let's try:
>> T^1000
ans =
0.4211 0.3158 0.2632
0.4211 0.3158 0.2632
0.4211 0.3158 0.2632
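As a cross-check, those rows match the stationary distribution computed directly as the left eigenvector of T for eigenvalue 1 (a minimal sketch, nothing beyond core MATLAB assumed):
>> [V, D] = eig(T');              % left eigenvectors of T are eigenvectors of T'
>> [~, k] = max(real(diag(D)));   % pick the eigenvalue closest to 1
>> p = V(:, k) / sum(V(:, k));    % normalize so the entries sum to 1
>> p'
ans =
0.4211 0.3158 0.2632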
Everything good so far. Let's go on:
>> T^1000000000
ans =
0.4211 0.3158 0.2632
0.4211 0.3158 0.2632
0.4211 0.3158 0.2632
OK... fine. Let's add just one more zero:
>> T^10000000000
ans =
0.4210 0.3158 0.2632
0.4210 0.3158 0.2632
0.4210 0.3158 0.2632
??? Something has changed... Let's try more:
>> T^10000000000000000
ans =
1.0e-03 *
0.5387 0.4040 0.3367
0.5387 0.4040 0.3367
0.5387 0.4040 0.3367
What is going on here? Even the row sums are no longer 1.
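Summing one of those displayed rows gives 1.0e-03 * (0.5387 + 0.4040 + 0.3367) ≈ 0.0013, and indeed
>> sum(T^10000000000000000, 2)
returns roughly 0.0013 in every row, nowhere near 1. Let's push it one step further: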
>> T^10000000000000000000
ans =
0 0 0
0 0 0
0 0 0
Aaaand it's gone.
I tried this with R2011a. I guess there is some fancy algorithm in the background that approximates these high matrix powers. But how can this happen? Which algorithm is that fast on such calculations, and what makes it misbehave in extreme situations like this?
It may use an eigendecomposition, which would explain that kind of speed.
The real computational load is the decomposition itself; once you have T = V*D*inv(V) with D diagonal, then T^n = V*D^n*inv(V), and D^n just means raising each eigenvalue to the n-th power. That would also account for why partitioning the calculation into smaller powers, like repeated squaring, takes much more time. And it explains the decay: the dominant eigenvalue of a stochastic matrix is exactly 1 in theory, but in floating point it comes out as 1 - delta for some tiny delta, and (1 - delta)^n ≈ exp(-n*delta) falls away from 1 toward 0 once n reaches the order of 1/delta ≈ 1e16.
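If that is what happens, the breakdown above is easy to reproduce by hand. A minimal sketch under that assumption (I haven't checked what mpower actually does internally, so this is a guess, not the confirmed implementation):

T = [0.5 0.5 0; 0.5 0 0.5; 0.2 0.4 0.4];
[V, D] = eig(T);              % T = V*D/V with D diagonal
lam = diag(D);                % in theory the largest eigenvalue is exactly 1;
                              % in floating point it is 1 - delta for a tiny delta
n = 1e16;
Tn = V * diag(lam.^n) / V     % T^n via the decomposition: only the eigenvalues are powered

The subdominant eigenvalues (about 0.34 and -0.44 for this T) underflow to 0 immediately, so only the (1 - delta)^n ≈ exp(-n*delta) term survives. It stays indistinguishable from 1 while n is small, drops into the 1e-3 range around n ≈ 1e16, which matches the output above, and has underflowed to exactly 0 long before n = 1e19.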