I'm doing homework for a Monte Carlo course, and I'm asked to find the transition matrix of a Markov chain with 6 states, namely 0, 1, 2, 3, 4, 5, such that after a long enough period of time the chain has spent time in each state proportional to the numbers 5, 10, 5, 10, 25, 60.
I see that this is the stationary vector that we would get if we had the transition matrix. I have to use the Metropolis algorithm, but all the explanations and examples I find are based on the Metropolis-Hastings algorithm.
Pseudocode of the algorithm that I have:
select x
loop over repetitions t = 1, 2, ...
    select y from N(x) using density G(x)
    put h = min(1, f(y) / f(x))
    draw U ~ U(0, 1); if U < h then x <- y
end loop
I'm looking for a step-by-step explanation of how to implement the algorithm for the given problem, preferably in Python!
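To make the question concrete, here is my direct translation of the pseudocode into Python as I understand it. The choice of N(x) as all six states with G(x) uniform over them is my guess, not something from the course; as I understand it, a uniform proposal is symmetric, so the Hastings correction disappears and the plain Metropolis ratio f(y)/f(x) applies:

```python
import random

# Target weights: the chain should spend time in state i proportional to f[i].
# Only the ratios f(y)/f(x) matter, so no normalization is needed.
f = [5, 10, 5, 10, 25, 60]

def metropolis(n_steps, seed=42):
    rng = random.Random(seed)
    x = 0                          # arbitrary starting state
    counts = [0] * len(f)
    for _ in range(n_steps):
        y = rng.randrange(len(f))  # propose y uniformly over all states (symmetric G)
        h = min(1.0, f[y] / f[x])  # Metropolis acceptance probability
        if rng.random() < h:       # accept: move to y; otherwise stay at x
            x = y
        counts[x] += 1             # either way, the current state counts as time spent
    return counts

counts = metropolis(200_000)
total = sum(counts)
print([round(c / total, 3) for c in counts])
# should be close to [5, 10, 5, 10, 25, 60] / 115
# i.e. roughly [0.043, 0.087, 0.043, 0.087, 0.217, 0.522]
```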
The standard approach to computing the stationary distribution of a Markov Chain is to solve a system of linear equations, e.g. as described here: https://stephens999.github.io/fiveMinuteStats/stationary_distribution.html. Your problem is the same in reverse: solve the same equations, except that in your case you have the stationary distribution but not the transition probabilities/rates.
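For the forward direction, a minimal numpy sketch of that computation could look like this (the transition matrix P is just an illustrative example, not derived from your target distribution):

```python
import numpy as np

# An example transition matrix (each row sums to 1).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# The stationary distribution satisfies pi @ P = pi with sum(pi) == 1.
# Stack the balance equations (P.T - I) pi = 0 with the normalization row.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # -> [0.25, 0.5, 0.25] for this particular P
```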
The problem with this approach, however, is that the system of linear equations has many more variables (the transition probabilities) than equations. To pin down a solution you have to fix most of those variables yourself, which severely restricts the topology of the constructed Markov Chain. Fortunately, you seem to have no constraints on that topology, so you can make some compromises: disable most transitions, i.e. give them zero probability/rate, and enable only one transition per state. This may produce some kind of a ring topology, but it ensures that your system of linear equations has a solution.
Consider the stationary distribution Pi = (x = 1/3, y = 1/3, z = 1/3).
Construct your system of linear equations as:
Pi(x) = 1/3 = Pr(y,x) * Pi(y)
Pi(y) = 1/3 = Pr(z,y) * Pi(z)
Pi(z) = 1/3 = Pr(x,z) * Pi(x)
In this case a solution is Pr(y,x) = Pr(z,y) = Pr(x,z) = 1, and the obtained Markov Chain just boringly loops from x to z to y and back to x with probability 1. Note that the number of fitting solutions may be infinite (even for the reduced system of linear equations shown in the example); e.g. in the continuous-time (rate) formulation of this case, the rates can be any positive value as long as they are all equal.
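As a sanity check, the same linear-equations trick from above recovers the uniform stationary distribution of that three-state ring (ordering the states as x, y, z is my convention):

```python
import numpy as np

# Deterministic ring x -> z -> y -> x; row i gives the next-state
# distribution from state i, with states ordered (x, y, z).
P = np.array([[0., 0., 1.],   # x -> z
              [1., 0., 0.],   # y -> x
              [0., 1., 0.]])  # z -> y

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # -> [1/3, 1/3, 1/3], matching Pi from the example
```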