
Vectorizing a Monte Carlo simulation in Python


I've recently been working on some Python code to simulate a two-dimensional U(1) gauge theory using Monte Carlo methods. Essentially I have an n by n by 2 array (call it Link) of unitary complex numbers (their magnitude is one). I randomly select an element of my Link array and propose a random change to the number at that site. I then compute the resulting change in the action that would occur due to that change, and accept the change with probability min(1, exp(-dS)), where dS is the change in the action. The code for the iterator is as follows:

def iteration(j1,B0):
    global Link
    Staple = np.zeros((2),dtype=complex)
    for i0 in range(0,j1):
        # Pick a random site and a random link direction (0 or 1).
        x1 = np.random.randint(0,n)
        y1 = np.random.randint(0,n)
        u1 = np.random.randint(0,2)   # upper bound is exclusive; randint(0,1) would always return 0
        # Shifted copies of the lattice, giving the neighbouring links.
        Linkrxp1 = np.roll(Link,-1, axis = 0)
        Linkrxn1 = np.roll(Link, 1, axis = 0)
        Linkrtp1 = np.roll(Link, -1, axis = 1)
        Linkrtn1 = np.roll(Link, 1, axis = 1)
        Linkrxp1tn1 = np.roll(np.roll(Link, -1, axis = 0),1, axis = 1)
        Linkrxn1tp1 = np.roll(np.roll(Link, 1, axis = 0),-1, axis = 1)

        # Sum of the two staples attached to the link in each direction at (x1, y1).
        Staple[0] = (Linkrxp1[x1,y1,1]*Linkrtp1[x1,y1,0].conj()*Link[x1,y1,1].conj()
                     + Linkrxp1tn1[x1,y1,1].conj()*Linkrtn1[x1,y1,0].conj()*Linkrtn1[x1,y1,1])
        Staple[1] = (Linkrtp1[x1,y1,0]*Linkrxp1[x1,y1,1].conj()*Link[x1,y1,0].conj()
                     + Linkrxn1tp1[x1,y1,0].conj()*Linkrxn1[x1,y1,1].conj()*Linkrxn1[x1,y1,0])

        # Propose a new link value and compute the resulting change in the action.
        uni = unitary()
        Linkprop = uni*Link[x1,y1,u1]
        dE3 = (Linkprop - Link[x1,y1,u1])*Staple[u1]
        dE1 = B0*np.real(dE3)

        # Metropolis accept/reject with probability min(1, exp(dE1)).
        d = np.random.uniform(low=0,high=1)
        if d < np.minimum(np.exp(dE1),1):
            Link[x1,y1,u1] = Linkprop

At the beginning of the program I call a routine called "randommatrix" to generate K random unitary complex numbers with small imaginary parts (together with their complex conjugates, for 2K entries in total) and store them in an array called Cnum. In the same routine I also go through my Link array and set each element to a random unitary complex number. The code is listed below.

def randommatrix():
    global Cnum
    global Link
    # Fill Cnum with K random phases and their complex conjugates (2*K entries).
    for i1 in range(0,K):
        C1 = np.random.normal(0,1)
        Cnum[i1] = np.cos(C1) + 1j*np.sin(C1)
        Cnum[i1+K] = np.cos(C1) - 1j*np.sin(C1)
    # Set every link variable to a random phase on the unit circle.
    for i3,i4 in itertools.product(range(0,n),range(0,n)):
        C2 = np.random.uniform(low=0, high = 2*np.pi)
        Link[i3,i4,0] = np.cos(C2) + 1j*np.sin(C2)
        C2 = np.random.uniform(low=0, high = 2*np.pi)
        Link[i3,i4,1] = np.cos(C2) + 1j*np.sin(C2)

The following routine is used during the iteration routine to get a random complex number with a small imaginary part (by retrieving a random element of the Cnum array we generated earlier).

def unitary():
    # randint's upper bound is exclusive, so use 2*K so that any element of Cnum can be drawn.
    I1 = np.random.randint(0, 2*K)
    mat = Cnum[I1]
    return mat

Here is an example of what the iteration routine is used for. I've written a routine called Plq, which calculates the mean plaquette (the real part of a 1 by 1 closed loop of link variables) for a given B0. The iteration routine is used to generate new field configurations that are independent of the previous ones. After we get a new field configuration we calculate the plaquette for that configuration. We repeat this process j1 times using a while loop, and at the end we end up with the mean plaquette.

def Plq(j1,B0):
    i5 = 0
    Lboot = np.zeros(j1)
    while i5<j1:
        # Generate a new field configuration, then measure the plaquette on it.
        iteration(25000,B0)
        Linkrxp1 = np.roll(Link,-1, axis = 0)
        Linkrtp1 = np.roll(Link, -1, axis = 1)
        # Real part of the 1 by 1 loop of link variables at every site.
        c0 = np.real(Link[:,:,0]*Linkrxp1[:,:,1]*Linkrtp1[:,:,0].conj()*Link[:,:,1].conj())
        Lboot[i5] = np.mean(c0)
        i5 = i5 + 1
    return np.mean(Lboot)

We need to define some variables before we run anything, so here are the imports and initial variables I define before any of the routines:

import itertools
import numpy as np

K = 20000
n = 50
a = 1.0
Link = np.zeros((n,n,2),dtype = complex)
Cnum = np.zeros((2*K), dtype = complex)
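
To show how the pieces fit together, a full run looks roughly like the sketch below (the particular values passed to Plq are just placeholders):

randommatrix()
mean_plaquette = Plq(10, 1.0)
print(mean_plaquette)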

This code works, but it is painfully slow. Is there a way that I can use multiprocessing or something to speed this up?


Solution

  • You should use Cython and C data types (see the Cython documentation and tutorials); it's built for fast computation.

    You could potentially use multiprocessing in one of two cases. If you have one object that multiple processes need to share, you would need to use Manager, Lock, and Array from the multiprocessing module to share the object between the processes. However, there is no guarantee this will result in a speed-up, since each process needs to lock Link to keep its update correct, assuming each update is affected by all elements of Link (if one process modifies an element at the same time another process is computing an update for a different element, that update wouldn't be based on the most current information).

    If your updates do not take into account the state of the other elements, i.e. each one only cares about its own element, then you could break your Link array into segments, divvy the chunks out to several processes in a process pool, and combine the segments back into one array when they are done (see the sketch below). This would certainly save time, and you wouldn't have to use any additional multiprocessing mechanisms.
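
    As a minimal sketch of that second approach (assuming the per-chunk update really only needs its own chunk), the pattern would look roughly like the code below. The names update_chunk and parallel_update are made up for the illustration, and the per-chunk "update" is just a stand-in that multiplies each link by a random phase, not the gauge-theory update from the question:

    import numpy as np
    from multiprocessing import Pool

    def update_chunk(chunk):
        # Hypothetical per-chunk update: multiply every link in this chunk by a
        # random phase. It stands in for any update that only needs its own chunk.
        phases = np.exp(1j*np.random.uniform(0, 2*np.pi, size=chunk.shape))
        return chunk*phases

    def parallel_update(link, nproc=4):
        # Split the lattice into nproc row-wise segments, update each segment in
        # its own worker process, then stitch the segments back into one array.
        chunks = np.array_split(link, nproc, axis=0)
        with Pool(nproc) as pool:
            updated = pool.map(update_chunk, chunks)
        return np.concatenate(updated, axis=0)

    if __name__ == "__main__":
        n = 50
        Link = np.exp(1j*np.random.uniform(0, 2*np.pi, size=(n, n, 2)))
        Link = parallel_update(Link)

    For the gauge-theory update in the question the staple couples each link to its neighbours, so the chunks would not be fully independent at their boundaries; in that case you are back to the first case (sharing the array with locks), or you need some way of keeping the boundary links out of the parallel part.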