Tags: python, bayesian, pymc

Normalizing Bayesian IRT Model in pymc


The best example I could find of how to estimate this type of Bayesian IRT model using MCMC in Python was this example. Below is a reproducible version of the code that I got to run. My understanding is that, to identify the model, the ability parameter theta is constrained to be normally distributed with mean 0 and standard deviation 1, which I thought was done with this line in the code below:

# theta (proficiency params) are sampled from a normal distribution
theta = Normal("theta", mu=0, tau=1, value=theta_initial, observed=generating)

However, when I run the code, the posterior mean for theta is something like 1.85e18 with an even larger standard deviation, i.e. not mean 0 and standard deviation 1. Why am I getting this result, and how do I make sure that theta is normalized to mean 0, sd 1 after each iteration?
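(As a sanity check on the prior itself: pymc's Normal is parameterized by precision, tau = 1/sigma**2, so tau=1 really does correspond to a standard deviation of 1. A standalone numpy check, independent of the model code:)

import numpy as np

tau = 1.0
sigma = 1.0 / np.sqrt(tau)  # pymc's Normal uses precision tau = 1/sigma**2
draws = np.random.normal(0, sigma, size=100000)
print(draws.mean(), draws.std())  # prints values close to 0 and 1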

#from pylab import *  # pylab would not install with pip, so numpy is imported directly
from numpy import *
from pymc import *
from pymc.Matplot import plot as mplot
import numpy as np

numquestions = 300 # number of test items being simulated
numpeople = 10 # number of participants
numthetas = 1 # number of latent proficiency variables

generating = 0
theta_initial = zeros((numthetas, numpeople))
correctness = np.random.randint(2, size=numquestions * numpeople) == 1  # produces the error
#correctness = np.random.randint(2, size=numquestions * numpeople) == -1  # all False: code runs fine
#correctness = np.random.randint(2, size=numquestions * numpeople) != -1  # all True: code throws an error

correctness.shape = (numquestions, numpeople)


# theta (proficiency params) are sampled from a normal distribution
theta = Normal("theta", mu=0, tau=1, value=theta_initial, observed=generating)


# question-parameters (IRT params) are sampled from normal distributions (though others were tried)
a = Normal("a", mu=1, tau=1, value=[[0.0] * numthetas] * numquestions)
# a = Exponential("a", beta=0.01, value=[[0.0] * numthetas] * numquestions)
b = Normal("b", mu=0, tau=1, value=[0.0] * numquestions)
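# (in this parameterization, a acts as a discrimination/slope and b as a difficulty/intercept for each item)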

# take vectors theta/a/b, return a vector of probabilities of each person getting each question correct
@deterministic
def sigmoid(theta=theta, a=a, b=b): 
    bs = repeat(reshape(b, (len(b), 1)), numpeople, 1)
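    # bs and dot(a, theta) are both (numquestions, numpeople) arrays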
    return np.exp(1.0 / (1.0 + np.exp(bs - dot(a, theta))))

# take the probabilities coming out of the sigmoid, and flip weighted coins
correct = Bernoulli('correct', p=sigmoid, value=correctness, observed=not generating)

# create a pymc simulation object, including all the above variables
m = MCMC([a,b,theta,sigmoid,correct])

# run an interactive MCMC sampling session
m.isample(iter=20000, burn=15000)


mydict = m.stats()
print(mydict['theta']['mean'])  # get ability parameters for each student
print(mydict['theta']['mean'].mean())  # should be close to 0, but returns something like 1.85e18, i.e. an absurdly large value

Solution

  • I think you have an extra np.exp in your sigmoid function. According to Wikipedia, S(t) = 1 / (1 + exp(-t)). I replaced the return line in your sigmoid function with this alternative version:

    return 1.0 / (1.0 + np.exp(bs - dot(a, theta)))
    

    With this I get a mean for theta of 0.08.
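
    A quick way to see why the original version misbehaves: the inner expression 1.0 / (1.0 + np.exp(bs - dot(a, theta))) already lies in (0, 1), so the outer np.exp maps every value into (1, e), and the Bernoulli likelihood is then handed "probabilities" greater than 1. A standalone numpy sketch of the range problem (not part of the model code):

    import numpy as np

    t = np.linspace(-10, 10, 5)
    inner = 1.0 / (1.0 + np.exp(-t))  # a proper sigmoid: values in (0, 1)
    buggy = np.exp(inner)             # values in (1, e) -- not valid probabilities
    print(inner.min(), inner.max())
    print(buggy.min(), buggy.max())

    For numerical stability at large inputs, scipy.special.expit computes the same logistic function without overflow.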