I have defined a log-likelihood function, and a single variable is being sampled from a uniform distribution. I have verified that the log-likelihood function returns the same result for the same input, yet every time I sample, the resulting distribution comes out somewhat different (over the same range).
What is going on?
import pymc3 as mc
import theano.tensor as tt

SAMPLES = 1000
TUNING_SAMPLES = 100
N_CORES = 10
N_CHAINS = 2

# (logl_ThetaFromChoices is defined above with the input)
# use PyMC3 to sample from the log-likelihood
with mc.Model() as modelFindTheta:
    theta = mc.Uniform('theta', lower=-200.0, upper=200.0)
    # convert theta to a Theano tensor variable
    theta = tt.as_tensor_variable(theta)

    def callOp(v):
        return logl_ThetaFromChoices(v)

    mc.DensityDist('logl_ThetaFromChoices', callOp, observed={'v': theta})
    step1 = mc.Metropolis()
    trace_theta = mc.sample(SAMPLES,
                            tune=TUNING_SAMPLES,
                            discard_tuned_samples=True,
                            chains=N_CHAINS,
                            cores=N_CORES,
                            step=step1)
Since sampling involves random number generation, you need to set a seed to obtain reproducible results. In PyMC3 this is done with the random_seed argument of the pymc3.sampling.sample() method, e.g. mc.sample(..., random_seed=42).
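The underlying principle is generic to any pseudo-random sampler: the same seed reproduces the same draws, while an unset or different seed gives different ones each run. A minimal sketch with NumPy (a generic illustration of seeding, not PyMC3 internals; the draw helper and the seed values are made up for the example):

```python
import numpy as np

def draw(seed):
    # a fixed seed makes the pseudo-random uniform draws reproducible
    rng = np.random.RandomState(seed)
    return rng.uniform(-200.0, 200.0, size=5)

# same seed -> identical draws
print(np.allclose(draw(42), draw(42)))
# different seed -> different draws
print(np.allclose(draw(42), draw(7)))
```

In the question's model, the analogous fix is to add random_seed to the mc.sample(...) call; since the run uses two chains, a list with one seed per chain can also be supplied so that each chain is reproducible.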