
Difference between BetaBinomial and "Beta and Binomial"


I created two models in pymc3. One uses pm.BetaBinomial, and the other uses pm.Beta combined with pm.Binomial.

If we use pymc3.BetaBinomial, $\alpha$ and $\beta$ converge.

import numpy as np
from scipy.stats import binom, beta
import pymc3 as pm
%matplotlib inline

np.random.seed(42)
# observed data: n (trials) and x (successes), with a separate p ~ Beta(10, 20) per sample
n_data = np.random.randint(10, 30, size=1000)
x_data = binom.rvs(n_data, beta.rvs(10, 20, size=1000))

with pm.Model() as model0:
    alpha0 = pm.Uniform('alpha', 1, 100)
    beta0 = pm.Uniform('beta', 1, 100)
    X0 = pm.BetaBinomial('X', alpha=alpha0, beta=beta0, n=n_data, observed=x_data)
    trace0 = pm.sample(25000, step=pm.Metropolis(vars=[alpha0, beta0, X0]))

[Trace plot for model0: the chains for alpha and beta converge.]
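
A quick way to check this (a sketch using pymc3's built-in diagnostics; it assumes the sampling above has finished):

pm.traceplot(trace0)  # trace and density plots for alpha and beta
pm.summary(trace0)    # posterior mean, sd, and interval for each parameter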

But if we combine pymc3.Beta and pymc3.Binomial, $\beta$ does not seem to converge and the trace looks very different. (The data were generated with $\alpha=10$ and $\beta=20$.)

with pm.Model() as model1:
    alpha1 = pm.Uniform('alpha', 1, 100)
    beta1 = pm.Uniform('beta', 1, 100)
    p1 = pm.Beta('prob', alpha=alpha1, beta=beta1)
    X1 = pm.Binomial('X1', n=n_data, p=p1, observed=x_data)
    trace1 = pm.sample(25000, step=pm.Metropolis(vars=[alpha1, beta1, p1, X1]))

[Trace plot for model1: beta does not converge and the trace looks very different.]

Why does this difference occur? The two models should be theoretically equivalent, so I suspect my usage of pymc3 is the problem.
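
For reference, the equivalence I have in mind: if each observation gets its own $p \sim \mathrm{Beta}(\alpha, \beta)$, integrating $p$ out of the binomial likelihood gives exactly the beta-binomial pmf,

$$P(X = k \mid n, \alpha, \beta) = \int_0^1 \binom{n}{k} p^k (1-p)^{n-k} \, \frac{p^{\alpha-1} (1-p)^{\beta-1}}{B(\alpha, \beta)} \, dp = \binom{n}{k} \, \frac{B(k+\alpha,\, n-k+\beta)}{B(\alpha, \beta)}.$$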


Solution

  • You need a separate p for each observation (the shape parameter is what is new here):

    with pm.Model() as model1:
        alpha1 = pm.Uniform('alpha', 1, 100)
        beta1 = pm.Uniform('beta', 1, 100)
        # one success probability per observation (1,000 of them)
        p1 = pm.Beta('prob', alpha=alpha1, beta=beta1, shape=1000)
        X1 = pm.Binomial('X1', n=n_data, p=p1, observed=x_data)
        trace1 = pm.sample(2000, tune=1000)  # defaults to NUTS
    

    I switched to NUTS, too; you can't really fit high-dimensional models with the Metropolis sampler.
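
    A quick sanity check (a sketch; the data were simulated with $\alpha=10$ and $\beta=20$, so the posterior means should land in that neighbourhood):

    # posterior means of the hyperparameters, to compare against the
    # values (alpha=10, beta=20) used to simulate the data
    print(trace1['alpha'].mean(), trace1['beta'].mean())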