I've just switched to R2jags from R2OpenBUGS, and I've noticed something I don't understand. After running the simulation with jags() and converting the output with as.mcmc(), the first sample always has very high deviance and is typically very far from the converged parameter estimates. When I run the same data through bugs(), this sample does not appear. It's almost as if the first sample is the actual first sample from the burn-in phase.
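For example (a hypothetical check, using the codaout.jags object created by the reproducible code below), printing the first few retained rows of one chain shows the offending draw:

# the first row of chain 1 sits far from the converged values
# and carries a very high deviance
head(as.matrix(codaout.jags[[1]]), 3)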
Here is reproducible code, including a deliberately bad initial estimate, which shows the bad parameter value in the first sample of the jags() output but not the bugs() output.
require(R2jags); require(R2OpenBUGS); require(mcmcplots)  # mcmcplots provides as.mcmc.bugs()
set.seed(1)
x <- rnorm(100)
y <- 2*x + rnorm(100)
jags.model <- function()
{
  # likelihood
  for (i in 1:n) {
    mu[i] <- alpha + beta * x[i]
    y[i] ~ dnorm(mu[i], tau)
  }
  # priors
  alpha ~ dnorm(0, 0.001)
  beta ~ dnorm(1, 0.001)
  tau ~ dgamma(1, 1)
  sigma <- 1/sqrt(tau)
}
n <- length(x)
inits <- function() list("alpha" = 5, "beta" = 5, "tau" = 5)  # deliberately far-off initial values
dat <- list("x", "y", "n")  # pass data to the samplers by object name
# fit with JAGS via R2jags
out.jags <- jags(dat,
                 inits = inits, model = jags.model,
                 n.iter = 1000, n.thin = 1, n.chains = 2,
                 DIC = TRUE,
                 parameters.to.save = c("alpha", "beta"))
codaout.jags <- as.mcmc(out.jags)
# fit the same model with OpenBUGS via R2OpenBUGS
out.bugs <- bugs(dat,
                 inits = inits, model = jags.model,
                 n.iter = 1000, n.thin = 1, n.chains = 2,
                 DIC = TRUE,
                 parameters.to.save = c("alpha", "beta"))
codaout.bugs <- as.mcmc.bugs(out.bugs)
plot(codaout.jags)             # note the outlying first draw in the JAGS traces
x11(); plot(codaout.bugs)      # no such sample in the BUGS traces
For posterity: the problem is that R2jags does not always handle the burn-in period properly, as discussed here.
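Until that is fixed, a crude workaround is to discard the first retained draw of each chain before summarizing; a minimal sketch, assuming the codaout.jags object from the code above and coda's window(), start() and thin():

# workaround sketch: drop the first retained draw from each chain,
# since it appears to be a leftover burn-in sample
codaout.trimmed <- window(codaout.jags,
                          start = start(codaout.jags) + thin(codaout.jags))
plot(codaout.trimmed)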