I've had trouble finding a tutorial/example of this, so I wanted to ask: I have a variable X that is measured repeatedly (x[1], ..., x[total]), and I want to show that with each additional measurement the prediction of X's distribution becomes tighter. Of course I could just keep rerunning the model on x[1:2], x[1:3], x[1:4], etc., but that is tedious. I was hoping there was some stepwise coding I wasn't aware of.
#----------------------------------------------------------------------
#THE JAGS MODEL FOR X.
#----------------------------------------------------------------------
modelstring="
model {
  # Priors
  #----------------------------------------------------------------------
  mu_x ~ dnorm(0, 1E-12)    # dnorm is parameterised by precision in JAGS
  sd ~ dunif(0, 50)         # standard deviation
  tau <- sd * sd            # variance
  prec_x <- 1 / tau         # precision

  # Likelihood
  #----------------------------------------------------------------------
  for (i in 1:total) {
    x[i] ~ dnorm(mu_x, prec_x)
  }

  # Posterior predictive draw
  pred.x ~ dnorm(mu_x, prec_x)
}
"
Anyone know of a way to specify the model to estimate a pred.x at each timepoint based on the data available at that point?
To do this with a single model file, you'd have to use a different mu_x and prec_x for each prediction, because their posteriors will be different (more diffuse) when they are based on less data. So wrap the whole thing in a loop over j and use something like
for (j in 1:total) {
  for (i in 1:j) {
    x[i,j] ~ dnorm(mu_x[j], prec_x[j])
  }
}
and put j subscripts on everything else. Finally you'd have to supply x as a matrix of replicates of the original x. You could use a data{ } block to facilitate that (see section 7.0.4 of the JAGS user manual).
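Putting those pieces together, the full model might look something like the sketch below. This is untested and the name xmat is mine; the data block copies the first j observations of the supplied vector x into column j of xmat, and the model block only ever reads xmat[i,j] for i <= j, so the undefined entries are never used:

```
modelstring="
data {
  # Build the matrix of replicates: column j holds x[1], ..., x[j]
  for (j in 1:total) {
    for (i in 1:j) {
      xmat[i,j] <- x[i]
    }
  }
}
model {
  for (j in 1:total) {
    # Separate parameters for the fit using only the first j observations
    mu_x[j] ~ dnorm(0, 1E-12)
    sd[j] ~ dunif(0, 50)
    prec_x[j] <- 1 / (sd[j] * sd[j])

    # Likelihood over the first j observations only
    for (i in 1:j) {
      xmat[i,j] ~ dnorm(mu_x[j], prec_x[j])
    }

    # One posterior predictive draw per amount of data
    pred.x[j] ~ dnorm(mu_x[j], prec_x[j])
  }
}
"
```

Monitoring pred.x then gives you one predictive distribution per number of observations in a single run, and you can compare their widths directly (pred.x[1] should be the most diffuse, pred.x[total] the tightest).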