
LaplacesDemon: when should I sum prior density?


I am migrating from JAGS to LaplacesDemon and am trying to rewrite some of my code. I have read the LaplacesDemon Tutorial and LaplacesDemon Examples vignettes, but I am a bit confused by some of the examples in them.

In the simple example in the LaplacesDemon Tutorial (p. 5), the model is written as:

Model <- function(parm, Data)
{
  beta <- parm[Data$pos.beta]
  sigma <- interval(parm[Data$pos.sigma], 1e-100, Inf)
  parm[Data$pos.sigma] <- sigma
  beta.prior <- dnormv(beta, 0, 1000, log=TRUE)
  sigma.prior <- dhalfcauchy(sigma, 25, log=TRUE)
  mu <- tcrossprod(beta, Data$X)
  LL <- sum(dnorm(Data$y, mu, sigma, log=TRUE))
  LP <- LL + sum(beta.prior) + sigma.prior
  Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LP,
                   yhat=rnorm(length(mu), mu, sigma), parm=parm)
  return(Modelout)
}

Here, beta.prior is summed when forming LP because there is more than one beta parameter.
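For instance (my own quick check, not from the tutorial; the beta values are made up), dnormv() is vectorised, so with several betas it returns one log-density per coefficient, and sum() collapses them into a single joint log-prior:

library(LaplacesDemon)

beta <- c(0.5, -1.2, 2.0)               # hypothetical values for three coefficients
dnormv(beta, 0, 1000, log=TRUE)         # vector of three log-densities
sum(dnormv(beta, 0, 1000, log=TRUE))    # one number: the joint log-prior of beta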

But the more advanced examples in the LaplacesDemon Examples vignette don't always seem to follow this rule, for instance Example 87 (p. 162):

Model <- function(parm, Data)
{
  ### Log-Prior
  beta.prior <- sum(dnormv(beta[,1], 0, 1000, log=TRUE),
                    dnorm(beta[,-1], beta[,-Data$T],
                          matrix(tau, Data$K, Data$T-1), log=TRUE))
  zeta.prior <- dmvn(zeta, rep(0, Data$S), Sigma[ , , 1], log=TRUE)
  phi.prior <- sum(dhalfnorm(phi[1], sqrt(1000), log=TRUE),
                   dtrunc(phi[-1], "norm", a=0, b=Inf,
                          mean=phi[-Data$T], sd=sigma[2], log=TRUE))
  ### Log-Posterior
  LP <- LL + beta.prior + zeta.prior + sum(phi.prior) + sum(kappa.prior) +
        sum(lambda.prior) + sigma.prior + tau.prior
  Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LP,
                   yhat=rnorm(prod(dim(mu)), mu, sigma[1]), parm=parm)
  return(Modelout)
}

(Only part of the code is shown here because the full example is long.)

Here, zeta has more than one element but isn't summed in either the Log-Prior or the Log-Posterior part; beta has more than one element and is summed in the Log-Prior; and phi also has more than one element but is summed in both the Log-Prior and the Log-Posterior parts.

And the next example, on p. 167, seems to handle it differently again.

I was wondering: in what scenarios should the prior density be summed? Many thanks!


Solution

  • Have you tried running the code line by line? You would see that there is nothing to sum for zeta: dmvn is the density function of the multivariate normal distribution, and it returns a single value, the joint probability density of observing the vector zeta. The reason for the sums elsewhere is that the probability of observing independent events together is the product of their marginal probabilities, or, on the log scale, the sum of their log-densities. So element-wise log-prior densities are summed (and added to the log-likelihood) to obtain the joint log-posterior, whereas a multivariate density such as dmvn has already produced that joint value. (An extra sum() around a quantity that is already a scalar, such as phi.prior in Example 87, is redundant but harmless, which is why the examples can look inconsistent.)
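To make this concrete, here is a small illustration (the values of zeta and Sigma are made up, not taken from the vignettes): a univariate density such as dnorm() evaluated on a vector returns one log-density per element, so sum() is needed to form the joint log-prior, whereas dmvn() already returns the single joint log-density of the whole vector:

library(LaplacesDemon)

zeta  <- c(0.3, -0.7)                    # hypothetical parameter vector
Sigma <- diag(2)                         # hypothetical covariance matrix

# Univariate density, vectorised: one log-density per element, so sum() them
dnorm(zeta, 0, 1, log=TRUE)              # length-2 vector
sum(dnorm(zeta, 0, 1, log=TRUE))         # joint log-prior under independence

# Multivariate density: already the joint log-density of the whole vector
dmvn(zeta, rep(0, 2), Sigma, log=TRUE)   # single value, nothing left to sum

In short: sum whenever a prior is evaluated element-wise with a univariate density, and don't sum when a multivariate density has already produced the joint value; a scalar prior such as sigma.prior needs no sum at all.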