
What algorithm does JAGS use with a discrete likelihood and a continuous prior?


I am trying to understand the following rjags code.

    library(rjags)

    set.seed(1)
    N <- 10
    p <- rep(10, N)

    cat("
    model {
      for (i in 1:N) {
        p[i] ~ dpois(lambda)
      }
      lambda <- 2*exp(-2*alpha*3)/(2*pow(4,2))
      alpha ~ dnorm(beta, tau) T(0, 0.2)
      beta ~ dnorm(0, 10000)
      tau ~ dgamma(2, 0.01)
    }", file = "example1.jag")

    jags <- jags.model('example1.jag', data = list("N" = N, "p" = p))
    update(jags, 16000)
    out_ex1 <- jags.samples(jags, 'alpha', 4000)
    out_ex1$alpha

It has a Poisson likelihood and a normal prior, so there is no closed-form full conditional for Gibbs sampling. Which MCMC method is used here, then? Adaptive rejection sampling (ARS)? Slice sampling? Metropolis-Hastings?


Solution

  • You can always find out which samplers JAGS assigns to the stochastic variables in a model with rjags::list.samplers - for example:

    > list.samplers(jags)
    $`base::RealSlicer`
    [1] "alpha"
    
    $`base::RealSlicer`
    [1] "beta"
    
    $`base::RealSlicer`
    [1] "tau"
    

    In this case it tells you that a slice sampler is being used for each of the three unobserved stochastic nodes in your model. Slice sampling is the main workhorse in JAGS, so this is quite typical, except where a more efficient (e.g. conjugate) sampler is available (or if the glm module is loaded for an appropriate model).
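
    To see the basic idea behind what base::RealSlicer is doing, here is a minimal univariate slice sampler using the "stepping out" procedure of Neal (2003), written in plain R. This is an illustrative sketch of the general technique, not JAGS's actual implementation; the demo target (a standard normal log-density) and the tuning constants w and m are my own choices:

    ## A minimal sketch of univariate slice sampling with "stepping out"
    ## (Neal 2003). Illustrative only - this is NOT JAGS's actual
    ## base::RealSlicer code. The tuning constants w (initial interval
    ## width) and m (max step-out iterations) are assumptions.
    slice_sample <- function(x0, log_f, n, w = 1, m = 50) {
      out <- numeric(n)
      x <- x0
      for (i in seq_len(n)) {
        ## Auxiliary variable: draw a height under the density at x,
        ## defining the slice {x' : f(x') > y} (done on the log scale).
        log_y <- log_f(x) - rexp(1)
        ## Step out: place an interval of width w at random around x,
        ## then grow each end until it leaves the slice (or m is used up).
        L <- x - runif(1) * w
        R <- L + w
        j <- floor(runif(1) * m)
        k <- (m - 1) - j
        while (j > 0 && log_f(L) > log_y) { L <- L - w; j <- j - 1 }
        while (k > 0 && log_f(R) > log_y) { R <- R + w; k <- k - 1 }
        ## Shrink: sample uniformly from [L, R]; on rejection, shrink the
        ## interval toward x so the loop always terminates.
        repeat {
          x1 <- runif(1, L, R)
          if (log_f(x1) > log_y) break
          if (x1 < x) L <- x1 else R <- x1
        }
        x <- x1
        out[i] <- x
      }
      out
    }

    ## Demo: draws from a standard normal. Note that only log-density
    ## evaluations are needed - no conjugacy, no gradient - which is why
    ## slice sampling works as a general-purpose default.
    draws <- slice_sample(0, function(x) dnorm(x, log = TRUE), 5000)
    mean(draws); sd(draws)  # should be near 0 and 1

    For completeness: you can load the glm module with load.module("glm") before compiling, and JAGS will then consider block samplers for suitable generalized linear sub-models. Whether that changes anything is model-dependent; for this model (a nonlinear deterministic link on alpha) I would expect the slice samplers to remain, but re-checking tells you for sure:

    load.module("glm")  # then re-compile and re-inspect the samplers
    jags2 <- jags.model('example1.jag', data = list("N" = N, "p" = p))
    list.samplers(jags2)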