Tags: r, linux, statistics, bayesian, stan

Help understanding an apparent discrepancy between tidybayes::add_predicted_draws and brms::posterior_predict


I am using the Howell1 dataset from rethinking package.

require(cmdstanr)
require(brms)
require(tidybayes)

data("Howell1")
d <- Howell1
d2 <- d[d$age > 18,]

d2$hs <- (d2$height - mean(d2$height))/ sd(d2$height)
d2$ws <- (d2$weight - mean(d2$weight))/ sd(d2$weight)
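The manual standardization above can be cross-checked against base R's scale(); a minimal sketch on a toy vector (made-up values, not the Howell1 data):

```r
# Toy vector (hypothetical values) to confirm that
# (x - mean(x)) / sd(x) matches base R's scale().
x <- c(150, 160, 170, 180, 190)
xs <- (x - mean(x)) / sd(x)
stopifnot(isTRUE(all.equal(xs, as.numeric(scale(x)))))
stopifnot(abs(mean(xs)) < 1e-12)         # standardized mean ~ 0
stopifnot(isTRUE(all.equal(sd(xs), 1)))  # standardized sd = 1
```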

Building a simple brms model using one numeric and one categorical predictor

priors <- c(prior(normal(0,2), class = "Intercept"),
            prior(normal(0,2), class = 'b'),
            prior(cauchy(0,2), class = "sigma"))

m4.4 <- brm(formula = hs ~ 1 + ws + male, data = d2, family = gaussian,
            backend = "cmdstanr", prior = priors,
            iter = 2000, warmup = 1000, chains = 4, cores = 4)

I am trying to understand how add_fitted_draws and add_predicted_draws work.

Considering add_fitted_draws:

i <- 4 # looking at the results for a particular row of the input dataset
y <- posterior_epred(m4.4)
x <- d2 %>% add_fitted_draws(model = m4.4, value = "epred")
x %>%
  as_tibble() %>%
  filter(.row == i) %>%
  dplyr::select(epred) %>%
  cbind(fitdr = y[, i]) %>%
  mutate(diff = fitdr - epred)

Based on the documentation, add_fitted_draws internally uses posterior_epred (or its equivalent in brms), and the results match exactly.

(screenshot: add_fitted_draws vs posterior_epred comparison; the diff column is all zeros)

Now, when I do exactly the same comparison between add_predicted_draws and posterior_predict, the results don't match:
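For context on why one comparison can match while the other does not: the posterior expectation is a deterministic function of the parameter draws, whereas posterior prediction additionally samples observation noise, so its output depends on the RNG state at call time. A toy base-R illustration of that distinction (made-up draws, not the brms internals):

```r
set.seed(1)
mu    <- rnorm(4000, 0, 0.1)        # pretend posterior draws of the mean
sigma <- abs(rnorm(4000, 1, 0.05))  # pretend posterior draws of sigma

# Expectation draws: a pure function of the parameter draws,
# identical on every evaluation.
epred_a <- mu
epred_b <- mu
stopifnot(identical(epred_a, epred_b))

# Predicted draws: add freshly sampled noise, so two calls differ
# unless the RNG is reset to the same state before each one.
pred_a <- rnorm(4000, mean = mu, sd = sigma)
pred_b <- rnorm(4000, mean = mu, sd = sigma)
stopifnot(!identical(pred_a, pred_b))
```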

yp <- posterior_predict(m4.4)
xp <- d2 %>% add_predicted_draws(model = m4.4, prediction = "pred")
xp %>%
  as_tibble() %>%
  filter(.row == i) %>%
  dplyr::select(pred) %>%
  cbind(preddr = yp[, i]) %>%
  mutate(diff = preddr - pred)

(screenshot: add_predicted_draws vs posterior_predict comparison; the diff column is nonzero)

I am pretty sure there is a gap in my understanding; please advise.

The session info is as follows:

R version 4.0.3 (2020-10-10)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.1 LTS

Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.9.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.9.0

other attached packages:
[1] stringr_1.4.0 readr_1.4.0 tibble_3.0.4 tidyverse_1.3.0 MASS_7.3-53 bayesplot_1.8.0 cmdstanr_0.1.3 rethinking_2.13
[9] loo_2.4.1 gganimate_1.0.7 RColorBrewer_1.1-2 ggrepel_0.9.0 brms_2.14.4 Rcpp_1.0.5 rstan_2.21.2 StanHeaders_2.21.0-7
[17] cowplot_1.1.1 ggplot2_3.3.3 tidybayes_2.3.1 ggdist_2.4.0 modelr_0.1.8 tidyr_1.1.2 forcats_0.5.0 purrr_0.3.4
[25] dplyr_1.0.2 magrittr_2.0.1

PS: I have posted this question on the Stan Discourse as well, but have yet to receive a response. Please also let me know if this would be better posted on stats.stackexchange.com. Since this is more of a tool-based question than a conceptual one, I have posted it here.


Solution

  • It turned out that the discrepancy arose because the seed setting was not being honored by brms::posterior_predict.


    Upon discussing with the developer of the brms package on GitHub, he traced the root cause to the following:

    "If you have set options(mc.cores = <more than 1>), posterior_predict will evaluate in parallel by default, unless you change the cores argument. On Windows, parallel execution is done via parallel::parLapply and I don't know how that function respects seeds, if at all. When executing the code in serial (with 1 core) the results are reproducible."

    Once I set the mc.cores to 1, I no longer see the discrepancy between posterior_predict and add_predicted_draws.

    Hence I am marking the issue as resolved.

    The relevant github links are:

    1. https://github.com/mjskay/tidybayes/issues/280
    2. https://github.com/paul-buerkner/brms/issues/1073
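The developer's point about serial execution can be illustrated with plain R (a sketch of the RNG behaviour, not of brms itself): in a single process, the global seed is honored, so repeated runs after the same set.seed() call match exactly.

```r
# In serial execution, resetting the seed makes draws reproducible.
set.seed(123)
a <- rnorm(10)
set.seed(123)
b <- rnorm(10)
stopifnot(identical(a, b))

# Applied to the question (assuming the fitted model m4.4 from above):
#   options(mc.cores = 1)
#   set.seed(123); yp1 <- posterior_predict(m4.4)
#   set.seed(123); yp2 <- posterior_predict(m4.4)
#   identical(yp1, yp2)  # now TRUE
```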