I am running into a small issue that I can't wrap my head around. I developed a model that I fit with lmer(). Full code:
# import libraries
library(tidyverse)
library(lmerTest)
library(car)
library(easystats)
library(modelsummary)
library(lme4)
df <- Mod_Final_R1R2_Meta_moderation_sharing
summary(df)
df_x <- subset(df, Category == "x")
modelZ <- lmer(`reliability corrected` ~ 0 + intention + a + b + c + (1|Study), data = df_x)
summary(modelZ)
modelsummary(modelZ, statistic="p.value")
I receive (part of) the following output from summary(modelZ):
Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: `Y` ~ 0 + intention + a + b + c + (1 | Study)
Data: df_x
REML criterion at convergence: 14.3
Scaled residuals:
     Min       1Q   Median       3Q      Max
-1.51502 -0.22449 -0.06608  0.36910  1.52729
Random effects:
 Groups   Name        Variance Std.Dev.
 Study    (Intercept) 0.22052  0.4696
 Residual             0.01198  0.1094
Number of obs: 25, groups: Study, 15
Fixed effects:
          Estimate Std. Error       df t value Pr(>|t|)
intention -0.10750    0.05989  9.27089  -1.795    0.105
a          0.02907    0.22882 11.72994   0.127    0.901
b          0.12935    0.26092 11.69105   0.496    0.629
c         -0.17325    0.22935 12.19998  -0.755    0.464
However, if I extract the model summary via either the modelsummary or sjPlot package (both give the same output), I receive the following:
modelsummary(modelZ, statistic="p.value")
| Variable  | Estimate | p-value |
|-----------|----------|---------|
| Intention | -0.107   | 0.089   |
| a         | 0.029    | 0.900   |
| b         | 0.129    | 0.626   |
| c         | -0.173   | 0.459   |
As you can see, while the estimates are more or less the same, the p-values are quite different, some even crossing the threshold between significant and non-significant. Does anyone have an idea what underlying mechanism causes this? Shouldn't modelsummary simply extract and reuse the lmer output for modelZ?
Thanks in advance!
I searched forums like Stack Overflow and the CRAN package documentation for an underlying cause, but couldn't find one.
In the absence of a reproducible example, my best guess is that this comes from the method used for the finite-size correction. modelsummary() calls parameters::model_parameters() to extract parameters; the ci_method options for mixed models are "wald" (no finite-size correction), "satterthwaite", or "kenward". I believe the ci_method= argument will get passed through from modelsummary() to model_parameters(): you could try ci_method = "satterthwaite" and see what happens ...
library(modelsummary)
library(parameters)
library(lmerTest)
set.seed(101)
dd <- data.frame(f = factor(rep(1:8, each = 10)), x = rnorm(80))
dd$y <- simulate(~ x + (1 | f), newdata = dd,
                 family = gaussian,
                 newparams = list(beta = c(0, 0.5), theta = 1, sigma = 0.5),
                 seed = 101)[[1]]
m1 <- lmer(y ~ x + (1 | f), data = dd)
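If the pass-through works, the choice of method should also show up directly in the modelsummary() table. A minimal sketch, assuming modelsummary() does forward ci_method via its ... argument to model_parameters():

```r
# Sketch: compare default and Satterthwaite-corrected p-values side by side.
# (Assumes modelsummary() forwards ci_method through ... to
# parameters::model_parameters(); compare the results against summary(m1).)
modelsummary(m1, statistic = "p.value")
modelsummary(m1, statistic = "p.value", ci_method = "satterthwaite")
```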
In this example the p-value returned for "satterthwaite" and "kenward" happens to be the same to three significant digits (for the intercept; we can't see what happens for the slope term. One could tweak the example, e.g. make the second element of beta smaller ...)
model_parameters(m1, effects = "fixed")
# Fixed Effects
Parameter | Coefficient | SE | 95% CI | t(76) | p
-----------------------------------------------------------------
(Intercept) | 0.05 | 0.09 | [-0.12, 0.23] | 0.61 | 0.541
x | 0.45 | 0.06 | [ 0.33, 0.58] | 7.36 | < .001
Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed
using a Wald t-distribution approximation.
##
model_parameters(m1, effects = "fixed", ci_method = "satterthwaite")
# Fixed Effects
Parameter | Coefficient | SE | 95% CI | t | df | p
------------------------------------------------------------------------
(Intercept) | 0.05 | 0.09 | [-0.16, 0.26] | 0.61 | 7.05 | 0.559
x | 0.45 | 0.06 | [ 0.33, 0.58] | 7.36 | 74.05 | < .001
Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed
using a Wald t-distribution with Satterthwaite approximation.
##
model_parameters(m1, effects = "fixed", ci_method = "kenward")
# Fixed Effects
Parameter | Coefficient | SE | 95% CI | t | df | p
------------------------------------------------------------------------
(Intercept) | 0.05 | 0.09 | [-0.16, 0.26] | 0.61 | 7.02 | 0.559
x | 0.45 | 0.06 | [ 0.33, 0.58] | 7.31 | 74.04 | < .001
Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed
using a Wald t-distribution with Kenward-Roger approximation.
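The mechanism can also be verified by hand: the Satterthwaite p-values from lmerTest are just two-tailed tail probabilities of the t statistics at the estimated df, so recomputing them with pt() (and, for contrast, with a normal approximation that applies no finite-size correction) shows how the df choice alone moves the p-values. A sketch using the m1 example above:

```r
# Pull t statistics and Satterthwaite df from the lmerTest summary
ct <- coef(summary(m1))      # columns include "t value", "df", "Pr(>|t|)"
tt <- ct[, "t value"]
df_satt <- ct[, "df"]

# Two-tailed p-values at the Satterthwaite df: reproduces Pr(>|t|)
p_satt <- 2 * pt(abs(tt), df = df_satt, lower.tail = FALSE)

# Normal (z) approximation for contrast: no finite-size correction
p_norm <- 2 * pnorm(abs(tt), lower.tail = FALSE)

cbind(p_satt, p_norm)
```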