Tags: r, mixed-models, beta, emmeans, glmmtmb

How to interpret odds ratios from emmeans for a glmmTMB beta model


I fit this mixed model with a beta family for the response variable, photochemical efficiency (Fv/Fm); the predictor variables are categorical:

Family: beta  ( logit )
Formula:          FvFm ~ hora * temperatura + (1 | Experimento)
Data: d

     AIC      BIC   logLik deviance df.resid 
  -730.4   -711.2    371.2   -742.4      174 

Random effects:

Conditional model:
 Groups      Name        Variance Std.Dev.
 Experimento (Intercept) 0.0099   0.0995  
Number of obs: 180, groups:  Experimento, 5

Dispersion parameter for beta family ():  262 

Conditional model:
                    Estimate Std. Error z value Pr(>|z|)    
(Intercept)          0.26960    0.04821    5.59 2.24e-08 ***
hora4                0.03181    0.02627    1.21   0.2260    
temperatura20        0.05626    0.02630    2.14   0.0324 *  
hora4:temperatura20 -1.44971    0.03859  -37.56  < 2e-16 ***
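
For reference, the summary above corresponds to a fitting call along these lines (a sketch, since the original call is not shown; it assumes the data frame is named d):

library(glmmTMB)
m2 <- glmmTMB(FvFm ~ hora * temperatura + (1 | Experimento),
              family = beta_family(link = "logit"),
              data = d)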

I have been trying to interpret the post hoc test, i.e. the emmeans results with type = "response", which gives the back-transformed estimated marginal means and the odds ratios (exponentiated contrasts) for all pairwise comparisons between groups.

> emmGrid <- emmeans(m2, specs = pairwise ~ temperatura * hora, type = "response")
> emmGrid
$emmeans
 temperatura hora response      SE  df asymp.LCL asymp.UCL
 2           0       0.567 0.01184 Inf     0.544     0.590
 20          0       0.581 0.01175 Inf     0.558     0.604
 2           4       0.575 0.01179 Inf     0.552     0.598
 20          4       0.251 0.00928 Inf     0.233     0.270

Confidence level used: 0.95 
Intervals are back-transformed from the logit scale 

$contrasts
 contrast                                  odds.ratio     SE  df null z.ratio p.value
 temperatura2 hora0 / temperatura20 hora0       0.945 0.0249 Inf    1  -2.139  0.1408
 temperatura2 hora0 / temperatura2 hora4        0.969 0.0254 Inf    1  -1.211  0.6200
 temperatura2 hora0 / temperatura20 hora4       3.903 0.1101 Inf    1  48.268  <.0001
 temperatura20 hora0 / temperatura2 hora4       1.025 0.0270 Inf    1   0.929  0.7894
 temperatura20 hora0 / temperatura20 hora4      4.128 0.1167 Inf    1  50.149  <.0001
 temperatura2 hora4 / temperatura20 hora4       4.029 0.1138 Inf    1  49.346  <.0001

P value adjustment: tukey method for comparing a family of 4 estimates 
Tests are performed on the log odds ratio scale

I have a few doubts about this:

1. Is it OK to do this test for a glmmTMB beta model?
2. If it is possible: how do I interpret these odds ratios? From what I have read, I understand that an odds ratio of 1 indicates no change, so an odds ratio close to 4 would indicate that it is four times more likely to occur; only those comparisons are significant.
3. The note below the output says tests are performed on the log odds ratio scale; does that mean the odds ratios shown are on the logit scale?

Thank you very much! Any ideas, please post... fran


Solution

  • The outcome of a beta regression is bounded between 0 and 1; thus, the predictions on the response scale should also range between 0 and 1 and can be interpreted as "predicted proportions". Therefore, I would probably use the same response scale for pairwise comparisons or contrasts, which makes them easier to interpret. I guess you can change the scale for emmeans (a sketch of that follows the first emmeans output below), but you could also use the ggeffects package to get predictions and contrasts/comparisons. Here is an example, see also my comments in the code:

    library(glmmTMB)
    library(ggeffects)
    data("GasolineYield", package = "betareg")
    GasolineYield$batch <- factor(rep(1:4, each = 8))
    m <- glmmTMB(yield ~ batch + temp, data = GasolineYield, family = beta_family())
    
    emmeans::emmeans(m, specs = pairwise ~ batch, type = "response")
    #> $emmeans
    #>  batch response      SE  df asymp.LCL asymp.UCL
    #>  1       0.2778 0.01879 Inf    0.2425     0.316
    #>  2       0.2126 0.01504 Inf    0.1846     0.244
    #>  3       0.1599 0.01269 Inf    0.1366     0.186
    #>  4       0.0973 0.00974 Inf    0.0798     0.118
    #> 
    #> Confidence level used: 0.95 
    #> Intervals are back-transformed from the logit scale 
    #> 
    #> $contrasts
    #>  contrast        odds.ratio    SE  df null z.ratio p.value
    #>  batch1 / batch2       1.42 0.186 Inf    1   2.712  0.0338
    #>  batch1 / batch3       2.02 0.275 Inf    1   5.173  <.0001
    #>  batch1 / batch4       3.57 0.544 Inf    1   8.350  <.0001
    #>  batch2 / batch3       1.42 0.183 Inf    1   2.707  0.0343
    #>  batch2 / batch4       2.51 0.351 Inf    1   6.556  <.0001
    #>  batch3 / batch4       1.77 0.242 Inf    1   4.148  0.0002
    #> 
    #> P value adjustment: tukey method for comparing a family of 4 estimates 
    #> Tests are performed on the log odds ratio scale
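    
    # (editorial sketch, not part of the original answer) as mentioned above,
    # the scale of the emmeans contrasts can be changed as well: regrid()
    # back-transforms the reference grid to the response scale, so pairs()
    # then returns differences in predicted proportions instead of odds ratios.
    emm_resp <- emmeans::regrid(emmeans::emmeans(m, ~ batch))
    pairs(emm_resp)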
    
    ggemmeans(m, "batch")
    #> # Predicted values of yield
    #> 
    #> batch | Predicted |       95% CI
    #> --------------------------------
    #> 1     |      0.28 | [0.24, 0.32]
    #> 2     |      0.21 | [0.18, 0.24]
    #> 3     |      0.16 | [0.14, 0.19]
    #> 4     |      0.10 | [0.08, 0.12]
    #> 
    #> Adjusted for:
    #> * temp = 332.09
    
    # for glmmTMB models, we need to explicitly provide the variance-covariance
    # matrix in `hypothesis_test()` to get standard errors and confidence intervals
    
    # here we see pairwise comparison on the response scale. Looking at the
    # predicted values in the upper table, we see that the predicted proportion
    # for batch = 1 is 0.28, and for batch = 2 is 0.21. The difference is 7
    # percentage points (0.07). This is what you get in the output below:
    
    hypothesis_test(m, "batch", vcov = vcov(m))
    #> # Pairwise comparisons
    #> 
    #> batch | Contrast |       95% CI |      p
    #> ----------------------------------------
    #> 1-2   |     0.07 | [0.02, 0.11] | 0.007 
    #> 1-3   |     0.12 | [0.07, 0.16] | < .001
    #> 1-4   |     0.18 | [0.14, 0.22] | < .001
    #> 2-3   |     0.05 | [0.01, 0.09] | 0.007 
    #> 2-4   |     0.12 | [0.08, 0.15] | < .001
    #> 3-4   |     0.06 | [0.03, 0.09] | < .001
    #> 
    #> Contrasts are presented on the response scale.
    
    # Here we see pairwise comparison on the OR scale. I'm not quite sure how
    # the odds ratios would translate into a proportion or a "change" in 
    # the proportion, but you see the results are identical to those from
    # emmeans...
    
    hypothesis_test(m, "batch", vcov = vcov(m), scale = "oddsratios")
    #> # Pairwise comparisons
    #> 
    #> batch | Contrast |       95% CI |      p
    #> ----------------------------------------
    #> 1-2   |     1.42 | [1.10, 1.84] | 0.007 
    #> 1-3   |     2.02 | [1.55, 2.64] | < .001
    #> 1-4   |     3.57 | [2.65, 4.81] | < .001
    #> 2-3   |     1.42 | [1.10, 1.83] | 0.007 
    #> 2-4   |     2.51 | [1.90, 3.30] | < .001
    #> 3-4   |     1.77 | [1.35, 2.31] | < .001
    #> 
    #> Contrasts are presented on the odds ratio scale.
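    
    # (editorial sketch, not part of the original answer) as a sanity check,
    # the odds ratios can be reconstructed from the predicted proportions in
    # the $emmeans table above: OR = (p1 / (1 - p1)) / (p2 / (1 - p2)).
    p1 <- 0.2778  # predicted proportion for batch 1
    p2 <- 0.2126  # predicted proportion for batch 2
    (p1 / (1 - p1)) / (p2 / (1 - p2))  # ~1.42, matching the batch1 / batch2 odds ratio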
    

    Created on 2023-11-09 with reprex v2.0.2