
pROC median Sensitivity vs. manual Sensitivity calculation - different Results


Calculating the sensitivity manually from the confusion matrix gives the value 0.853.

  • TN = 16
  • FP = 7
  • FN = 5
  • TP = 29

The output of pROC is different (median = 0.8235).

y_test = c(1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1,
       0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0,
       0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0)

y_pred_prob = c(0.63069148, 0.65580015, 0.9478634 , 0.94471701, 0.24756774,
       0.51969906, 0.26881201, 0.6722361 , 0.30275069, 0.61676645,
       0.76116789, 0.90867332, 0.31525658, 0.10681422, 0.6890589 ,
       0.25185641, 0.54820684, 0.7175465 , 0.57194733, 0.71304872,
       0.98805141, 0.92829077, 0.38150015, 0.97653216, 0.96036858,
       0.75878699, 0.95466371, 0.52292342, 0.28296724, 0.5660834 ,
       0.91581461, 0.49574317, 0.79025422, 0.14303487, 0.66885536,
       0.07660444, 0.10342033, 0.53661914, 0.04701796, 0.83313871,
       0.37766607, 0.89157993, 0.47731778, 0.62640482, 0.47664294,
       0.0928437 , 0.13605622, 0.2561323 , 0.95572329, 0.49051571,
       0.49267652, 0.92600581, 0.48464618, 0.96006108, 0.01548211,
       0.56057243, 0.82257937)

library(pROC)

set.seed(99)
boot = 2000
rocobj <- roc(y_test, y_pred_prob)
print(ci.thresholds(rocobj, .95, thresholds = 0.5, method = 'bootstrap', boot.n = boot))

OUT:    95% CI (2000 stratified bootstrap replicates):
     thresholds sp.low sp.median sp.high se.low se.median se.high
      0.5002624 0.5652    0.7391   0.913 0.6765    0.8235  0.9412

Is this a result of the bootstrapping method, i.e. because the reported value is a median?


Solution

  • What threshold did you use?

    You need to be careful when you report and analyze the results of a confusion matrix: when the predictions are numeric, the table depends entirely on the threshold at which it was generated. Given the counts in yours, I will assume you used a threshold of 0.495 or something close to it, which reproduces your TP = 29 and FN = 5, and therefore your sensitivity:

    > table(y_test, y_pred_prob > 0.495)
          
    y_test FALSE TRUE
         0    17    6
         1     5   29
    
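    Note that the cutoff is not unique: any value strictly between the two neighbouring predicted probabilities (0.49267652 and 0.49574317 in this data) splits the observations identically, so it yields the same table. A quick check, reusing `y_test` and `y_pred_prob` from the question (0.493 is just another arbitrary cutoff in the same gap):

```r
# Two cutoffs in the same gap between sorted predictions
# produce identical confusion matrices
identical(table(y_test, y_pred_prob > 0.493),
          table(y_test, y_pred_prob > 0.495))  # TRUE
```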

    How to get the empirical sensitivity and specificity from pROC?

    Now that we have a threshold to work with, we can extract the data for this threshold from pROC with the coords function:

    > coords(rocobj, 0.495, "threshold", transpose = FALSE)
      threshold specificity sensitivity
    1     0.495   0.7391304   0.8529412
    

    This is exactly the sensitivity you calculated.
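    As a double check, the same two numbers can be recomputed directly from the table counts (reusing `y_test` and `y_pred_prob` from the question):

```r
# Extract the four cells by name and recompute the empirical rates
tab <- table(y_test, y_pred_prob > 0.495)
TN <- tab["0", "FALSE"]; FP <- tab["0", "TRUE"]
FN <- tab["1", "FALSE"]; TP <- tab["1", "TRUE"]
c(specificity = TN / (TN + FP),   # 17/23 = 0.7391
  sensitivity = TP / (TP + FN))   # 29/34 = 0.8529
```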

    What about bootstrapping?

    As you suspected, the bootstrapping used to calculate the confidence intervals is a stochastic process, so the median of the resampled curves will generally differ from the empirical value.

    However, for a median with 2000 bootstrap replicates we get pretty close:

    > set.seed(99)
    > print(ci.thresholds(rocobj,.95, thresholds =  0.495, method = 'bootstrap',boot.n = boot))
    
    95% CI (2000 stratified bootstrap replicates):
     thresholds sp.low sp.median sp.high se.low se.median se.high
          0.495 0.5652    0.7391   0.913 0.7353    0.8529  0.9706
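For intuition, here is a rough sketch of what a stratified bootstrap does for the sensitivity. This is a simplification for illustration only, not pROC's actual code; it reuses `y_test` and `y_pred_prob` from the question and fixes the cutoff at 0.495. Because the bootstrap is stratified, cases and controls are resampled separately, and the sensitivity at a fixed cutoff depends only on the cases:

```r
set.seed(99)
pos <- y_pred_prob[y_test == 1]       # predicted probabilities of the 34 cases
se_boot <- replicate(2000, {
  p <- sample(pos, replace = TRUE)    # one stratified resample of the cases
  mean(p > 0.495)                     # sensitivity at the fixed cutoff
})
median(se_boot)  # typically very close to the empirical 29/34 = 0.8529
```

Each replicate's sensitivity scatters around the empirical value, so the median of 2000 replicates lands near it, while the quantiles of `se_boot` give the confidence interval.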