I'm calculating confidence intervals for the dataset below using two methods: (1) R code I found online and (2) a manual calculation.
R version:
data <- c(187, 181, 178, 188, 173, 179, 177, 172, 187, 193, 189, 183, 169, 179, 173, 175, 174, 183, 175, 172, 167, 189, 183)
x <- mean(data)
n <- length(data)
s <- sd(data)
margin <- qt(0.8,df=n-1)*s/sqrt(n)
low <- x - margin
high <- x + margin
low
high
The results are: (178.109, 180.6736)
Manually:
I rearrange the one-sample Student's t-test formula:
t = (x - u)/(s/sqrt(n))
u = x ± t*(s/sqrt(n))
u = 179.3913 ± 1.32124*(7.165188/4.795832)
where x is the sample mean, u the theoretical mean, s the standard deviation, n the number of observations, and t the critical value at the 80% level.
The results are: (177.4173, 181.3653)
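As a quick sanity check, here is the same manual calculation in R (a short sketch reusing x, s, and n from the script above):
t_crit <- qt(0.9, df = n - 1)        # 1.32124, the t-table value I used
x + c(-1, 1) * t_crit * s / sqrt(n)  # 177.4173 181.3653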
The CIs calculated with the two methods differ. However, the CI that R produces at 90% is identical to the one I calculated manually at 80%. I suspect this comes down to differences in the underlying critical value tables, but can the differences really be that big? An 80% and a 90% CI identical depending on the method? Any explanations? Thanks!
When alpha == 0.2,
qt(p = 1 - alpha / 2, df = n - 1)
gives the critical value for the 80% CI, and the resulting CI is (177.4173, 181.3653). You, however, used:
qt(0.8, df = n - 1)
to calculate the margin, resulting in (178.109, 180.6736). That is actually the 60% CI, since solving 0.8 = 1 - alpha / 2 yields alpha = 0.4, and (1 - alpha) * 100% = 60%. You can check this by running:
t.test(data, conf.level = 0.6)
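To see the equivalence directly (a quick sketch; n is the sample size from the question):
alpha <- 0.4                   # the level implied by qt(0.8, ...)
qt(0.8, df = n - 1)            # ~0.858, the critical value you actually used
qt(1 - alpha / 2, df = n - 1)  # identical, i.e. a 60% CI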
When alpha == 0.1, the 90% CI is (176.8258, 181.9568), which you can check by running:
t.test(data, conf.level = 0.9)
The critical value corresponding to the 90% CI is:
qt(1 - 0.1 / 2, df = n - 1)
which equals 1.717144. You can also find this value in a t-table: for df = 22, t_0.05 = 1.717.
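Putting this together, here is a small helper (a sketch; t_ci is a made-up name, not a base R function) that computes the two-sided t-based CI at any confidence level:
t_ci <- function(x, conf.level = 0.95) {
  # alpha is split between the two tails, so the quantile passed
  # to qt() must be 1 - alpha / 2, not conf.level itself
  alpha <- 1 - conf.level
  n <- length(x)
  margin <- qt(1 - alpha / 2, df = n - 1) * sd(x) / sqrt(n)
  mean(x) + c(-1, 1) * margin
}
t_ci(data, 0.8)  # 177.4173 181.3653, the manual 80% CI
t_ci(data, 0.9)  # 176.8258 181.9568, matching t.test(data, conf.level = 0.9)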