I have two hypotheses (A and B)
H0_A: b1 <= 0 vs. H1_A: b1 > 0
H0_B: b2 >= 0 vs. H1_B: b2 < 0
To estimate the coefficients b1 and b2, I ran the regression lm(y ~ x1 + x2).
My question: how can I get the p-value for each coefficient (b1, b2) according to its hypothesis setting, so I can see whether I can reject the null hypothesis?
When I use the summary() function on the regression, the p-values are reported, but I think they only consider the two-sided case that each beta is unequal to zero.
Thank you very much!!
The lm() function defaults to a two-sided alternative hypothesis test. As a cautionary note, you should default to a two-sided alternative unless you have a strong a priori theoretical basis to be interested in only one side. Reproducible examples help the community serve you better. I have included some code below to help extract your p-values; adjust the distribution function as needed.
# Extracting your p-values (two-sided alternative)
mod <- lm(y ~ x1 + x2, data = ...)
summary(mod)$coefficients[ ,"Pr(>|t|)"]
# Adjusting your rejection regions (one-sided alternatives)
output <- summary( lm(y ~ x1 + x2, data = ...) )
t <- coef(output)[ ,3] # extracting the t-values
df <- output$df[2] # residual degrees of freedom (second element of summary()$df)
pt(t, df, lower.tail = ...) # lower.tail = TRUE for H1: b < 0; lower.tail = FALSE for H1: b > 0
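Applied to your specific hypotheses, the one-sided p-value for b1 uses the upper tail (H1_A: b1 > 0) and the one for b2 uses the lower tail (H1_B: b2 < 0). Here is a minimal sketch with simulated data; the data-generating values are made up purely so the example runs end to end.

# Simulated data, chosen only for illustration
set.seed(123)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 - 0.3 * x2 + rnorm(n)

output <- summary(lm(y ~ x1 + x2))
t  <- coef(output)[ , "t value"]  # t-statistics for each coefficient
df <- output$df[2]                # residual degrees of freedom

p_b1 <- pt(t["x1"], df, lower.tail = FALSE)  # H1_A: b1 > 0 (upper tail)
p_b2 <- pt(t["x2"], df, lower.tail = TRUE)   # H1_B: b2 < 0 (lower tail)
c(p_b1, p_b2)

As a sanity check: when the estimate falls on the side your alternative specifies, the one-sided p-value equals half the two-sided "Pr(>|t|)" from summary(); when it falls on the other side, it equals 1 minus half the two-sided value.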