I have estimated a time series model including multiplicative indicator (dummy) variables in R. The model looks like this:
dynlm(returns.ts[,1] ~ 1 + Dummy.ts + Mkt.Rf + Mkt.Rf:Dummy.ts + SMB + SMB:Dummy.ts + HML + HML:Dummy.ts + RMW + RMW:Dummy.ts + CMA + CMA:Dummy.ts)
Dummy.ts is an indicator variable marking bull and bear periods on the stock market, coded 0 during bull markets and 1 during bear markets. If I have understood correctly, the intercept by itself is the bull-market intercept, and the intercept plus the Dummy.ts coefficient is the bear-market intercept.
Now I would like to test whether the intercept plus the Dummy.ts coefficient is significant. I do not want to perform an F-test or LR test; I only want to add the coefficients together in order to test whether the intercept is significant in the bear period. Is this possible? How would this be done in R? Is there a standardized way? Is it possible to use Newey-West standard errors?
Thank you.
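For concreteness, here is a minimal sketch of the "add the coefficients together" approach done by hand with base R. The data and variable names (Dummy, Mkt.Rf) are simulated placeholders standing in for the real series, and plain lm() stands in for dynlm(); the same arithmetic applies to either fit. For Newey-West standard errors, the vcov(fit) line could be swapped for sandwich::NeweyWest(fit), assuming the sandwich package is installed.

```r
# Simulated stand-ins for the real returns series (hypothetical names)
set.seed(1)
n <- 200
Dummy  <- rbinom(n, 1, 0.3)   # 0 = bull, 1 = bear
Mkt.Rf <- rnorm(n)
returns <- 0.5 + 0.4 * Dummy + 0.9 * Mkt.Rf + rnorm(n)

fit <- lm(returns ~ 1 + Dummy + Mkt.Rf + Mkt.Rf:Dummy)

b <- coef(fit)
V <- vcov(fit)   # swap in sandwich::NeweyWest(fit) for HAC standard errors

# Bear-period intercept = intercept + dummy coefficient
est <- b["(Intercept)"] + b["Dummy"]

# Standard error of the sum: Var(b0 + b1) = V00 + V11 + 2*V01
se <- sqrt(V["(Intercept)", "(Intercept)"] + V["Dummy", "Dummy"] +
           2 * V["(Intercept)", "Dummy"])

t.stat <- est / se
p.val  <- 2 * pt(-abs(t.stat), df = fit$df.residual)
```

The same test can be run in one call with car::linearHypothesis(fit, "(Intercept) + Dummy = 0", vcov = sandwich::NeweyWest(fit)), again assuming those packages are available.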
What about building a simpler nested model without the new 'Dummy.ts' variable and calling anova() on your two models?
mod0 <- dynlm(returns ~ 1 + Mkt.Rf + ..., data = datf)
mod1 <- dynlm(returns ~ 1 + Dummy.ts + Mkt.Rf + ..., data = datf)
anova(mod0, mod1)
The last column will give you a Pr(>F) value that you can compare to your p-value threshold (~.05) to determine whether your new predictor is significant. Keep in mind the models have to be nested, meaning all of the predictors must be conserved except for the new one being tested.
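A runnable sketch of that comparison, using simulated data and plain lm() in place of dynlm() (the variable names here are placeholders, not the real series):

```r
# Simulated stand-ins for the real data (hypothetical names)
set.seed(42)
n <- 200
Dummy.ts <- rbinom(n, 1, 0.3)   # 0 = bull, 1 = bear
Mkt.Rf   <- rnorm(n)
returns  <- 0.2 + 1.0 * Dummy.ts + Mkt.Rf + rnorm(n)
datf <- data.frame(returns, Dummy.ts, Mkt.Rf)

mod0 <- lm(returns ~ 1 + Mkt.Rf, data = datf)              # restricted model
mod1 <- lm(returns ~ 1 + Dummy.ts + Mkt.Rf, data = datf)   # adds Dummy.ts

cmp <- anova(mod0, mod1)   # F test of the nested restriction
p.dummy <- cmp$`Pr(>F)`[2] # p-value for adding Dummy.ts
```

Note that anova() on two lm (or dynlm) fits reports a standard F test; to get a chi-squared comparison instead, the test statistic can be requested with anova(mod0, mod1, test = "Chisq").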