Typically in statistical software, the default confidence level is 95%. The higher the better, I suppose, but this is still not a hard rule, right? In Stock and Watson's paper (https://www.aeaweb.org/articles/pdf/doi/10.1257/jep.15.4.101), they use a 66% confidence level. They do not explain why 66 rather than 95, but I guess it makes some IRFs (which would be insignificant at 95%) appear significant. For example, in their original paper the response of inflation to an interest rate shock appears significant at the 66% level, while my estimation at the 95% level shows it is not significantly different from zero. I am not criticizing this paper or anything like that, but it appears the choice of confidence level is at the researcher's discretion, whether arbitrary or purposeful. If this is not the case, what would make one choose a confidence level of 66% and not 95%, or even 99%?
Yes, the confidence level always depends on the researcher's choice (at least in principle). There is no special reason to use the $90\%$, $95\%$, or $99\%$ level either; it is more or less just convention. For example, in physics people use three-sigma or five-sigma significance, which corresponds to confidence levels of $99.7\%$ and $99.99997\%$ (and even higher ones can be reported).
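As a minimal sketch of where these numbers come from, you can map a $\pm k$ standard deviation band to its coverage under a normal distribution (the specific values of $k$ below are just illustrative):

```python
from scipy.stats import norm

# Two-sided coverage of a +/- k standard deviation band under normality:
# P(|Z| <= k) = Phi(k) - Phi(-k)
for k in [1, 1.96, 2, 3, 5]:
    two_sided = norm.cdf(k) - norm.cdf(-k)
    one_sided = norm.cdf(k)
    print(f"k = {k}: two-sided {two_sided:.7f}, one-sided {one_sided:.7f}")

# k = 1 gives roughly 0.683 two-sided, i.e. the "about 66%" band;
# k = 1.96 gives the familiar 95% interval;
# k = 3 gives about 99.7%;
# k = 5 gives about 0.9999994 two-sided (particle physics usually quotes
# the one-sided figure, about 99.99997%).
```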
However, that being said, once the literature settles on a particular convention you will be fighting an uphill battle if you deviate from it. For example, you would have a hard time publishing results in physics that are not at least $3\sigma$ or $5\sigma$ significant without a very good explanation. In economics, people usually settle for the $95\%$ level. Just because you can pick any significance/confidence level does not mean others will accept it.
They actually do explain why they use a $66\%$ confidence interval. According to Stock and Watson:
Also plotted are $\pm 1$ standard error bands, which yield an approximate 66 percent confidence interval for each of the impulse responses.
When it comes to VAR impulse responses (and forecasting in general), it is conventional to plot $\pm 1$ standard error bands. This is also often done when people estimate DSGE models, so it is more or less standard practice. It is ultimately an arbitrary choice on which people somehow settled.
Also, note that reporting $\pm 1$ standard error bands does not mean the variable was significant at only the $66\%$ confidence level. For example, $\hat{\beta}=30$ with $SE(\hat{\beta})=1$ would have a $t$-statistic of $30$ and would be significant at any conventional level used in economics (in any reasonably sized sample). That does not prevent you from constructing $\pm 1$ standard error bands (i.e. $30 \pm 1$).
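A minimal sketch of this point, using the purely hypothetical estimate and standard error from the example above:

```python
# Hypothetical estimate and standard error from the example above
beta_hat, se = 30.0, 1.0

t_stat = beta_hat / se                                 # 30: far beyond any conventional critical value
band_1se = (beta_hat - se, beta_hat + se)              # +/- 1 SE band, roughly 68% coverage
ci_95 = (beta_hat - 1.96 * se, beta_hat + 1.96 * se)   # conventional 95% interval

print(f"t-statistic:   {t_stat:.1f}")
print(f"+/- 1 SE band: {band_1se}")   # (29.0, 31.0)
print(f"95% CI:        {ci_95}")      # (28.04, 31.96)

# Both intervals exclude zero by a wide margin, so plotting the narrower
# +/- 1 SE band says nothing about whether the estimate would also be
# significant at the 95% level.
```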
Correct answer by 1muflon1 on April 2, 2021