Cross Validated Asked on January 7, 2022
In a recent thread, use of adjusted $R^2$ ($R^2_{adj.}$) is mentioned in the context of model selection, e.g.
The adjustment was invented as a solution to problems caused by variable selection
Question: Is there any justification for using $R^2_{adj.}$ for model selection? That is, does $R^2_{adj.}$ have any optimality properties in the context of model selection?
For example, AIC is an efficient criterion and BIC is a consistent one, but $R^2_{adj.}$ coincides with neither of them, which makes me wonder whether it can be optimal in any other sense.
I would propose six optimality properties.
Overfit Mitigation
What kind of model is overfit? In part, this depends on the model's use case. Suppose we are using a model to test whether a hypothesized factor-level relationship exists. In that case a model which tends to allow spurious relations is overfit.
"The use of an adjusted R2...is an attempt to account for the phenomenon of the R2 automatically and spuriously increasing when extra explanatory variables are added to the model." Wikipedia.
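This "automatic and spurious" increase is easy to demonstrate. The following is a minimal numpy sketch (all variable names and the simulated data are illustrative, not from the original post): it fits OLS with and without a pure-noise regressor and shows that R2 never decreases when a regressor is added, whereas AR2 applies a penalty for the extra parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)          # true relationship uses only x
noise = rng.normal(size=n)              # irrelevant candidate regressor

def r2_and_adj(X, y):
    """Fit OLS with an intercept; return (R2, adjusted R2)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    p = X.shape[1] - 1                   # regressors, excluding the intercept
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return r2, adj

r2_small, adj_small = r2_and_adj(x, y)
r2_big, adj_big = r2_and_adj(np.column_stack([x, noise]), y)

# R2 can only go up (or stay equal) when a regressor is added, even a useless one:
print(r2_big >= r2_small)   # prints True
# AR2 carries a penalty for the extra parameter, so it need not go up:
print(adj_small, adj_big)
```

The inequality in the first print is a mathematical fact about nested least-squares fits; whether AR2 drops for a given noise regressor depends on the draw, which is exactly the point of the adjustment.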
Simplicity and Parsimony
Parsimony is valued on normative and economic rationale. Occam's Razor is an example of a norm, and depending on what we mean by "justification," it might pass or fail.
The economic rationale for simplicity and parsimony is harder to dismiss:
Given two models with equal explanatory power (R2), then, AR2 selects for the simpler and more parsimonious model.
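This selection is mechanical, which a hedged sketch can show directly from the adjustment formula (the R2 value, sample size, and regressor counts below are made-up numbers for illustration):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p regressors (excluding the intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Two hypothetical models with identical explanatory power (R^2 = 0.60)
# fit on the same n = 40 observations, differing only in complexity:
simple = adjusted_r2(0.60, n=40, p=3)
complex_ = adjusted_r2(0.60, n=40, p=8)
print(simple > complex_)  # prints True: AR2 favors the model with fewer regressors
```

Because the penalty factor $(n-1)/(n-p-1)$ grows with $p$, equal $R^2$ always resolves in favor of the smaller model.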
General Shared Understanding
Justification involves shared understanding. Consider a peer-review situation. If the reviewer and the reviewed lack a shared understanding of model selection, questions or rejections may occur.
R2 is an elementary statistical concept, and even those familiar only with elementary statistics generally understand that R2 is gameable and that AR2 is preferred to R2 for the above reasons.
Sure, there may be better choices than AR2, such as AIC and BIC, but if the reviewer is unfamiliar with these, their use may not succeed as a justification. Worse, the reviewer may hold a misunderstanding themselves and demand AIC or BIC when they aren't required; that itself is unjustified.
My limited understanding indicates that AIC is now considered rather arbitrary by many, specifically the 2s in its formula. WAIC, DIC, and LOO-CV have been suggested as preferred alternatives; see here.
I hope by "justified" we don't mean "no better criterion exists," because some better criterion might always exist unbeknownst to us, so that style of justification always fails. Instead, "justified" ought to mean "satisfies the requirement at hand," in my view.
Semi-Efficient Factor Identification
Caveat: I made up this term and I could be using it wrong :)
Basically, if we are interested in identifying true factor relations, we should at least expect each included factor to have p < 0.5, i.e., to be more likely present than absent. AR2 maximization satisfies this: adding a factor with p >= 0.5 will reduce AR2. The match isn't exact, because AR2 in fact penalizes any factor whose coefficient has |t| < 1, which corresponds to a p-value above roughly 0.32 at moderate sample sizes.
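The implicit threshold is a known exact result: adding a regressor raises adjusted R2 if and only if that regressor's t-statistic exceeds 1 in absolute value. A minimal numpy sketch checking this over many random candidate regressors (the simulated data and helper name are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

def fit(X, y):
    """OLS with an intercept; return (adjusted R2, t-stat of the last regressor)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                      # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)             # coefficient covariance
    t_last = beta[-1] / np.sqrt(cov[-1, -1])
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - (resid @ resid) / ss_tot
    p = X.shape[1] - 1
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return adj, t_last

x = rng.normal(size=n)
y = x + rng.normal(size=n)
base_adj, _ = fit(x, y)

# AR2 rises with the extra regressor exactly when its |t| exceeds 1:
for _ in range(200):
    z = rng.normal(size=n)                            # candidate extra regressor
    adj, t = fit(np.column_stack([x, z]), y)
    assert (adj > base_adj) == (abs(t) > 1)
print("rule verified")
```

At large degrees of freedom, |t| = 1 corresponds to a two-sided p-value of about 0.32, which is where the "0.35-ish" intuition comes from.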
It's true that AIC penalizes more in general, but I'm not sure that's a good thing if the goal is to identify all observed features that have an identifiable relation, at least directionally, in a given data set.
Robustness to Sample Size Change
In the comments of this post, Scortchi - Reinstate Monica notes that it "makes no sense to compare likelihoods (or therefore AICs) of models fitted on different nos observations." In contrast, R2 and AR2 are absolute measures that can still be compared when the number of samples changes.
This might be useful in the case of a questionnaire that includes some optional questions and partial responses. It's of course important to be mindful of issues like response bias in such cases.
Explanatory Utility
Here, we are told that "R2 and AIC are answering two different questions...R2 is saying something to the effect of how well your model explains the observed data...AIC, on the other hand, is trying to explain how well the model will predict on new data."
So if the use case is non-predictive, such as in the case of theory-driven, factor-level hypothesis testing, AIC may be considered inappropriate.
Answered by John Vandivier on January 7, 2022
I don't know if $R^2_{\text{adj.}}$ has any optimality properties for model selection, but it is surely taught (or at least mentioned) in that context. One reason might be that most students have met $R^2$ early on, so there is something to build on.
One example is the following exam paper from the University of Oslo (see Problem 1). The text used in that course, Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models (second edition) by Eric Vittinghoff, David V. Glidden, Stephen C. Shiboski, and Charles E. McCulloch, mentions $R^2_{\text{adj.}}$ early on in its chapter 10 on variable selection (as penalizing less than AIC, for example), but neither it nor AIC is mentioned in the summary/recommendations in Section 10.5.
So it is maybe mostly used didactically, as an introduction to the problems of model selection, and not because of any optimality properties.
Answered by kjetil b halvorsen on January 7, 2022
Answer for part 1:
Answered by Oren Ben-Harim on January 7, 2022