Cross Validated question, asked by RoroMario on December 9, 2020
I have built the following models:
library(lme4)  # lmer() comes from the lme4 package

full <- lmer(DV ~ A * B + (1 | speaker), data, REML = FALSE)
A <- lmer(DV ~ A + A:B + (1 | speaker), data, REML = FALSE)
B <- lmer(DV ~ B + A:B + (1 | speaker), data, REML = FALSE)
interaction <- lmer(DV ~ A + B + (1 | speaker), data, REML = FALSE)
I use anova() to compare the full model to each of the other models:
anova(full, A)
anova(full, B)
anova(full, interaction)
The first two comparisons produced results in which both the degrees of freedom and the chi-square statistic were zero.
However, I have also tried comparing a null model with models that include only A, only B, or only A:B:
null <- lmer(DV~ 1 + (1|speaker), data, REML=FALSE)
AA <- lmer(DV~ A + (1|speaker), data, REML=FALSE)
BB <- lmer(DV~ B + (1|speaker), data, REML=FALSE)
AB <- lmer(DV~ A:B + (1|speaker), data, REML=FALSE)
All of these comparisons produced reasonable results (i.e. non-zero df, and all of them were significant).
I have looked online and found this post: https://www.researchgate.net/post/What_is_a_Likelihood_ratio_test_with_0_degree_of_freedom
My guess is that, in my full model, the interaction term may be able to account for everything without the main effects (A and B).
I have a few questions. Suppose I instead build the following models:
base <- lmer(DV~ A+B + (1|speaker), data, REML=FALSE)
A <- lmer(DV~ A + (1|speaker), data, REML=FALSE)
B <- lmer(DV~ B + (1|speaker), data, REML=FALSE)
interaction <- lmer(DV~ A*B + (1|speaker), data, REML=FALSE)
Is it OK to report the comparisons between the base model and A, B, and interaction, respectively?
Please find the data file and the R markdown document here: dropbox.com/sh/88m8h6blow2xbn5/AABiNccsUlu3AlfPyamQP4n_a?dl=0
I also asked a question about the procedures I used in the R script in this post: "R lmer model: add factors or reduce factors".
I’d be most grateful if you could help me please. Thank you!
This happens because the models full, A and B are in fact the same model; they are just parameterised differently. To see this, inspect the fixed-effect estimates for the full model:
            Estimate Std. Error t value
(Intercept)  6.03977    0.34949  17.282
AT2         -0.55051    0.07597  -7.246
AT3         -1.16472    0.07597 -15.331
AT4          0.48228    0.07597   6.348
BS          -0.64024    0.07597  -8.427
AT2:BS       0.35379    0.10744   3.293
AT3:BS       0.47244    0.10824   4.365
AT4:BS       0.05247    0.10744   0.488
In model A, where the main effect of the variable B has been removed, we obtain:
            Estimate Std. Error t value
(Intercept)  6.03977    0.34949  17.282
AT2         -0.55051    0.07597  -7.246
AT3         -1.16472    0.07597 -15.331
AT4          0.48228    0.07597   6.348
AT1:BS      -0.64024    0.07597  -8.427
AT2:BS      -0.28645    0.07597  -3.770
AT3:BS      -0.16781    0.07710  -2.177
AT4:BS      -0.58777    0.07597  -7.737
We immediately see that the estimates for the intercept and for AT2 to AT4 are the same. The estimate for AT1:BS in the second model is identical to the estimate for the main effect of B in the full model (because the second model does not include the main effect of B). Then, for the same reason, the remaining interaction terms in the second model are the sum of the main effect of B in the full model and the corresponding interaction terms:
> -0.64024 + 0.35379
[1] -0.28645
> -0.64024 + 0.47244
[1] -0.1678
> -0.64024 + 0.05247
[1] -0.58777
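A quick way to confirm that full and A are indeed the same model under a different parameterisation is to compare their log-likelihoods and fitted values. This is only a sketch, assuming the model objects defined in the question are available:

# Sketch (assumes the 'full' and 'A' objects from the question):
logLik(full)   # log-likelihood and df of the full model
logLik(A)      # should be identical to logLik(full)
all.equal(fitted(full), fitted(A))   # TRUE (up to numerical tolerance)
# Because the fits are identical, anova(full, A) has 0 df and a chi-square of 0.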
I think it is good general advice to always include both main effects in a model that includes their interaction; this type of problem will then not occur.
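If the aim is to test each main effect and the interaction, one conventional sequence of nested comparisons is sketched below; the model names are illustrative and only the variables from the question are assumed:

library(lme4)

m0  <- lmer(DV ~ 1         + (1 | speaker), data, REML = FALSE)  # intercept only
mA  <- lmer(DV ~ A         + (1 | speaker), data, REML = FALSE)  # add A
mAB <- lmer(DV ~ A + B     + (1 | speaker), data, REML = FALSE)  # add B
mI  <- lmer(DV ~ A * B     + (1 | speaker), data, REML = FALSE)  # add the A:B interaction

anova(m0, mA)    # evidence for the main effect of A
anova(mA, mAB)   # evidence for the main effect of B, given A
anova(mAB, mI)   # evidence for the interaction, given both main effects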
Answered by Robert Long on December 9, 2020