Cross Validated — Asked on December 29, 2021
I have a rather complex non-linear mixed model, intended to model exponentially decaying responses to transient events in animal behaviour experiments. In addition, I've used latent variables to model a non-Gaussian $\text{AR}(1)$ residual-error process, because serial correlation is a serious issue in my data: the responses I'm interested in are short in duration. I've implemented this in TMB in R.
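To make the setup concrete, here is a minimal NumPy sketch of the kind of data-generating process I have in mind — one exponentially decaying response with $\text{AR}(1)$ residuals. All parameter values are made up for illustration, and this is a Gaussian stand-in for my actual (non-Gaussian, TMB) model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, for illustration only.
b0, b1, lam = 2.0, 3.0, 0.5   # baseline, response amplitude, decay rate
rho, sigma = 0.6, 0.3         # AR(1) coefficient, innovation sd
n = 200
t = np.arange(n, dtype=float)

def mean_response(t, b0, b1, lam):
    # Exponentially decaying response to a transient event at t = 0.
    return b0 + b1 * np.exp(-lam * t)

# AR(1) residual process: e[i] = rho * e[i-1] + innovation.
e = np.empty(n)
e[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))  # stationary start
for i in range(1, n):
    e[i] = rho * e[i - 1] + rng.normal(0.0, sigma)

y = mean_response(t, b0, b1, lam) + e
```

The difficulty is that, over a short window, a decaying exponential and a correlated noise process can look very similar, which is why the likelihood struggles to separate them.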
When I tested this on simulated data, the model would never fit correctly: variation from the exponential response was consistently attributed to the $\text{AR}(1)$ process instead.
Assuming that serial correlation biases only the residual-error variance estimates, not the parameter estimates $\beta$ themselves, I tried fitting the model in two steps:
1. Fit the initial mixed model, assuming no $\text{AR}(1)$ process is present. As expected, this produced accurate $\beta$ estimates but, compared with my simulation parameters, erroneous residual-error estimates.
2. Refit the model with the $\beta$ parameters and the random effects held fixed, estimating only the residual-error and $\text{AR}(1)$ terms.
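Stripped of the mixed-model machinery, the two-step scheme amounts to the following NumPy/SciPy sketch (one simulated dataset, hypothetical parameter values; the real fit is done in TMB):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def mean_response(t, b0, b1, lam):
    return b0 + b1 * np.exp(-lam * t)

# Simulate one dataset: exponential decay plus AR(1) residuals.
b0, b1, lam, rho, sigma, n = 2.0, 3.0, 0.5, 0.6, 0.3, 200
t = np.arange(n, dtype=float)
e = np.empty(n)
e[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))
for i in range(1, n):
    e[i] = rho * e[i - 1] + rng.normal(0.0, sigma)
y = mean_response(t, b0, b1, lam) + e

# Step 1: fit the mean model as if the residuals were independent.
# Ignoring the correlation leaves the point estimates consistent,
# just less efficient.
beta_hat, _ = curve_fit(mean_response, t, y, p0=(1.0, 1.0, 0.1))

# Step 2: hold beta fixed and estimate the AR(1) terms from the residuals.
r = y - mean_response(t, *beta_hat)
rho_hat = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])  # lag-1 autoregression
sigma_hat = np.std(r[1:] - rho_hat * r[:-1], ddof=1)      # innovation sd
```

In this simplified Gaussian setting the second step reduces to a lag-1 regression on the stage-one residuals; in my actual model it is the same TMB objective with $\beta$ and the random effects mapped to constants.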
Using this process, the model consistently recovered the correct $\beta$ and $\text{AR}(1)$ parameters when tested on simulated data that met its assumptions. I then used parametric bootstrapping to generate confidence intervals and perform hypothesis tests.
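In the same toy setting, the parametric bootstrap looks roughly like this — simulate from the fitted model, refit each replicate with the same two-step procedure, and take percentile intervals (again a sketch with made-up values, not my TMB implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def mean_response(t, b0, b1, lam):
    return b0 + b1 * np.exp(-lam * t)

def simulate(b0, b1, lam, rho, sigma, n, rng):
    # Exponential decay plus stationary AR(1) residuals.
    t = np.arange(n, dtype=float)
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))
    for i in range(1, n):
        e[i] = rho * e[i - 1] + rng.normal(0.0, sigma)
    return t, mean_response(t, b0, b1, lam) + e

def two_step_fit(t, y, p0):
    # Step 1: mean model ignoring correlation; step 2: AR(1) from residuals.
    beta_hat, _ = curve_fit(mean_response, t, y, p0=p0, maxfev=5000)
    r = y - mean_response(t, *beta_hat)
    rho_hat = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
    sigma_hat = np.std(r[1:] - rho_hat * r[:-1], ddof=1)
    return np.concatenate([beta_hat, [rho_hat, sigma_hat]])

# Fit the "observed" (here: one simulated) dataset.
t, y = simulate(2.0, 3.0, 0.5, 0.6, 0.3, 200, rng)
theta_hat = two_step_fit(t, y, p0=(1.0, 1.0, 0.1))

# Parametric bootstrap: simulate from the fitted model, refit each replicate
# (warm-started at the point estimate for stability).
B = 200
boot = np.empty((B, 5))
for b in range(B):
    tb, yb = simulate(*theta_hat[:3], theta_hat[3], theta_hat[4], 200, rng)
    boot[b] = two_step_fit(tb, yb, p0=theta_hat[:3])

# Percentile 95% confidence intervals for all five parameters.
ci = np.percentile(boot, [2.5, 97.5], axis=0)
```

Because the bootstrap replicates are generated with the $\text{AR}(1)$ structure included and refit with the full two-step procedure, the intervals should at least reflect the sampling variability of the whole pipeline, not just of the second stage.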
So this feels like a very hacky way to make a model fit… but it matches my simulation parameters well. While the results look right, I was wondering whether there are theoretical issues I'm overlooking.
My questions are…