
How to test if NonlinearModelFit parameters are different from each other

Mathematica: Asked by Axel on December 3, 2020


I’m using NonlinearModelFit to fit 5 parameters to some data points, and "ParameterConfidenceIntervalTable" gives me the estimates with their CIs. Now I want to know whether the estimated values of the parameters are statistically different from each other. Obviously they are if the CIs don't overlap, but unfortunately they can also be significantly different when the CIs overlap slightly (see [here](https://statisticsbyjim.com/hypothesis-testing/confidence-intervals-compare-means/)). So the question is:
What is the proper way to test whether the parameter estimates are significantly different?
Thanks

One Answer

The advice from "statisticsbyjim.com" (not at all associated with me) applies only if you have independent estimators. But if the estimators are not independent (which is the case most of the time in a regression, linear or nonlinear), then you need to account for that lack of independence.

If the estimators from NonlinearModelFit have approximately a normal distribution, then you can use the estimated covariance matrix to test the equality of parameters.

Taking an example from the NonlinearModelFit documentation:

(* simulate data as in the documentation example *)
data = BlockRandom[SeedRandom[12345];
   Table[{x, Exp[-2.3 x/(11 + .4 x)] + RandomReal[{-.5, .5}]}, {x, RandomReal[{1, 15}, 20]}]];
nlm = NonlinearModelFit[data, Exp[a x/(b + c x)], {a, b, c}, x];
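
(For reference, the interval table mentioned in the question comes straight from the fit object:)

nlm["ParameterConfidenceIntervalTable"]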

Now grab the parameter estimates and the covariance matrix:

estimates = {a, b, c} /. nlm["BestFitParameters"]
cov = nlm["CovarianceMatrix"]

Construct the "z" statistic for each of the 3 possible pairwise comparisons, where the variance of a difference of estimators is $\text{Var}(\hat\theta_i)+\text{Var}(\hat\theta_j)-2\,\text{Cov}(\hat\theta_i,\hat\theta_j)$:

zab = (estimates[[1]] - estimates[[2]])/Sqrt[cov[[1, 1]] + cov[[2, 2]] - 2 cov[[1, 2]]]
(* -28.276 *)
zac = (estimates[[1]] - estimates[[3]])/Sqrt[cov[[1, 1]] + cov[[3, 3]] - 2 cov[[1, 3]]]
(* -0.422041 *)
zbc = (estimates[[2]] - estimates[[3]])/Sqrt[cov[[2, 2]] + cov[[3, 3]] - 2 cov[[2, 3]]]
(* 1.13192 *)

If one ignores any adjustment for multiple comparisons, then one rejects the hypothesis of equality whenever the absolute value of the resulting z-statistic exceeds 1.96 (the two-sided 5% critical value of the standard normal). If one still ignores an adjustment for multiple comparisons but wants to be more conservative, then using the following $t$-value rather than 1.96 is appropriate:

(* Error degrees of freedom *)
df = nlm["ANOVATableDegreesOfFreedom"][[2]];

(* t-value *)
tValue = InverseCDF[StudentTDistribution[df], 0.975]
(* 2.10982 *)
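
If you prefer p-values to a fixed cutoff, the same z statistics convert directly (not in the original answer; this is just the standard two-sided tail computation, using either the normal approximation or the more conservative $t$ reference distribution above):

(* two-sided p-values using the normal approximation *)
2 (1 - CDF[NormalDistribution[], Abs[#]]) & /@ {zab, zac, zbc}

(* two-sided p-values using the t distribution with the error degrees of freedom *)
2 (1 - CDF[StudentTDistribution[df], Abs[#]]) & /@ {zab, zac, zbc}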

An alternative is to perform a bootstrap and compute confidence intervals for the differences or ratios of the parameters.
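
For concreteness, here is a minimal sketch of such a bootstrap (my addition, not part of the original answer): case resampling with replacement, refitting the model, and taking percentile intervals for the pairwise differences. The 1000 resamples and the use of the original estimates as starting values are illustrative choices, and occasional resamples may produce convergence warnings.

(* refit the model to 1000 resampled data sets, collecting the parameter estimates *)
boot = Table[
   {a, b, c} /. NonlinearModelFit[RandomChoice[data, Length[data]],
       Exp[a x/(b + c x)], Transpose[{{a, b, c}, estimates}], x]["BestFitParameters"],
   {1000}];

(* 95% percentile intervals for the pairwise differences *)
Quantile[boot[[All, 1]] - boot[[All, 2]], {0.025, 0.975}] (* a - b *)
Quantile[boot[[All, 1]] - boot[[All, 3]], {0.025, 0.975}] (* a - c *)
Quantile[boot[[All, 2]] - boot[[All, 3]], {0.025, 0.975}] (* b - c *)

An interval that excludes 0 points to a significant difference between that pair of parameters.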

Correct answer by JimB on December 3, 2020
