Computational Science — asked by RockedSalad121 on June 19, 2021
I was wondering if anyone could enlighten me about the convergence properties of the truncated Newton method when the Hessian $\nabla^2 f = H$ is not positive definite. In the book ‘Numerical Optimization’ by Nocedal & Wright, the algorithm consists of two loops: an inner loop that computes the (Newton) search direction $p_k$ using the conjugate gradient (CG) method, and an outer loop that computes the step length $\alpha_k$.
The algorithm roughly looks like this:
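In Python-flavored pseudocode (a minimal sketch in the spirit of the book's Algorithm 7.1; the name `truncated_newton`, the dense `hess` callable, and the Armijo backtracking constants are illustrative choices of mine, not the book's exact listing):

import numpy as np

def truncated_newton(f, grad, hess, x0, tol=1e-8, max_outer=100):
    # Line-search Newton-CG sketch. f, grad, hess are callables; a dense
    # Hessian is used here for simplicity, though only Hessian-vector
    # products H @ d are actually needed.
    x = x0.astype(float)
    for k in range(max_outer):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        H = hess(x)
        eta = min(0.5, np.sqrt(gnorm))  # forcing term, eta_k -> 0

        # --- inner loop: CG on H p = -g, truncated on negative curvature ---
        p = np.zeros_like(x)
        r = g.copy()   # residual of H p + g, starting from p = 0
        d = -r         # first CG search direction
        for _ in range(x.size):
            Hd = H @ d
            dHd = d @ Hd
            if dHd <= 0:
                # Negative curvature detected: fall back to steepest
                # descent if this happens on the first CG iteration,
                # otherwise keep the progress made so far.
                if np.allclose(p, 0.0):
                    p = -g
                break
            alpha = (r @ r) / dHd
            p = p + alpha * d
            r_new = r + alpha * Hd
            if np.linalg.norm(r_new) <= eta * gnorm:
                break  # inexact-Newton stopping test satisfied
            beta = (r_new @ r_new) / (r @ r)
            d = -r_new + beta * d
            r = r_new

        # --- outer loop: Armijo backtracking line search ---
        t, c = 1.0, 1e-4
        while f(x + t * p) > f(x) + c * t * (g @ p) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x

The branch on `dHd <= 0` is exactly the safeguard I am asking about: when negative curvature is detected on the very first inner iteration, the search direction collapses to $-\nabla f$.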
Now, as mentioned, since $\lim_{k \rightarrow \infty} \eta_k = 0$, the truncated Newton method converges q-superlinearly if $H$ is positive definite. But if the inner loop is interrupted because $H$ turns out not to be positive definite, CG will not satisfy its stopping criterion. In particular, if negative curvature is detected on the very first inner iteration and $p_k = -\nabla f$, the truncated Newton method effectively reduces to the method of steepest descent, which converges only linearly. Am I therefore right to assume that truncated Newton converges superlinearly only if $H$ stays positive definite for all iterations ($k = 0, 1, 2, \dots$)?
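For reference, the inner stopping criterion and forcing sequence I have in mind are the standard ones from the book,

$\| r_k \| \le \eta_k \, \| \nabla f(x_k) \|, \qquad \eta_k = \min\big( 0.5,\ \sqrt{\| \nabla f(x_k) \|} \big),$

which indeed gives $\lim_{k \rightarrow \infty} \eta_k = 0$ as the iterates approach a stationary point.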