TransWikia.com

Deriving the Euler equation from a Continuous Time Dynamic Programming Problem (HJB)

Economics Asked on July 18, 2021

Solving for the Euler equation in discrete time is fairly straightforward using the Benveniste-Scheinkman theorem. However, for the following standard Ramsey model:
$$\max \int_{0}^{\infty} e^{-rt}\, u(c(t))\, dt$$
subject to:
$$u(c(t))=\ln(c(t))$$
$$\dot{k}(t)=Ak(t)^{\theta}-\delta k(t)-c(t)$$

the problem is simple if I use the Hamiltonian. However, if I write the Hamilton-Jacobi-Bellman equation for this:
$$rV(k(t))=\ln(c(t))+V'(k(t))\dot{k}(t)$$

I have no clue how to get started. How would one go about solving for the Euler Equation using the corresponding HJB equation?

One Answer

As already commented, the equation you probably meant is $$ \rho V(k)= \sup_c \{\, u(c) + V'(k)\,( f(k) -\delta k -c ) \,\}. $$ I have never seen this equation called the HJB equation (probably missing a basic reference on my part). I'll call it the "dynamic programming PDE".

What you're really asking about is the connection between two approaches to solving control problems: Calculus of Variations/Optimal Control versus Dynamic Programming. Optimal Control (Pontryagin's Maximum Principle) is a first-order perturbation argument, and the Dynamic Programming Principle is a backward induction argument.

"Euler equation" arises from a first-order perturbation argument. (In continuous-time, it's a classical Calculus of Variation equation. In discrete time, I've only heard it used in economics, describing intertemporal consumption smoothing, but it's a perturbation argument just the same.)

In continuous time, a first-order perturbation of the optimal path means a perturbation along the entire path. In contrast, in discrete time it suffices to perturb a single period, so one can obtain the Euler equation simply by differentiating the Bellman equation (under, e.g., the Benveniste-Scheinkman assumptions that ensure differentiability of the value function).
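
Spelled out, the discrete-time argument runs as follows (a standard sketch, written here for a generic illustrative model with transition $k_{t+1} = f(k_t) - c_t$, which is not exactly the model in the question): the Bellman equation is
$$ V(k) = \max_c \{\, u(c) + \beta V(f(k) - c) \,\}, $$
the first-order condition is $u'(c) = \beta V'(k')$, and the Benveniste-Scheinkman (envelope) condition is
$$ V'(k) = \beta V'(k')\, f'(k) = u'(c)\, f'(k). $$
Shifting the envelope condition one period forward and combining it with the first-order condition gives the Euler equation
$$ u'(c_t) = \beta\, u'(c_{t+1})\, f'(k_{t+1}). $$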

In continuous time, I don't believe the answer is as trivial. There is, however, a classical connection between Optimal Control and Dynamic Programming via the method of characteristics. Part of that connection is that, if the value function is sufficiently smooth, then the characteristic equations of the dynamic programming PDE give Pontryagin's Maximum Principle.
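
For reference, for a first-order PDE $F(x, u, u') = 0$, writing $p = u'$, the standard characteristic ODEs are
$$ \dot{x} = F_p, \qquad \dot{u} = p\, F_p, \qquad \dot{p} = -(F_x + F_u\, p), $$
up to a reparametrization of the characteristic curves. It is the $\dot{p}$ equation that gets used in the growth example below.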

In your growth example, the maximizer of the RHS is given by $u'(c^*) = V'$. Substituting then gives an implicit first-order ODE $$ F(k, V, V') = \rho V - u(c^*(V')) - V'\,( f(k) -\delta k) + V' \, c^*(V') = 0. $$ Following the method of characteristics heuristically, one of the characteristic equations would be \begin{align*} \frac{d}{dt} (V') &= -\lambda\, (F_k + F_V\, V') \\ &= -\lambda\, \bigl(- V'\,( f'(k) - \delta) + \rho V' \bigr), \end{align*} for some $\lambda \geq 0$. Since $u'(c^*) = V'$, \begin{align*} \frac{d}{dt} \log u'(c^*) &= \frac{1}{V'}\,\frac{d}{dt}(V') \\ &= \lambda\, ( f'(k) - \delta - \rho), \end{align*} which is an Euler equation.
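
As a quick numerical sanity check (my own illustration with made-up parameter values, not part of the original problem): with $f(k) = Ak^{\theta}$ and log utility, the familiar form of the Euler equation is $\dot{c}/c = f'(k) - \delta - \rho$, so the steady state solves $f'(k^*) = \rho + \delta$, and both $\dot{k}$ and $\dot{c}$ should vanish there.

```python
# Steady-state check of the Euler equation for the log-utility Ramsey model.
# Parameter values are illustrative (assumed, not from the question).
A, theta, delta, rho = 1.0, 0.3, 0.05, 0.03

f = lambda k: A * k**theta                   # production function
fp = lambda k: A * theta * k**(theta - 1)    # f'(k)

# Steady-state capital from the Euler equation's f'(k*) = rho + delta:
k_star = (A * theta / (rho + delta)) ** (1 / (1 - theta))
c_star = f(k_star) - delta * k_star          # consumption making k_dot = 0

k_dot = f(k_star) - delta * k_star - c_star              # resource constraint
c_dot = c_star * (fp(k_star) - delta - rho)              # Euler equation

print(k_dot, c_dot)  # both should be (numerically) zero
```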

(Incidentally, the ODE $F(k, V, V') = 0$ does not seem amenable to guess-and-verify, even when $u(c) = \log c$.)
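
To see concretely why guess-and-verify fails here (again my own illustration, with assumed parameter values): try the natural guess $V(k) = a + b \log k$, which gives $c^* = 1/V'(k) = k/b$. Matching the $\log k$ terms in the PDE forces $b = 1/\rho$, but the residual then still contains a term proportional to $k^{\theta - 1}$, which no constant $a$ can absorb unless $\theta = 1$.

```python
import math

# Illustrative parameter values (assumed, not from the question).
A, theta, delta, rho = 1.0, 0.3, 0.05, 0.03

def residual(k, a, b):
    """HJB residual rho*V - log(c*) - V'(k)*(A*k**theta - delta*k - c*)
    for the candidate V(k) = a + b*log(k), with c* = 1/V'(k)."""
    Vp = b / k                 # V'(k) for the guess
    c = 1.0 / Vp               # maximizer: u'(c*) = V' with u = log
    return rho * (a + b * math.log(k)) - math.log(c) - Vp * (A * k**theta - delta * k - c)

# Matching the log(k) coefficients forces b = 1/rho; the remaining residual
# still varies with k through a k**(theta-1) term, so the guess fails:
b = 1.0 / rho
r1 = residual(1.0, 0.0, b)
r2 = residual(2.0, 0.0, b)
print(r2 - r1)   # nonzero for theta != 1, so no choice of a works
```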

Correct answer by Michael on July 18, 2021
