Asked by nomadicmathematician on November 1, 2021
Let $(X_t)_{t\ge 0}$ be an $\mathscr{F}_t$-adapted, $d$-dimensional stochastic process with (right-)continuous paths, and let $\sigma$ be a stopping time.
The usual (strong) Markov property is the following relation: for all $t \ge 0$, $u \in \mathscr{B}_b(\mathbb{R}^d)$ and $P$-almost all $\omega \in \{\sigma < \infty\}$ we have
$$E[u(X_{t+\sigma})\,|\,\mathscr{F}_{\sigma+}](\omega) = E[u(X_t+x)]\big|_{x=X_\sigma(\omega)}=E^{X_\sigma(\omega)}u(X_t).$$
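For instance (purely as an illustration, taking $X$ to be a $d$-dimensional Brownian motion started at $0$, which matches the middle expression above), for $t>0$ the right-hand side is the Gaussian average
$$E^{X_\sigma(\omega)}u(X_t) = \int_{\mathbb{R}^d} u(y)\,(2\pi t)^{-d/2}\exp\!\left(-\frac{|y-X_\sigma(\omega)|^2}{2t}\right)dy,$$
i.e. after time $\sigma$ the process starts afresh from the point $X_\sigma(\omega)$.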
I would like to derive from this the more general property that for all bounded $\mathscr{B}(C)/\mathscr{B}(\mathbb{R})$-measurable functionals $\Psi\colon C[0,\infty) \to \mathbb{R}$, which may depend on the whole path, and $P$-almost all $\omega \in \{\sigma < \infty\}$, this becomes
$$E[\Psi(X_{\bullet+\sigma})\,|\,\mathscr{F}_{\sigma+}]=E[\Psi(X_\bullet+x)]\big|_{x=X_\sigma}=E^{X_\sigma}[\Psi(X_\bullet)].$$
I think the way to show this is to use the monotone class theorem, since $\mathscr{B}(C)$ is generated by the cylinder sets $\pi_{t_1,\dots,t_n}^{-1}(A_1 \times \cdots \times A_n)$ with $A_i \in \mathscr{B}(\mathbb{R}^d)$. Clearly, the linearity and monotonicity conditions are satisfied, so it only remains to show that for any finite $t_1 < t_2 < \cdots < t_n$ and Borel sets $A_1, \dots, A_n$ the second relation holds for $\Psi = 1_{\pi_{t_1, \dots, t_n}^{-1}(A_1 \times \cdots \times A_n)}$, where $1_B$ denotes the indicator function of $B$. I think this can be shown using the first relation, but I am having difficulty extending it beyond $n=1$. How can we extend this to $n>1$? I would greatly appreciate any help.
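For concreteness (just spelling out the first case I cannot handle), for $n=2$ the relation I am after reads
$$E\big[1_{A_1}(X_{t_1+\sigma})\,1_{A_2}(X_{t_2+\sigma})\,\big|\,\mathscr{F}_{\sigma+}\big] = E^{X_\sigma}\big[1_{A_1}(X_{t_1})\,1_{A_2}(X_{t_2})\big] \qquad P\text{-a.s. on } \{\sigma<\infty\}.$$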
For fixed $t_1 < \ldots < t_n$ and measurable sets $A_i$, the functional $\Psi=1_{\pi_{t_1,\ldots,t_n}^{-1}(A_1 \times \ldots \times A_n)}$ satisfies $$\Psi(X_{\bullet+\sigma}) = \prod_{j=1}^n 1_{A_j}(X_{t_j+\sigma}).$$
By the tower property, we have
\begin{align*} \mathbb{E}(\Psi(X_{\bullet+\sigma}) \mid \mathcal{F}_{\sigma+}) &= \mathbb{E} \bigg[ \mathbb{E} \bigg( \prod_{j=1}^n 1_{A_j}(X_{t_j+\sigma}) \,\bigg|\, \mathcal{F}_{(t_{n-1}+\sigma)+} \bigg) \,\bigg|\, \mathcal{F}_{\sigma+} \bigg]. \end{align*}
Since the random variables $X_{t_j+\sigma}$ are $\mathcal{F}_{(t_{n-1}+\sigma)+}$-measurable for $j=1,\ldots,n-1$, it follows using the Markov property (your first display, applied with the stopping time $t_{n-1}+\sigma$ and the time $t_n-t_{n-1}$) that \begin{align*} \mathbb{E}(\Psi(X_{\bullet+\sigma}) \mid \mathcal{F}_{\sigma+}) &= \mathbb{E} \bigg[ \prod_{j=1}^{n-1} 1_{A_j}(X_{t_j+\sigma}) \, \mathbb{E}\big(1_{A_n}(X_{t_n+\sigma}) \mid \mathcal{F}_{(t_{n-1}+\sigma)+}\big) \,\bigg|\, \mathcal{F}_{\sigma+} \bigg] \\ &=\mathbb{E} \bigg[ \prod_{j=1}^{n-1} 1_{A_j}(X_{t_j+\sigma}) \, \mathbb{E}^{X_{t_{n-1}+\sigma}}\big(1_{A_n}(X_{t_n-t_{n-1}})\big) \,\bigg|\, \mathcal{F}_{\sigma+} \bigg]. \end{align*}
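For example, spelling out the case $n=2$ of this step: applying the first display from the question with the stopping time $t_1+\sigma$ and the time increment $t_2-t_1$ gives $\mathbb{E}(1_{A_2}(X_{t_2+\sigma}) \mid \mathcal{F}_{(t_1+\sigma)+}) = \mathbb{E}^{X_{t_1+\sigma}}(1_{A_2}(X_{t_2-t_1}))$, and therefore
$$\mathbb{E}(\Psi(X_{\bullet+\sigma}) \mid \mathcal{F}_{\sigma+}) = \mathbb{E}\Big[ 1_{A_1}(X_{t_1+\sigma}) \, \mathbb{E}^{X_{t_1+\sigma}}\big(1_{A_2}(X_{t_2-t_1})\big) \,\Big|\, \mathcal{F}_{\sigma+} \Big].$$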
Now we repeat the procedure: next we condition on $\mathcal{F}_{(t_{n-2}+\sigma)+}$ and obtain that
\begin{align*} \mathbb{E}(\Psi(X_{\bullet+\sigma}) \mid \mathcal{F}_{\sigma+}) = \mathbb{E} \bigg[ \prod_{j=1}^{n-2} 1_{A_j}(X_{t_j+\sigma}) \, \mathbb{E}^{X_{t_{n-2}+\sigma}}\bigg(1_{A_{n-1}}(X_{t_{n-1}-t_{n-2}}) \, \mathbb{E}^{X_{t_{n-1}-t_{n-2}}}\big(1_{A_n}(X_{t_n-t_{n-1}})\big) \bigg) \,\bigg|\, \mathcal{F}_{\sigma+} \bigg]. \end{align*}
By iteration, this gives
$$\mathbb{E}(\Psi(X_{\bullet+\sigma}) \mid \mathcal{F}_{\sigma+})=\mathbb{E}^{X_\sigma}\bigg(1_{A_1}(X_{t_1}) \, \mathbb{E}^{X_{t_1}}\Big(1_{A_2}(X_{t_2-t_1}) \, \mathbb{E}^{X_{t_2-t_1}}\big( \cdots \, \mathbb{E}^{X_{t_{n-1}-t_{n-2}}}\big(1_{A_n}(X_{t_n-t_{n-1}})\big) \cdots \big)\Big)\bigg).$$
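For example, to make the last application explicit when $n=2$ (where no further iteration is needed): starting from the $n=2$ display obtained after the first conditioning step, apply the first relation of the question once more, now at the stopping time $\sigma$ with time $t_1$ and with the bounded measurable function $u(x) := 1_{A_1}(x)\,\mathbb{E}^x\big(1_{A_2}(X_{t_2-t_1})\big)$ (this $u$ is bounded, and measurable since $\mathbb{E}^x(1_{A_2}(X_{t_2-t_1})) = \mathbb{E}(1_{A_2}(X_{t_2-t_1}+x))$); this yields
$$\mathbb{E}(\Psi(X_{\bullet+\sigma}) \mid \mathcal{F}_{\sigma+}) = \mathbb{E}^{X_\sigma}\Big(1_{A_1}(X_{t_1})\,\mathbb{E}^{X_{t_1}}\big(1_{A_2}(X_{t_2-t_1})\big)\Big).$$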
The right-hand side is nothing but
$$\mathbb{E}^{X_{\sigma}} \left( \prod_{j=1}^n 1_{A_j}(X_{t_j}) \right);$$
this follows by very similar reasoning to the calculation above (essentially you can put $\sigma=0$ and replace $\mathbb{E}$ by $\mathbb{E}^{X_\sigma}$; if you prefer to do it by hand, use the tower property to condition first on $\mathcal{F}_{t_{n-1}}$ and apply the Markov property, then condition on $\mathcal{F}_{t_{n-2}}$, and so on).
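For example, in the case $n=2$ this amounts to one application of the Markov property at the deterministic time $t_1$ (the $\sigma=0$ case of the first display, under the measure $\mathbb{P}^{X_\sigma}$) followed by the tower property:
$$\mathbb{E}^{X_\sigma}\Big(1_{A_1}(X_{t_1})\,\mathbb{E}^{X_{t_1}}\big(1_{A_2}(X_{t_2-t_1})\big)\Big) = \mathbb{E}^{X_\sigma}\Big(1_{A_1}(X_{t_1})\,\mathbb{E}^{X_\sigma}\big(1_{A_2}(X_{t_2})\mid\mathcal{F}_{t_1}\big)\Big) = \mathbb{E}^{X_\sigma}\big(1_{A_1}(X_{t_1})\,1_{A_2}(X_{t_2})\big).$$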
Answered by saz on November 1, 2021