Quantitative Finance Asked on October 27, 2021
I know there's the book by the late Mark Joshi and there is a lot of content on the internet. Still, I thought it could be beneficial to start a thread here where we could all share the most interesting interview questions in quant finance that we have encountered - i.e. a community wiki question where each answer includes one interview question (ideally with an answer), similar to "Good quant finance jokes".
Even if there might be some duplication with other resources, perhaps the added benefit of this thread would be:
- The thread will reflect the questions that are "currently" in fashion
- It might add value to the quant.stackexchange website as a resource for Quants and aspiring Quants
Happy to receive constructive criticism, if others don’t feel this is a good idea.
Background
Consider the affine Linear Gauss Markov (LGM) model for Interest Rates, characterized by a single-factor state variable $x_t$ with normal dynamics...
\begin{align}
\text{d}x_t&=\sigma(t)\text{d}W_t
\end{align}
... specified in a measure under which the price process $N_t$:
\begin{align}
N_t&:=\frac{1}{P(0,t)}e^{H(t)x_t+a(t)},
\end{align}
is a valid numéraire, where $P$ is the price of a zero-coupon bond, while $H(t)$ and $a(t)$ are two deterministic functions.
Question
Determine the structure of the function $a(t)$ to ensure the LGM model is arbitrage-free.
Answer
We know a model is arbitrage-free if and only if there exists an equivalent martingale measure (EMM), namely a probability measure such that the price of a traded asset is equal to the conditional expectation of its discounted cash flows. The basic asset in any rate model is the zero-coupon bond, which pays $\$1$ at expiry. Hence our LGM model must satisfy:
$$P(0,t)=E\left(\frac{1}{N_t}\right)$$
Per the definition of $N_t$, the equivalent condition is:
$$E\left(e^{-H(t)x_t-a(t)}\right)=1\tag{1}$$
The state variable $x_t$ is normally distributed, with zero mean and total variance up to $t$ equal to:
$$\Sigma(t):=\int_0^t\sigma^2(u)\text{d}u$$
Expectation $(1)$ can be explicitly calculated, for example by invoking the moment generating function (Laplace transform) of a normal variable: since $-H(t)x_t\sim N\left(0,H^2(t)\Sigma(t)\right)$, we have $E\left(e^{-H(t)x_t}\right)=e^{\frac{1}{2}H^2(t)\Sigma(t)}$, and condition $(1)$ then yields:
$$\boxed{a(t) = \frac{1}{2}H^2(t)\Sigma(t)}$$
Interestingly, we note that compared to the Hull-White parameterisation, where the calibrated parameter needs to be updated whenever the curve changes to remain arbitrage-free (e.g. see the specification of the function $\theta(t)$ in this answer), the LGM model is arbitrage-free by design provided we set the function $a(t)$ to be equal to the expression above.
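As a quick sanity check of the boxed formula, condition $(1)$ can be verified by Monte Carlo. The sketch below is a minimal illustration, not part of the original answer: it assumes a constant instantaneous volatility and a hypothetical Hull-White-style $H(t)$, both chosen purely for demonstration.

```python
import numpy as np

# Monte Carlo sanity check of the arbitrage-free condition (1):
# E[ exp(-H(t) x_t - a(t)) ] should equal 1 when a(t) = 0.5 * H(t)^2 * Sigma(t).
# The constant sigma and the Hull-White-style H(t) below are hypothetical choices.
rng = np.random.default_rng(0)

sigma, t = 0.01, 5.0   # flat instantaneous volatility, horizon in years


def H(s):
    # Illustrative H(t); any deterministic function works for the check.
    return (1.0 - np.exp(-0.05 * s)) / 0.05


Sigma_t = sigma**2 * t              # Sigma(t) = int_0^t sigma(u)^2 du for constant sigma
a_t = 0.5 * H(t) ** 2 * Sigma_t     # the boxed formula for a(t)

x_t = rng.normal(0.0, np.sqrt(Sigma_t), size=1_000_000)  # x_t ~ N(0, Sigma(t))
print(np.exp(-H(t) * x_t - a_t).mean())                  # should be close to 1.0
```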
Answered by Daneel Olivaw on October 27, 2021
Here is one that I got a long time ago in a quant interview:
Question: If $x = \{ x_1, x_2, \cdots, x_n \}$ are i.i.d. draws from a random variable $X \sim \mathbb{U}(0,1)$, calculate
\begin{align} \mathbb{E}[ \; \max(x) - \min(x) \; ] \end{align}
Answer: I've got two fun solutions to this problem, by CDF and by Integration:
As expectation is a linear operator, we can re-write the desired quantity as the difference of two expectations \begin{equation} \mathbb{E}[ \; \max(x) \; ] - \mathbb{E}[ \; \min(x) \; ] \end{equation}
Since $X \sim \mathbb{U}(0,1)$ is symmetrical around $0.5$, these must be related by \begin{equation} \mathbb{E}[ \; \max(x) \; ] = 1 - \mathbb{E}[ \; \min(x) \; ] \end{equation} and we can express the desired expectation in terms of a single quantity \begin{equation} 2 \times \mathbb{E}[ \; \max(x) \; ] - 1 \end{equation}
To calculate the expectation of the maximum of $n$ draws from $X$, let us consider $\max(x)$ as its own random variable, and calculate its probability distribution, $P( \max(x) = k )$ for $0 \leq k \leq 1$.
The probability $P( \max(x) \leq k )$ is simply the probability that all draws $x_i$ are less than or equal to $k$, i.e. $P( x_i \leq k \; \forall \; i=1,\dots,n )$ - and since each draw is independent, we can re-express this as a product of independent terms \begin{align} P( \max(x) \leq k ) &= P( x_i \leq k \; \forall \; i=1,\dots,n )\\ &= \prod_{i=1}^n P( x_i \leq k )\\ &= k^n \end{align}
$P( \max(x) \leq k )$ is the CDF of $\max(x)$, and differentiating it gives the PDF \begin{align} p( \max(x) = k ) &= \frac{\partial}{\partial k} P( \max(x) \leq k )\\ &= n \cdot k^{n-1} \end{align}
Having calculated the PDF of $\max(x)$, we can calculate its expectation in the usual way \begin{align} \mathbb{E}[ \; \max(x) \; ] &= \int_{k=0}^{1} p( \max(x) = k ) \cdot k \cdot dk\\ &= \int_{0}^{1} n \cdot k^{n-1} \cdot k \cdot dk\\ &= \left[ \frac{n}{n+1} k^{n+1} \right]^1_0\\ &= \frac{n}{n+1} \end{align}
Putting this all together, \begin{align} \mathbb{E}[ \; \max(x) - \min(x) \; ] &= \mathbb{E}[ \; \max(x) \; ] - \mathbb{E}[ \; \min(x) \; ]\\ &= 2 \times \mathbb{E}[ \; \max(x) \; ] - 1\\ &= \frac{2n}{n+1} - 1\\ &= \frac{n-1}{n+1} \end{align} which is the answer.
An alternative method to calculate $\mathbb{E}[ \; \max(x) \; ]$ is to integrate over each $x_i$. By symmetry, the probability of any of the $n$ variables $x_i$ being the maximum is $\frac{1}{n}$, so we integrate over the region of the $n$-dimensional space in which $x_1$ is the maximum and multiply by $n$ \begin{align} \mathbb{E}[ \; \max(x) \; ] &= \Bigl( \int_0^1 \Bigr)^{n} \max(x) \prod_{i=1}^n dx_i\\ &= n \cdot \int_{x_1=0}^1 x_1 \Bigl( \int_0^{x_1} \Bigr)^{n-1} \prod_{i=2}^n dx_i \, dx_1\\ &= n \cdot \int_{x_1=0}^1 x_1 \Bigl( \left[ x \right]^{x_1}_0 \Bigr)^{n-1} dx_1\\ &= n \cdot \int_{x_1=0}^1 x_1^n \cdot dx_1\\ &= n \cdot \left[ \frac{1}{n+1} x_1^{n+1}\right]_0^1\\ &= \frac{n}{n+1} \end{align}
And so using the logic from the final step of the earlier solution,
\begin{align} \mathbb{E}[ \; \max(x) - \min(x) \; ] &= 2 \times \mathbb{E}[ \; \max(x) \; ] - 1\\ &= \frac{2n}{n+1} - 1\\ &= \frac{n-1}{n+1} \end{align}
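A quick Monte Carlo check of the $\frac{n-1}{n+1}$ result is straightforward; the values of $n$ and the sample size in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check that E[max(x) - min(x)] = (n-1)/(n+1) for n i.i.d. U(0,1) draws.
# The choices of n and the number of trials are arbitrary.
rng = np.random.default_rng(1)
n, trials = 5, 1_000_000

x = rng.uniform(size=(trials, n))                  # each row is one set of n draws
estimate = (x.max(axis=1) - x.min(axis=1)).mean()  # sample mean of the range
print(estimate, (n - 1) / (n + 1))                 # e.g. ~0.6667 vs 0.6667 for n = 5
```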
Answered by StackG on October 27, 2021
Question: A contract pays $$ P(T,T+\tau) - K$$ at $T$, where $K$ is fixed and $P(\cdot,S)$ is the price of an $S$-maturity zero-coupon bond (ZCB).
What is the value of $K$ for which the contract's time-$t$ price is zero?
Answer:
Replication pricing:
At time $t$, we go long one $(T+\tau)$-maturity ZCB and short $P(t,T)^{-1}P(t,T+\tau)$ units of the $T$-maturity ZCB.
The time-$t$ cost of this position is $0$, since:
$$ (-1)\cdot P(t,T+\tau) + P(t,T)^{-1}P(t,T+\tau)\cdot P(t,T) = 0. $$
At time $T$, as the shorted bond matures, we have a flow of $$ - P(t,T)^{-1}P(t,T+\tau). $$
But we are also expecting a $1$ dollar flow at $T+\tau$ from the long bond, whose price at time $T$ is:
$$ P(T,T+\tau). $$
Hence, the time-$t$ price of the time-$T$ payout
$$ P(T,T+\tau) - P(t,T)^{-1}P(t,T+\tau) $$
is $0$. This is of course exactly our contract with
$$ K = P(t,T)^{-1}P(t,T+\tau). $$
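For a concrete feel of the replication result, here is a tiny numerical illustration; the discount factors below are made-up values, not taken from the question.

```python
# Numerical illustration of the replication result K = P(t,T+tau) / P(t,T).
# The discount-factor values below are hypothetical, chosen purely for illustration.
P_t_T = 0.97     # time-t price of the T-maturity ZCB
P_t_Ttau = 0.95  # time-t price of the (T+tau)-maturity ZCB

K = P_t_Ttau / P_t_T  # fair strike: the time-t forward price of the (T+tau)-bond for delivery at T
print(K)              # ~0.9794
```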
Pricing under $T$-forward measure:
$$V_t = P(t,T)\mathbf{E}^{T}_t[P(T,T+\tau) - K]$$
Setting $V_t$ to $0$ implies:
$$K = \mathbf{E}^{T}_t[P(T,T+\tau)]$$
As $P(t,T+\tau)$ is a traded asset, under the $T$-forward measure the process $$ \left(P(t,T)^{-1} P(t,T+\tau)\right)_{t\geq 0}$$ is a martingale, which leads to: $$\mathbf{E}^{T}_t\left[P(T,T)^{-1} P(T,T+\tau)\right] = P(t,T)^{-1} P(t,T+\tau).$$ Since $P(T,T)=1$, we have:
$$K = \mathbf{E}^{T}_t[P(T,T+\tau)] = P(t,T)^{-1}P(t,T+\tau)$$
Pricing under money market account measure:
$$V_t = \beta_t\mathbf{E}_t[\beta_T^{-1} (P(T,T+\tau) - K)]$$
Setting $V_t$ to $0$ implies:
$$K = \mathbf{E}_t[\beta_T^{-1}]^{-1}\mathbf{E}_t[\beta_T^{-1} P(T,T+\tau)]$$
$$ = P(t,T)^{-1}\mathbf{E}_t\left[\beta_T^{-1} \mathbf{E}_T[\beta_T \beta_{T+\tau}^{-1} ] \right] $$
$$ = P(t,T)^{-1}\mathbf{E}_t\left[ \mathbf{E}_T[ \beta_{T+\tau}^{-1} ] \right] $$
$$ = P(t,T)^{-1}\mathbf{E}_t\left[ \beta_{T+\tau}^{-1} \right] $$
$$ = P(t,T)^{-1}P(t,T+\tau), $$
using the tower property of conditional expectations in the penultimate equality.
(Note: not necessarily a recent question, but one to expect - I flunked the replication pricing part, which the interviewer was obviously enamored with; this is covered both by Brigo/Mercurio's book, in the context of FRA pricing, and by Andersen/Piterbarg's book, as the forward bond price.)
Answered by ir7 on October 27, 2021
To start the thread, let me share the most recent interview question I have been asked:
Question: Denote standard Brownian motion as $W(t)$. Compute the following probability:
$$ \mathbb{P}\left(W(1)>0 \cap W(2)>0\right) $$
Answer: Using the independence of increments property, we can write $W(2) = \left(W(2)-W(1)\right) + W(1)$. Denote the increment $W(2)-W(1)$ by $Y$ and $W(1)$ by $X$. Then:
$$ \mathbb{P}\left(W(1)>0 \cap (W(2)-W(1))+W(1)>0\right)=\mathbb{P}(X>0 \cap Y+X>0)=\mathbb{P}(X>0 \cap Y>-X) $$
By definition of Brownian motion, the increments are independent and jointly normally distributed. So $X$ and $Y$ are jointly normal (in fact, independent standard normals) with density $f_{X,Y}(u,v)$. We can write:
$$\mathbb{P}(X>0 \cap Y>-X)=\int_{u=0}^{\infty}\int_{v=-u}^{\infty}f_{X,Y}(u,v)\,dv\, du$$
The final step is to draw the domain of the double integral: $X>0$ restricts us to the right-hand half of the Cartesian $(X,Y)$ plane. The condition $Y>-X$ then removes the wedge below the line $Y=-X$ in that half-plane: i.e. we cut off the "bottom $1/4$" of the right-hand half. We are left with $3/4$ of $1/2$ of the $(X,Y)$ domain, which is $3/8$ of the plane. Since the joint PDF of two independent standard normals is rotationally symmetric about the origin, the probability of a wedge is proportional to its angle, so the double integral is indeed equal to $3/8$.
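A short Monte Carlo check of the $3/8$ result, using the same decomposition of $W(2)$ into $W(1)$ plus an independent increment; the sample size below is arbitrary.

```python
import numpy as np

# Monte Carlo check that P(W(1) > 0 and W(2) > 0) = 3/8 = 0.375.
rng = np.random.default_rng(2)
trials = 1_000_000

w1 = rng.normal(size=trials)        # W(1) ~ N(0,1)
w2 = w1 + rng.normal(size=trials)   # W(2) = W(1) + independent N(0,1) increment
print(np.mean((w1 > 0) & (w2 > 0))) # should be close to 0.375
```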
Answered by Jan Stuller on October 27, 2021