Quantitative Finance Asked by therealcode on November 27, 2020
Assuming I have a stochastic volatility model for an asset, if I wanted to use it for pricing I would proceed in the following way: simulate a number of sample paths, price options from the simulated payoffs, and then build an implied volatility surface from those prices.
Is this the right approach? I guess for actual pricing one should compute something like 10,000 sample paths and then take the average before building the surface, right? Finally, isn't implied volatility defined in a risk-neutral world? If so, is there a problem if my asset's dynamics are written under the real-world measure?
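Concretely, the pricing loop I have in mind looks something like this minimal sketch (an illustrative Heston model simulated under the risk-neutral measure; every parameter value here is a placeholder, not something calibrated):

```python
# Minimal sketch: Monte Carlo pricing under an illustrative Heston model,
# then backing out a Black-Scholes implied vol from the MC price.
# All parameters below are made-up placeholders.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)

S0, r, T, K = 100.0, 0.02, 1.0, 105.0          # spot, rate, maturity, strike
kappa, theta, xi, rho, v0 = 1.5, 0.04, 0.5, -0.7, 0.04  # Heston parameters
n_paths, n_steps = 10_000, 252
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)  # full-truncation Euler for the variance
    S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

# Price = discounted average payoff over the sample paths.
price = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()

def bs_call(sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

# Implied vol: invert Black-Scholes against the MC price.
implied_vol = brentq(lambda s: bs_call(s) - price, 1e-4, 3.0)
print(price, implied_vol)
```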
If I understand this correctly, you want to be able to infer a future volatility surface, given the current simulation parameters you have.
What you're essentially trying to do is include the modelling of forward vol/skew in your MC.
Getting the forward vol surface vaguely correct is quite important for pricing some types of derivative, i.e. anything that has exposure to forward volatility. This includes the obvious case of forward start options (in their vanilla form, or bundled together in cliquets), and products with path dependency/contingent claims (e.g. autocallables, daily/continuously observed barriers), and potentially others, though I feel they can all be described as one of those two (and really, contingent claims are just a specific case of forward vol/skew).
This means, fortunately for you, that it has been looked at before. One of the advantages of stochastic volatility models over local volatility models is that they capture forward volatility behaviour much better. And of course you can use a stochastic vol model with a local vol component to give yourself even more flexibility.
You can go further too and include mean reversion terms on the spot price, time-dependent or even stochastic correlation, whatever you want; it just adds more richness to the model. To calibrate these more esoteric aspects, though, you need instruments which depend on them included in the calibration (or the desk trading the underlying will mark the parameters as they see them in the market, and the calibration will then run with those held static and effectively work around them).
So, if we gloss over the calibration of whatever model you've selected, the question becomes: "how do I extract the implied volatility surface at some simulated point in the future?". The answer to this has effectively already been given to you too, in American Monte Carlo. I'll first describe the naïve (and expensive) way to do it, and then the approximations you can use to speed it up.
The naïve method is this: you diffuse your paths as normal, and when you reach a point where you need the forward volatility surface, you spin up a new Monte Carlo whose starting points are the current states of your outer MC. You then diffuse n inner paths up to the maximum maturity at which you need the new vol surface, use those inner paths to price a surface of options, and imply volatilities from the resulting prices (using the forward implied by your MC paths, i.e. the mean at each maturity). These implied volatilities are your implied vol surface at that point in the future, conditional on the other model parameters also matching the internal state of the outer MC. This approach is completely self-consistent: the integral of all of the implied PDFs (probability weighted correctly) will match the input implied distribution/volatilities/calibration instrument prices. That holds by construction, and works for any type of diffusion model you throw into the MC.
The downside is that it's extremely computationally expensive.
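To make the naïve approach concrete, here is a rough sketch of the inner-simulation step for a single outer path; `diffuse` and `implied_vol_from_price` are placeholders for whatever diffusion and Black-Scholes-inversion routines your model already provides:

```python
# Rough sketch of nested (inner) Monte Carlo for one outer path at the
# forward date. `diffuse` and `implied_vol_from_price` are hypothetical
# hooks into your own model, not a real library API.
import numpy as np

def forward_vol_surface(outer_state, strikes, maturities, n_inner,
                        diffuse, implied_vol_from_price, r):
    """outer_state: model state (spot, variance, ...) of one outer path at time t."""
    # Inner MC: n_inner paths started from this outer path's current state,
    # diffused out to the longest maturity needed for the surface.
    inner_spots = diffuse(outer_state, maturities, n_inner)  # (n_maturities, n_inner)

    surface = np.empty((len(maturities), len(strikes)))
    for i, T in enumerate(maturities):
        ST = inner_spots[i]
        fwd = ST.mean()  # forward implied by the inner paths at this maturity
        for j, K in enumerate(strikes):
            price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
            surface[i, j] = implied_vol_from_price(price, fwd, K, T, r)
    return surface
```

You would repeat this for every outer path and every forward date of interest, which is exactly where the cost blows up: m outer paths with n inner paths each means m × n path simulations per forward date.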
So what are the approximations you can use to make this more efficient? The two main ones are taken from the Longstaff-Schwartz American Monte Carlo method. Read their paper; it's quite easy to follow, and I would say it's an important read for anyone looking at related problems.
Essentially, instead of rerunning a Monte Carlo at each step, you sample the other paths that happen to have gone through the same point with the same observation variables (where by "same" we mean they fall into the same buckets, with bucket sizes of your choosing); a sketch of this bucketing is given below. These observation variables can be whatever you want: the current spot and time, the current value of the stochastic volatility path, and so on; you should pick observation variables important to the derivative you wish to price (e.g. you can add in whether or not a previous barrier has been breached, it's up to you). The more observation variables you pick, the fewer paths you'll be left with when you subsample the full set to those matching the current path; they will be more relevant, but noisier.
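As a minimal sketch of the bucketing (the observation variables and bucket edges here are illustrative choices, not prescriptive):

```python
# Minimal sketch: group outer paths at the forward date into (spot, variance)
# buckets and treat each bucket as a conditional mini-MC.
import numpy as np

def bucketed_conditional_means(spot_t, var_t, payoff, spot_edges, var_edges):
    """Mean of `payoff` over the paths falling in each (spot, variance) bucket."""
    si = np.digitize(spot_t, spot_edges)  # spot bucket index per path
    vi = np.digitize(var_t, var_edges)    # variance bucket index per path
    out = {}
    for key in set(zip(si, vi)):
        mask = (si == key[0]) & (vi == key[1])
        out[key] = (payoff[mask].mean(), mask.sum())  # estimate + sample count
    return out
```

Note how the per-bucket sample count shrinks as you refine the buckets or add observation variables, which is exactly the relevance-versus-noise trade-off described above.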
This will give you a fairly noisy mini MC at each of the points where you're looking into the future, so what you do is fit smooth functions that replicate the resulting forward variables as functions of the observation variables; you can then use these fitted functions to estimate the future given the current location, as in the sketch below.
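A minimal sketch of that fitting step, using a simple polynomial basis and least squares in the Longstaff-Schwartz spirit (the basis is an illustrative choice; in practice you'd pick one suited to your product):

```python
# Minimal sketch: regress a forward quantity on polynomials of the
# observation variables, then evaluate the fit at any current location.
import numpy as np

def fit_conditional_expectation(spot_t, var_t, forward_value, degree=2):
    """Least-squares fit of forward_value on polynomials of (spot_t, var_t)."""
    cols = [spot_t**p * var_t**q
            for p in range(degree + 1) for q in range(degree + 1 - p)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, forward_value, rcond=None)

    def predict(s, v):
        # s, v: arrays of observation-variable values to evaluate the fit at
        Xn = np.column_stack([s**p * v**q
                              for p in range(degree + 1)
                              for q in range(degree + 1 - p)])
        return Xn @ beta

    return predict
```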
Sorry it's a bit wordy, but hopefully that, together with reading the LS paper, will clear things up for you.
Correct answer by will on November 27, 2020