Another nice model beyond Black-Scholes-Merton (BSM) is Heston's stochastic volatility model (Heston-SV; see Wikipedia or the original paper). The key difference from the BSM model is that the volatility is not assumed to be constant but is itself modelled as a random process: the instantaneous variance follows a Cox–Ingersoll–Ross (CIR) process.
A common representation as a system of SDEs is as follows:
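In the usual notation (κ: mean-reversion speed, θ: long-run variance, σ: volatility of variance, ρ: correlation of the two Brownian motions), the system reads:

```latex
\begin{aligned}
dS_t &= \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S},\\
dv_t &= \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{v},\\
dW_t^{S}\,dW_t^{v} &= \rho\,dt.
\end{aligned}
```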
Heston already derived a price formula in semi-closed form using characteristic functions. Fortunately, Rouah and Vainberg have implemented this in VBA and presented it in their book. Luckily there is also GNU R code in the NMOF package by Enrico Schumann, which I took as the basis of my code; please see https://cran.r-project.org/web/packages/NMOF/index.html. I modified and extended this code to try other integration methods and to catch errors that occur during fitting to real prices (more on that in a later post). This GNU R code was then the basis for the Julia and Python code.
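For reference, the semi-closed form expresses the call price via two probabilities, each requiring one numerical integration over the characteristic function (standard notation, not copied from any of the implementations):

```latex
\begin{aligned}
C &= S\,e^{-qT}P_1 - K\,e^{-rT}P_2,\\
P_j &= \tfrac{1}{2} + \frac{1}{\pi}\int_0^{\infty}
\operatorname{Re}\!\left[\frac{e^{-i\phi\ln K}\,f_j(\phi)}{i\phi}\right] d\phi,
\qquad j = 1, 2,
\end{aligned}
```

with $f_j$ the Heston characteristic function. This integral over $[0,\infty)$ is exactly what the quadrature routines below have to evaluate.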
To test the accuracy, I used the results from the Finance Press blog, which contains reference prices calculated with Mathematica:
The parameters are S=100, T=1.0 and T=0.01 respectively, r=0.01 and q=0.02.
Unfortunately, the market parameters of the risk-neutral world used in the Heston-SV model are not specified correctly, but the exact values can be found in Luigi Ballabio's GitHub repository, more specifically in the QuantLib test suite for the Heston-SV model.
There is also a comment on the Alan Lewis reference prices in the code:
And there the following market parameters of the risk-neutral world can be found:
Note that for the value of T=0.01, the value of v0 is set to 0.01.
I have now calculated this in four different ways:
- with GNU R, using the standard integrate function
- with GNU R, using the quadgr function from the pracma package
- with Python, using the quad function from the SciPy package
- with Julia, using the quadgk function from the QuadGK package
I used the default settings and did not tune or optimize anything, but maybe that is something for another post.
You can find the code and test data from the Finance Press blog here. Please note that the directories in the code must be adjusted.
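To illustrate what all four implementations do, here is a minimal Python sketch of the P1/P2 formula evaluated with SciPy's quad. This is not the NMOF-derived code from the post, just the textbook formulation (in the numerically stabler "little trap" variant of Albrecher et al.); in the usage example, S, T, r and q are the blog's values, while v0, kappa, theta, sigma and rho are illustrative placeholders, not the Lewis reference set:

```python
import numpy as np
from scipy.integrate import quad

def heston_call(S, K, T, r, q, v0, kappa, theta, sigma, rho):
    """European call under Heston (1993) via the P1/P2 representation,
    using the 'little trap' form of the characteristic function."""
    x = np.log(S)

    def P(j):
        # j = 1, 2 as in Heston's paper (risk-neutral, lambda = 0)
        u = 0.5 if j == 1 else -0.5
        b = kappa - rho * sigma if j == 1 else kappa

        def integrand(phi):
            beta = b - rho * sigma * 1j * phi
            d = np.sqrt(beta**2 - sigma**2 * (2.0 * u * 1j * phi - phi**2))
            c = (beta - d) / (beta + d)          # "little trap" branch
            e = np.exp(-d * T)                   # decays, avoids overflow
            C = ((r - q) * 1j * phi * T
                 + kappa * theta / sigma**2
                 * ((beta - d) * T - 2.0 * np.log((1.0 - c * e) / (1.0 - c))))
            D = (beta - d) / sigma**2 * (1.0 - e) / (1.0 - c * e)
            f = np.exp(C + D * v0 + 1j * phi * x)
            return (np.exp(-1j * phi * np.log(K)) * f / (1j * phi)).real

        val, _ = quad(integrand, 0.0, np.inf, limit=200)
        return 0.5 + val / np.pi

    return S * np.exp(-q * T) * P(1) - K * np.exp(-r * T) * P(2)

# Illustrative parameters (placeholders, not the Lewis reference values):
price = heston_call(100.0, 100.0, 1.0, 0.01, 0.02,
                    0.04, 4.0, 0.25, 1.0, -0.5)
```

Put prices then follow from put-call parity, P = C − S e^{−qT} + K e^{−rT}.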
First, the results from the GNU R code with the integrate function:
Next, the results from the GNU R code with the quadgr function from the pracma package:
Next, the results from the Python code with the quad function from the SciPy package:
And last, the results from the Julia code with the quadgk function from the QuadGK package:
It can be seen that the values for the longer maturity are acceptable, while the relative errors for the shorter maturity become quite large, especially for the out-of-the-money calls.
What I find interesting is the negative time value of the call with maturity 0.01 and strikes 90 and 95 (better seen on the Finance Press blog or in the screenshot below): the calculated value is (minimally) smaller than the intrinsic value. I am not aware of any publication that addresses this, and I suspect it is a numerical problem.
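One simple way to flag such cases programmatically is to compare the model price against the intrinsic value. The helper and the numbers below are hypothetical illustrations, not values from the tables above:

```python
def call_time_value(price, S, K):
    """Time value of a call: model price minus intrinsic value max(S - K, 0).
    An arbitrage-free price should make this non-negative."""
    return price - max(S - K, 0.0)

# Hypothetical example: a deep in-the-money call priced a hair below
# its intrinsic value of 10, giving a tiny negative time value
tv = call_time_value(9.9997, 100.0, 90.0)
```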
Finally, an overview of the runtimes. For this, I ran the 10 calculations 100 times each (i.e. 1,000 calculations per measurement) and measured the runtime in seconds. Then I repeated this 100 times for each script.
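In Python, the measurement loop can be sketched like this (price_fn and the case list are placeholders; the real scripts use the actual 10 test cases from the blog):

```python
import time

def benchmark(price_fn, n_outer=100, n_inner=100):
    """Run the 10 test-case pricings n_inner times per measurement
    (i.e. 1000 pricings) and collect n_outer runtime measurements."""
    # Placeholder test cases: (S, K, T, r, q) tuples, two maturities
    cases = [(100.0, K, T, 0.01, 0.02)
             for T in (1.0, 0.01)
             for K in (80.0, 90.0, 100.0, 110.0, 120.0)]
    runtimes = []
    for _ in range(n_outer):
        start = time.perf_counter()
        for _ in range(n_inner):
            for args in cases:
                price_fn(*args)
        runtimes.append(time.perf_counter() - start)
    return sum(runtimes) / len(runtimes)  # mean runtime in seconds
```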
As an average I got the following values:
| Implementation | Mean runtime (s) |
|---|---|
| GNU R with integrate | 1.929 |
| GNU R with quadgr | 106.592 |
| Julia with quadgk | 3.048 |
| Python with quad | 35.680 |
And a boxplot looks like this: