Uncertainty quantification and Heston model
Journal of Mathematics in Industry volume 8, Article number: 5 (2018)
Abstract
In this paper, we study the impact of the parameters involved in the Heston model by means of Uncertainty Quantification. The Stochastic Collocation Method, already used for example in computational fluid dynamics, has been applied throughout this work in order to compute the propagation of the uncertainty from the parameters of the model to the output. The well-known Heston model is considered, and the parameters involved in the Feller condition are taken as uncertain due to their important influence on the output. Numerical results where the Feller condition is satisfied or not are shown, as well as a numerical example with real market data.
Introduction
In general, stochastic differential equations governing the prices of certain financial products do not have an analytical solution. The required numerical approximations do not contain enough information about the reliability of the output of a certain mathematical model. Furthermore, randomness can enter through different inputs, such as the model parameters, which consequently introduces uncertainty into the solution of the model. These issues make mathematical models in computational finance a good framework to apply Uncertainty Quantification. Stochastic volatility models are the focus of this work. Our objective is to study the propagation of the uncertainty from the input towards its solution.
In many financial markets, such as equity and interest rate markets, the so-called volatility smile appears as an important feature of pricing models, reflecting the non-constant behaviour of market volatilities for options at different strike prices with a single expiration date. It is well-known that the classical Black–Scholes framework is not able to reproduce the volatility smile observed in the market, due to its constant volatility assumption. Stochastic volatility models are an important breakthrough in financial research to avoid this drawback. Among these more sophisticated models, the Heston model [5] has become one of the most used by practitioners. It is highly efficient due to its easy computational implementation by means of numerical methods involving Fourier transforms [2, 3], and it will be the focus of this work.
The Heston model describes the dynamics of the stock price and the variance based on a set of parameters which can be uncertain. The variability of the parameter values can affect the prices more strongly than perhaps expected. Although there are several other sources of uncertainty, we are concerned here with uncertainty in the model parameters. The main objective is to compute the statistical moments of the output of interest when the response surface is approximated locally by a polynomial function.
Since the Heston model contains open parameters, a calibration procedure is needed to fit these parameters to market data. That is, prices obtained with the corresponding mathematical model (in this case, Heston) should match the ones observed in the market. More precisely, if we have a set of dates and market prices, a suitable calibration method will provide a set of parameters at each date considered. Once we have these data, their distributions can be established. This will make pricing more reliable because we consider a distribution for the parameters. Thus, a general vision of their uncertainty as well as their impact on the pricing can be obtained. Apart from the calibration procedure, hedging plays an important role in finance, so an insight into the impact of uncertain parameters on the Greeks of the Heston model should also be of interest to practitioners.
This paper is organized as follows: Firstly, the Heston model is briefly introduced. Next, the mathematical framework for Uncertainty Quantification is introduced as well as the method considered for computing the propagation of uncertainty, the Stochastic Collocation Method. Finally, some numerical results for tests where the Feller condition is satisfied or not and a real market data example will be shown.
Methods
Throughout this section, a brief description of the Heston model will be introduced [5]. Then, some aspects and concepts of Uncertainty Quantification (UQ) will be described following the notation used by [6, 9]. In order to study the propagation of the uncertainty from the model parameters to the output, the non-intrusive Stochastic Collocation Method [12] is used, as it has the attractive advantage of treating an existing solver as a black box. Moreover, it can be easily implemented and leads to the solution of uncoupled deterministic problems, thus being an efficient tool when the number of input random variables is small.
Heston model
Considering the Heston model, the dynamics of the stock price as well as the variance process, under the risk-neutral measure, are given by the following stochastic differential equations:
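(The display equations did not survive extraction; the standard Heston dynamics consistent with the symbols defined below, with \(r\) denoting the risk-free rate, read:)

\[
\begin{aligned}
dS(t) &= r\,S(t)\,dt + \sqrt{\upsilon(t)}\,S(t)\,dW_{x}(t),\\
d\upsilon(t) &= \kappa\bigl(\bar{\upsilon}-\upsilon(t)\bigr)\,dt + \gamma\sqrt{\upsilon(t)}\,dW_{\upsilon}(t),\\
dW_{x}(t)\,dW_{\upsilon}(t) &= \rho_{x,\upsilon}\,dt.
\end{aligned}
\]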
Here, \(S(t)\) is the underlying stock price, \(\upsilon (t)\) is the variance process, \(W_{x}\) and \(W_{\upsilon}\) are two standard Brownian motion processes with correlation \(\rho_{x,\upsilon}\), where \(\kappa\geq 0\), \(\bar{\upsilon}\geq 0\) and \(\gamma\geq 0\). The parameter κ controls the speed of mean reversion of the volatility, γ is the volatility of the volatility, and ῡ is the long-term mean of the variance process. An important feature of the Heston model is the stochastic volatility, which makes it possible to reproduce the implied volatility smile present in many financial markets. Every parameter has a specific effect on the implied volatility curve generated by the dynamics, so it is interesting to study UQ for the implied volatility as well as for the prices of certain financial products.
If \(2\kappa\bar{\upsilon}>\gamma^{2}\), the so-called Feller condition is satisfied and the positivity of \(\upsilon(t)\) is guaranteed; otherwise it may reach zero. The Feller condition is difficult to satisfy in practice, so in this work the cases with and without the Feller condition will be considered. This allows us to take different financial scenarios into account. The set of random parameters to which UQ is applied is \(\xi=\{\kappa, \bar{\upsilon}, \gamma, \upsilon_{0}, \rho_{x,\upsilon}\}\). Firstly, we will show some tests where the parameters κ and γ are considered and then, the whole set of parameters will be taken into account for a numerical test where real market data are considered.
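As a minimal illustration, the Feller condition is a one-line check on each parameter sample (the numerical values below are illustrative, not taken from Table 1):

```python
# Check of the Feller condition 2*kappa*vbar > gamma^2 for a parameter sample.
def feller_satisfied(kappa, vbar, gamma):
    """True if the variance process stays strictly positive."""
    return 2.0 * kappa * vbar > gamma ** 2

# Fast mean reversion and low vol-of-vol satisfies the condition:
print(feller_satisfied(5.0, 0.1, 0.3))    # 2*5*0.1 = 1.0 > 0.09  -> True
# Slow mean reversion with high vol-of-vol violates it:
print(feller_satisfied(0.5, 0.05, 0.4))   # 2*0.5*0.05 = 0.05 < 0.16 -> False
```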
Stochastic Collocation Method
First of all, we assume a complete probability space \((\Omega,\mathcal{F}, \mathbb{Q})\) and finite time horizon \([0,T]\) where Ω is the set of all possible realizations of the stochastic process between 0 and T. The information structure is represented by an augmented filtration \(\bar{\mathcal{F}_{t}}\), \(t\in[0,T]\), with \(\mathcal{F}_{T}\) the sigma algebra of distinguishable events at time T, and \(\mathbb{Q}\) is the risk-neutral probability measure on elements of \(\mathcal{F}\).
Next, a set \(\omega=\{\omega_{1},\ldots,\omega_{n_{\xi}}\}\in\Omega\subset\mathbb{R}^{n_{\xi}}\) of events in the complete probability space and a set \(\boldsymbol{\xi}=\{\xi^{(1)},\ldots,\xi^{({n_{\xi}})}\}\) of parameters involved in the mathematical model are considered, with parameter space \(\Xi\subset\mathbb{R}^{n_{\xi}}\). For every sample \(\boldsymbol{\xi}(\omega)\) the output should be computed by a suitable numerical solver. In general, a non-intrusive uncertainty quantification method consists of a sampling method and a standard interpolation method such as Lagrange interpolation. The sampling method selects the sampling points and returns the sampled values, which are computed by solving the considered problem for each realization of the random parameter vector. Notably, the computational cost of the sample interpolation in UQ is almost negligible compared to the cost of expensive existing solvers.
Let us suppose that our problem can be formulated as follows
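(The display equation is missing from this copy; in a generic form consistent with the description that follows, the problem reads:)

\[
\mathcal{L}\bigl(\boldsymbol{x},t,\boldsymbol{\xi};u(\boldsymbol{x},t,\boldsymbol{\xi})\bigr)=0,
\]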
where \(\mathcal{L}\) is a linear or nonlinear differential operator depending on random parameters, and x and t are the spatial and temporal coordinates, respectively. Moreover, appropriate initial and boundary conditions are considered. Next, \(u(\boldsymbol{x},t,\boldsymbol{\xi})\) is considered as the output of interest, such as the price of a financial product with maturity T, which depends not only on the spatial coordinates \(\boldsymbol{x}\in X\subset \mathbb{R}^{n_{X}}\) with \(n_{X}\in\{1,2,3\}\) and time \(t\in \mathbb{R}^{+}\), but also on a vector of \(n_{\xi}\) random parameters \(\boldsymbol{\xi}=\{\xi^{(1)},\ldots,\xi^{(n_{\xi})}\}\in\Xi\). The random parameters ξ have a known joint probability density function \(f_{\xi}(\boldsymbol{\xi})\) with finite variance. Each uncertain input parameter corresponds to one additional random parameter \(\xi^{(i)}\) and to a dimension in the associated probability space Ω.
The stochastic collocation method is based on one-dimensional Lagrange polynomial interpolation of the output \(u(\boldsymbol{x},t,\xi_{j})\) at N Gauss quadrature points \(\xi_{j}\) in Ξ
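(The interpolation formula itself did not survive extraction; it has the standard form:)

\[
u(\boldsymbol{x},t,\xi)\approx\sum_{j=1}^{N}u_{j}(\boldsymbol{x},t)\,L_{j}(\xi),
\qquad
L_{j}(\xi)=\prod_{\substack{k=1\\ k\neq j}}^{N}\frac{\xi-\xi_{k}}{\xi_{j}-\xi_{k}},
\]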
where \(u_{j}(\boldsymbol{x},t)=u(\boldsymbol{x},t,\xi_{j})\) are the solutions of the governing equations for \(\xi_{j}\), and \(L_{j}(\xi)\) are the Lagrange polynomials of degree \(N-1\) for which \(L_{j}(\xi_{k})=\delta_{jk}\) holds. Then the main objectives of UQ are the probability distribution and the resulting approximation of the statistical moments of the output distribution for \(u(\boldsymbol{x},t,\boldsymbol{\xi})\), which are given by
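(the moment expressions are missing from this copy; the standard quadrature approximations read:)

\[
\mu_{u}(\boldsymbol{x},t)\approx\sum_{j=1}^{N}u_{j}(\boldsymbol{x},t)\,z_{j},
\qquad
\sigma_{u}^{2}(\boldsymbol{x},t)\approx\sum_{j=1}^{N}\bigl(u_{j}(\boldsymbol{x},t)-\mu_{u}(\boldsymbol{x},t)\bigr)^{2}z_{j},
\]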
with the quadrature weights \(z_{j}\). The stochastic collocation method can be extended to multiple inputs using a tensor product of quadrature points
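The tensor-product construction can be sketched in a few lines. The sketch below uses Gauss–Legendre points for independent uniformly distributed inputs (matching the setting of the later tests) and a hypothetical toy model in place of the actual pricer; the helper names are ours, not from the paper.

```python
import numpy as np

def uniform_gauss(a, b, n):
    """Gauss-Legendre nodes mapped to [a, b], weights normalised to sum to
    one, i.e. quadrature against a Uniform(a, b) input density."""
    t, w = np.polynomial.legendre.leggauss(n)
    return a + 0.5 * (t + 1.0) * (b - a), 0.5 * w

def collocation_moments(model, ranges, n=9):
    """Mean and standard deviation of model(xi_1, ..., xi_d) computed on a
    full tensor grid with n collocation points per input dimension."""
    nodes, weights = zip(*(uniform_gauss(a, b, n) for a, b in ranges))
    grids = np.meshgrid(*nodes, indexing="ij")
    wgrid = np.ones_like(grids[0])
    for axis, w in enumerate(weights):
        shape = [1] * len(weights)
        shape[axis] = -1
        wgrid = wgrid * w.reshape(shape)      # tensor product of 1-D weights
    vals = model(*grids)                      # one solver call per grid point
    mean = float(np.sum(wgrid * vals))
    var = float(np.sum(wgrid * (vals - mean) ** 2))
    return mean, var ** 0.5

# Toy stand-in for the pricer on two illustrative uniform inputs:
mean, std = collocation_moments(lambda k, g: k * g, [(5.0, 10.0), (0.1, 0.3)])
```

Since Gauss quadrature with \(n=9\) points per dimension is exact for polynomials up to degree 17, the toy product model above is integrated exactly.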
However, if the number of uncertain parameters increases, approximations based on tensor product grids suffer from the curse of dimensionality since the number of collocation points in a tensor grid grows exponentially. In these cases, we can consider sparse tensor product spaces as first proposed by Smolyak [10]. More precisely, the Smolyak sparse grid stochastic collocation method for the approximation of statistical quantities is used to reduce the exponential increase of the number of tensor product quadrature points with the dimensionality \(n_{\xi}\) as follows, see [12],
with w the sparse grid level, \(\mathbf{w}=w_{1}+\cdots+w_{n_{\xi}}\), and
for \(w>0\). The sparse tensor product grids are here built upon Clenshaw–Curtis abscissas, which are particularly efficient since the resulting sparse grids are nested. This hierarchical sampling property allows for reusing the samples when increasing the order if a more accurate response of the model is required. Moreover, several works in the literature compare the convergence of Clenshaw–Curtis points with that obtained with Gaussian points [11].
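The nesting property is easy to see in code. The sketch below (our own illustrative helper, using one common Clenshaw–Curtis growth rule of \(2^{\text{level}}+1\) points) shows that every point of a coarse level reappears at the next level, so earlier solver evaluations can be reused:

```python
import numpy as np

def clenshaw_curtis_points(level):
    """Clenshaw-Curtis abscissas cos(pi*k/(n-1)) on [-1, 1] with
    n = 2**level + 1 points; a single point at level 0."""
    if level == 0:
        return np.array([0.0])
    n = 2 ** level + 1
    return np.cos(np.pi * np.arange(n) / (n - 1))

# Nesting: every level-1 point reappears (up to rounding) at level 2.
coarse, fine = clenshaw_curtis_points(1), clenshaw_curtis_points(2)
nested = all(np.min(np.abs(fine - x)) < 1e-12 for x in coarse)
print(nested)   # True
```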
Numerical results and discussion
In the literature, several numerical methods have been used to solve the mathematical models stated for pricing financial products, such as finite differences [1], finite elements [8] or Monte Carlo [4]. We consider a class of highly efficient numerical pricing techniques, the class of Fourier-based numerical integration methods, the COS method being one of them. Its main idea is to replace the probability density function appearing in the risk-neutral formula by a Fourier cosine series expansion, which has a closed-form relation by means of the characteristic function. Some examples of the performance of the method as well as more details are shown in [2] and [3].
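The COS machinery can be sketched compactly. The version below prices a European put under plain geometric Brownian motion, so the characteristic function is Gaussian rather than Heston's; this keeps the sketch short, and swapping in the Heston characteristic function recovers the setting of the paper. It checks itself against the Black–Scholes formula; all numerical values are illustrative.

```python
import math
import numpy as np

def bs_put(S0, K, r, sigma, tau):
    """Black-Scholes European put, used only to sanity-check the COS price."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return K * math.exp(-r * tau) * N(-d2) - S0 * N(-d1)

def cos_put_gbm(S0, K, r, sigma, tau, N=128, L=10.0):
    """COS-method price of a European put under GBM dynamics."""
    mu = math.log(S0 / K) + (r - 0.5 * sigma**2) * tau  # mean of y = ln(S_T/K)
    var = sigma**2 * tau
    a, b = mu - L * math.sqrt(var), mu + L * math.sqrt(var)  # truncation range
    u = np.arange(N) * np.pi / (b - a)
    phi = np.exp(1j * u * mu - 0.5 * var * u**2)        # char. function of y
    # Cosine coefficients of the put payoff K*(1 - e^y) on [a, 0]:
    c, d = a, 0.0
    chi = (np.cos(u * (d - a)) * math.exp(d) - np.cos(u * (c - a)) * math.exp(c)
           + u * (np.sin(u * (d - a)) * math.exp(d)
                  - np.sin(u * (c - a)) * math.exp(c))) / (1.0 + u**2)
    psi = np.empty(N)
    psi[0] = d - c
    psi[1:] = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
    Uk = 2.0 / (b - a) * K * (psi - chi)
    terms = np.real(phi * np.exp(-1j * u * a)) * Uk
    terms[0] *= 0.5                                     # first term halved
    return math.exp(-r * tau) * terms.sum()
```

For instance, with \(S_0=K=100\), \(r=0.05\), \(\sigma=0.2\) and \(\tau=2\), the COS price agrees with the Black–Scholes value to well below \(10^{-6}\), reflecting the exponential convergence of the cosine expansion for smooth densities.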
If we focus on pricing a financial instrument, such as a put option, then for every sample of the parameters involved in the mathematical model governing the price of the product, we obtain a value from the chosen numerical solver. The propagation of the uncertainty from the model parameters to the output is therefore dominated by the computational cost of the numerical solver, so it is important to choose an efficient numerical method with a low computational cost. In this work, put options have been considered (Table 1), but exotic financial products could also be considered, such as variance swaps or cliquet options.
Tests
Among the whole set of possible random parameters, κ, ῡ and γ have an important role as they are involved in the Feller condition. As a first approach, and in order to simplify the results to visualize some conclusions, only the parameters κ and γ have been considered. Thus, we obtain a two-dimensional problem for UQ. Tests where the Feller condition is satisfied or not have been carried out. The numerical results shown in this section will give us an insight into the impact of the variability of κ and γ on the output of the stated problem. All the computations have been done for time \(\tau=2\) and the deterministic values for the parameters can be seen in Table 1.
The probability distribution of the uncertain parameters has to be fixed, as the Stochastic Collocation method is based on a transformation to an artificial space. The collocation points are then computed in this new stochastic space, whose properties are already known, and the probability distribution of the solution is built with Lagrange interpolation. In previous works such as [7], the exponential convergence of the Stochastic Collocation method is shown for uniformly distributed uncertain parameters, so in this work the uniform distribution has been chosen to generate the random parameters.
The results have been computed using nine interpolation points in each direction of the domain, that is, eighty-one collocation points (Fig. 1(a)). However, having in mind a higher-dimensional problem, a Smolyak approximation has been considered in order to reduce the computational time, so the number of points is reduced to twenty-nine (Fig. 1(b)). Although in this case the improvement is not very noticeable, an important time reduction is expected for higher dimensions, because an increase in the number of uncertain parameters leads to an increase in the number of collocation points where the approximate solution is computed by the COS method.
Test 1: Feller condition satisfied
The first numerical experiment concerns the case where the Feller condition is satisfied. Values for the implied volatility and put option prices have been computed using the COS method for every sample of κ and γ, where κ varies in \([5,10]\) and γ in \([0.1, 0.3]\). In Fig. 2(a)–(b) the highly satisfactory performance of the interpolation method in this case can be seen. Nine points in each direction have been taken, and put prices and implied volatility values at these points have been computed using the COS method. Then, in order to compute the values in the whole domain, Lagrange interpolation has been used. In Fig. 2(c)–(d) the standard deviations for the implied volatility and put prices are depicted.
Test 2: Feller condition not satisfied
In the numerical results to follow, we will focus on the case where the Feller condition is not satisfied, for which κ varies in \([0.5,1.0]\) and γ in \([0.2, 0.4]\). Again, values for the implied volatility and put option prices have been computed for every sample by the COS method. Analogously to the previous test, the good performance of the method can be observed in Fig. 3(a)–(b), as well as the standard deviations for the implied volatility and put prices in Fig. 3(c)–(d).
Test 3: mixed Feller condition
We are also interested in cases where a mixed Feller condition holds. In the numerical results to follow, κ varies in \([0.4,11.0]\) and γ in \([0.05, 0.5]\). Thus, the Feller condition can be either satisfied or not at the sample points considered in these domains. In this case, where the Feller condition is satisfied at only some points of the domain considered for κ and γ, we again see a highly satisfactory performance of the interpolation method in Fig. 4(a)–(b), as in the previous subsections. It is remarkable that the behaviour is completely different from that shown in the previous cases, which should be related to the stretched range of parameter values. In Fig. 4(c)–(d) the standard deviations for the implied volatility and put prices are shown.
Having examined the three cases, we can draw some conclusions. If the Feller condition is satisfied, the implied volatility and the put price increase with increasing κ and decreasing γ. The response surface is close to linear, despite the large parameter domain. When the Feller condition is not satisfied, the response is much more nonlinear. For the mixed Feller condition case, the response is even more nonlinear, partly caused by the larger parameter ranges used. For the cases with the Feller condition satisfied and not satisfied, the standard deviation converges quickly, whereas the convergence is slower for the mixed Feller condition case because of the nonlinearity of the response. The standard deviation of the nonlinear solution for the case where the Feller condition is not satisfied is smaller than for the case where it is satisfied, since the input uncertainty is smaller in the former case. However, the standard deviation for the mixed case is highest.
Numerical test with real market data
A numerical test for the whole set of parameters is shown. These values are taken from real market data, and some preprocessing is needed in order to apply UQ to this particular case. More precisely, EuroStoxx50 option prices from 19/09/2006 to 22/04/2008 have been analyzed. Note that this set of prices includes times of the market crisis. In Fig. 5 spot values of the EuroStoxx50 for the period over which the model has been calibrated are shown. The calibration, based on optimization over the market data, results in the set of parameter values and the corresponding distributions. We will deal with a five-dimensional problem, as five parameters are considered, so the dimensionality of the problem is increased with respect to the numerical test shown in the previous section. The projections and the marginal probability density functions (PDFs) of the five-dimensional input are given in Fig. 6. It shows that the input parameters are correlated and that the densities can be multimodal. These effects are taken into account by evaluating the Lagrange polynomial interpolation at the set of points of Fig. 6.
The computational cost is about 8 minutes, as the Smolyak approximation is considered for the post-processing where the stochastic collocation method is involved. Throughout this test, the deterministic put option parameters are the same as those of the previous analytical tests, collected in Table 1.
The cumulative probability distribution function (CDF) of the put option prices is computed by a Monte Carlo simulation of the data points and by the Stochastic Collocation method, as we can see in Figs. 7 to 10. In Figs. 7 and 8, the approximations for levels \(\omega=1\) and \(\omega=2\) are shown. Both the number of points and the accuracy of the tensor product grid approximation increase rapidly with ω. In Fig. 9 the approximations for the full tensor grid at level \(\omega = 2\) and the sparse grid at level \(\omega = 5\) are shown, in order to compare the results with a comparable number of points (2433 points and 3125 points, respectively). In Fig. 10, the sparse grid approximations up to level \(\omega=5\) are shown to compare the results to the tensor grid with \(\omega=2\).
The convergence behavior is compared in Tables 2 and 3 with the Monte Carlo solution as a function of the level ω and the number of points. The mean and standard deviation are integral quantities, and therefore the mean and standard deviation errors can converge non-monotonically due to cancellation of errors. The \(L_{1}\) error is added as a stronger convergence measure, which shows a monotonic convergence in this case.
The conclusion of this section is that the accuracy of the sparse grid interpolation technique is comparable to that of the full tensor grids as a function of the number of points. The additional advantage of sparse grids is that the number of points grows more slowly with the level than for full tensor grids, such that more convergence information and smaller point sets are available. Levels \(\omega=1\) and \(\omega=2\) of the full tensor grid should therefore be compared to the sparse grids at levels \(\omega=3\) and \(\omega=5\), respectively, which have approximately the same number of points. In this case, the number of points for most of the levels is larger than the 84 data points in the Monte Carlo simulation. However, the number of computations in the Stochastic Collocation method is independent of the number of data points. Stochastic Collocation could therefore be faster than Monte Carlo simulation at an acceptable accuracy in cases in which a larger set of market data points is available.
The good performance of the method is seen as both cumulative distribution functions are close to each other.
Conclusions
The impact of the uncertainty provided by the randomness of the parameters in the Heston model is studied by means of Uncertainty Quantification. In particular, the Stochastic Collocation Method is used to study the propagation of the uncertainty from the input of the model to the solution.
Applying techniques already used in fluid dynamics to computational finance is challenging; the good performance of these methods is demonstrated through the tests considered in this paper. Firstly, the analytical tests give an insight into the impact of the variability of κ and γ on the output of the problem. If the Feller condition is satisfied, the response surface is close to linear, but if the Feller condition is not satisfied, the response is much more nonlinear. For the mixed Feller condition, due to the larger parameter range, the response is even more nonlinear and the standard deviation is larger than for the tests where the Feller condition is satisfied.
Then, a real market data test is considered where the number of model parameters increases, turning it into a five-dimensional problem for Uncertainty Quantification. Consequently, since the number of collocation points grows with the dimension (the curse of dimensionality), the Smolyak sparse grid Stochastic Collocation Method is used to reduce the computational cost.
Having in mind the importance of the calibration and hedging procedures, the study of the impact of uncertain parameters on the Greeks and the distribution of the model parameters would make pricing more reliable. Furthermore, it would be interesting to extend these studies to exotic products such as variance swaps and cliquet options.
Abbreviations
Implied vol.: Implied volatility
Std. Dev.: Standard Deviation
UQ: Uncertainty Quantification
References
 1.
Achdou Y, Pironneau O. Computational methods for option pricing. Philadelphia: SIAM; 2005.
 2.
Fang F, Oosterlee CW. A novel pricing method for European options based on Fourier-cosine series expansions. SIAM J Sci Comput. 2008;31:826–48.
 3.
Fang F, Oosterlee CW. Pricing early-exercise and discrete-barrier options by Fourier-cosine series expansions. Numer Math. 2009;114:27–62.
 4.
Glasserman P. Monte Carlo methods in financial engineering. Berlin: Springer; 2003.
 5.
Heston S. A closedform solution for options with stochastic volatility with applications to bond and currency options. Rev Financ Stud. 1993;6:327–43.
 6.
Loeven GJA, Witteveen JAS, Bijl H. Probabilistic collocation: an efficient non-intrusive approach for arbitrarily distributed parametric uncertainties. In: 45th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada; 2007.
 7.
Loeven GJA, Witteveen JAS, Bijl H. Efficient uncertainty quantification in computational fluid–structure interaction. In: Proceedings of the 8th AIAA Non-Deterministic Approaches Conference, Newport; 2006. AIAA paper 2006-1634.
 8.
Marcozzi MD. On the valuation of Asian options by variational methods. SIAM J Sci Comput. 2003;24:1124–40.
 9.
Nobile F, Tempone R, Webster CG. A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J Numer Anal. 2008;46:2309–45.
 10.
Smolyak SA. Quadrature and interpolation formulas for tensor products of certain classes of functions. Dokl Akad Nauk SSSR. 1963;4:240–3.
 11.
Trefethen LN. Is Gauss quadrature better than Clenshaw–Curtis? SIAM Rev. 2008;50:67–87.
 12.
Xiu D, Hesthaven JS. High order collocation methods for differential equations with random inputs. SIAM J Sci Comput. 2005;27:1118–39.
Acknowledgements
Not applicable.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Funding
This work has been supported by the ERCIM “Alain Bensoussan Fellowship Programme” and partially funded by MCINN (Project MTM2010-21135-C02-01).
Author information
Contributions
All authors were involved in planning the work, drafting and writing the manuscript. Discussion and comments on the results and the manuscript as well as a critical revision have been made by all the authors at all stages. All authors read and approved the final manuscript.
Corresponding author
Correspondence to María Suárez-Taboada.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Suárez-Taboada, M., Witteveen, J.A.S., Grzelak, L.A. et al. Uncertainty quantification and Heston model. J Math Industry 8, 5 (2018). https://doi.org/10.1186/s13362-018-0047-2
Keywords
 Stochastic Collocation
 Uncertainty quantification
 Implied volatility
 Heston model