Uncertainty quantification in tsunami modeling using multi-level Monte Carlo finite volume method

Abstract

Shallow-water type models are commonly used in tsunami simulations. These models contain uncertain parameters such as the ratio of the layer densities, the friction coefficient, the fault deformation, etc. These parameters are modeled statistically, and quantifying the resulting solution uncertainty (UQ) is a crucial task in geophysics. We propose a paradigm for UQ that combines path-conservative spatial discretizations, efficiently implemented on clusters of GPUs, with the recently developed Multi-Level Monte Carlo (MLMC) statistical sampling method, providing a fast, accurate and computationally efficient framework to compute statistical quantities of interest. Numerical experiments, including realistic simulations on real bathymetries, are presented to illustrate the robustness of the proposed UQ algorithm.

1 Introduction

A tsunami is a series of powerful water waves generated by different mechanisms, such as earthquakes, volcanic eruptions, underwater landslides and local landslides along the coast. As emphasized by the tragic events of March 2011 in Japan and December 2004 in Indonesia, tsunamis may be extremely catastrophic: they can destroy buildings and roads, seriously damage infrastructure and, most tragically, cause the loss of human lives. A deep knowledge of tsunamis is required in order to predict maximum run-ups and run-downs, and to issue early warning messages to the regions that may be affected.

Since the most common sources for tsunamis are earthquakes, earthquake-generated tsunamis have been extensively investigated. Landslide-generated tsunamis have been much less studied and the existing knowledge about them is more limited. They are characterized by relatively short periods, compared to the earthquake-generated ones, and they do not travel as long distances as the earthquake-generated tsunamis do. Therefore, one of their characteristics is that their whole life cycle takes place near the source. Nevertheless, they can reach high amplitudes and can also become extremely harmful (see [1, 2]).

The numerical simulation of a tsunami has three stages: generation, propagation and inundation. In the generation stage of earthquake-generated tsunamis, Okada’s finite fault deformation model (see [3]) is widely used to predict the initial sea surface displacement. This method assumes that an earthquake can be regarded as the rupture of a single fault plane, described by a series of physical parameters comprising the dip angle, strike angle, rake angle, fault width, fault length and fault depth. The vertical displacement given by Okada’s model is either applied instantaneously to initialize the sea-surface elevation or used to drive the model over a specified rupture time (see, e.g., [4]). Recently, tsunami-wave generation mechanisms independent of Okada’s assumption, in which a 3D finite element model is employed, have also been developed and evaluated (see [5]).

In the propagation and inundation stage, two main types of governing equations are commonly employed: the Boussinesq type equations or the nonlinear shallow-water equations. In this work, earthquake-generated tsunamis are driven by the instantaneous sea surface disturbance derived from Okada’s finite fault model and the propagation and inundation stages are simulated with the use of 2D nonlinear shallow-water equations.

Landslide-generated tsunamis are modeled here with a nonlinear two-layer Savage-Hutter type model introduced in [6], which is able to reproduce the generation of the tsunami by the impact of the landslide, the propagation of the tsunami waves and the inundation produced by those waves.

It is well known that solutions of both systems take the form of waves that propagate at a finite speed. Furthermore, the solutions might form discontinuities such as shocks, hydraulic jumps, etc., even when the initial data are smooth. Thus, it is customary to interpret the solutions of such nonlinear PDEs in the sense of distributions. There are innate difficulties in defining such weak solutions for systems that are not in conservation form, due to the presence of geometrical source terms or non-conservative products, as is the case here. For such systems, special theories such as those in [7] have been proposed. Moreover, weak solutions are not necessarily unique, and further admissibility criteria need to be imposed in order to single out a physically relevant solution.

Various types of numerical methods have been designed to approximate these convection-dominated nonlinear hyperbolic PDEs efficiently. Methods such as finite volume, finite difference and discontinuous Galerkin finite element schemes are widely used. In particular, the approximation of non-conservative systems is quite involved as the right jump conditions across discontinuities need to be approximated [8]. An attractive framework to deal with such problems is the one of path conservative numerical schemes developed in [9].

In recent years, efficient implementations of such schemes have been carried out using Graphics Processing Units (GPUs). GPUs have proved to be a powerful accelerator for intensive scientific simulations. The high memory bandwidth and massive parallelism of these platforms make it possible to achieve dramatic speedups over a standard CPU in many applications [10, 11], and several programming toolkits and interfaces, such as NVIDIA CUDA [12] and the Open Computing Language (OpenCL) [13], have shown high effectiveness in mapping data-parallel applications to GPUs [10, 14]. Currently, most of the proposals to simulate shallow flows on a single GPU are based on the CUDA programming model. There are several proposals of finite volume CUDA solvers to simulate one-layer shallow water flows over structured regular meshes [15, 16] and for the two-layer shallow water system [17, 18]. Moreover, realistic tsunami simulations involve huge meshes, many time steps and possibly real-time accurate predictions. These characteristics suggest using a cluster of GPU-enhanced computers in order to scale the runtime reduction and to overcome the memory limitations of a single GPU-enhanced node by suitably distributing the data among the nodes, enabling the simulation of significantly larger realistic models. Most of the proposals to exploit GPU clusters in computational fluid dynamics (CFD) simulations use CUDA to program each GPU and MPI [19] to implement interprocess communication, and they use non-blocking MPI communication functions to overlap the remote transfers with GPU computation [20–24].

Numerical methods to approximate these nonlinear hyperbolic PDEs (or for that matter any PDE) require inputs such as the initial data, boundary conditions and coefficients in the fluxes, sources and friction terms of the PDE. These inputs need to be measured. Measurements are marked by uncertainty. For instance, let us consider tsunami modeling. In such problems, the initial conditions are typically estimated from a very uncertain measurement process: it is very difficult to estimate the exact fault deformation or the initial position and velocity of a landslide. This uncertainty in determining the inputs to the PDE is propagated into the solution. The calculation of solution uncertainty, given input uncertainty, falls under the rubric of uncertainty quantification (UQ). UQ for geophysical flows is vitally important for risk evaluation and hazard mitigation.

Although various approaches to modeling input uncertainty exist, the most popular framework models input uncertainty statistically in terms of random parameters and fields. The resulting PDE is a stochastic (random) PDE. The solution has to be sought in a stochastic sense, and statistical quantities such as the mean, the variance, higher moments, confidence intervals and probability distribution functions (pdfs) of the solution are the objects of interest.

The modeling and computation of solution statistics is highly non-trivial. Challenges include the possibly large number of random variables (fields) needed to parametrize the uncertain inputs and the sheer computational challenge of evaluating statistical moments, which might require a very large number of PDE solves. The challenges are particularly accentuated for hyperbolic and convection-dominated PDEs, as discontinuities in physical space such as shocks can propagate into the stochastic space, resulting in a loss of regularity of the underlying solution with respect to the random parameters. A very large number of degrees of freedom in the stochastic space might be needed to resolve such functions with possible singularities. See the recent review [25] for a detailed account of the challenges involved in UQ for hyperbolic problems.

Nevertheless, several numerical methods have been developed for UQ in hyperbolic PDEs; see for instance [26–30] and the review [25] for details. These include stochastic Galerkin methods based on generalized polynomial chaos (gPC), stochastic collocation methods and stochastic finite volume methods (SFVM). Stochastic Galerkin methods are based on expanding the sought solution random field in terms of basis functions that are orthogonal with respect to the underlying probability distribution, the so-called gPC basis. Projecting the resulting expansion onto this orthonormal basis yields a (possibly very large) system of PDEs for the underlying coefficients. Moments of the solution random field can be readily obtained from the coefficients. Such an approach has been used, for instance, in [28] and references therein. However, this approach suffers from several deficiencies. Perhaps the biggest drawback in the context of nonlinear hyperbolic PDEs lies in the fact that the resulting system of PDEs for the gPC coefficients is not necessarily hyperbolic and may not even be well-posed. A novel solution to this problem was provided in [27], where the authors proposed a gPC expansion of the solution random field in terms of the entropy variables. If the underlying nonlinear conservation law possesses a strictly convex entropy, then one can show that the resulting nonlinear system for the gPC coefficients is hyperbolic. However, a large number of terms of this expansion might still be necessary for hyperbolic PDEs with low spatial and stochastic regularity, leading to a very computationally costly solution of a large system of PDEs for the gPC coefficients. Furthermore, this method is intrusive, i.e., completely new code has to be written from scratch in order to compute the chaos coefficients, and existing codes cannot be reused. Hence, it appears that the stochastic Galerkin method is only suitable for hyperbolic problems with a very low number of uncertain (stochastic) parameters (dimensions).

An alternative set of methods is of the stochastic collocation type (see [29, 30] and references therein), where the solution is sampled at a deterministic set of points in the stochastic space. These methods are non-intrusive. However, they are of limited utility, as sufficient regularity with respect to the stochastic variables is needed in order to employ tools such as sparse grids to keep the computational cost feasible. Unfortunately, solutions of uncertain nonlinear hyperbolic PDEs are not sufficiently regular. A possible alternative is the recently proposed stochastic finite volume method (see [31] and references therein). However, it is also limited to a low to moderate number of stochastic parameters.

Another class of methods are the so-called Monte Carlo (MC) methods, in which the probability space is sampled, the underlying deterministic PDE is solved for each sample, and the samples are combined to determine statistical information about the random field. Although non-intrusive, easy to code and to parallelize, MC methods converge at rate \(M^{-1/2}\) as the number M of MC samples increases; by the central limit theorem, this asymptotic convergence rate cannot be improved.

Therefore, MC methods require a large number of ‘samples’ (with each ‘sample’ involving the numerical solution of the underlying PDE for a given draw of parameter values) in order to ensure low statistical errors. This slow convergence entails high computational costs for MC type methods and makes them infeasible for computing uncertainty in complex shallow water flows. We refer to [32] for a detailed error and computational complexity analysis of the MC method in the context of scalar conservation laws. This slow convergence has inspired the development of Multi-Level Monte Carlo (MLMC) methods [33–35], in which one considers a nested sequence of space-time grids and draws a different number of samples on each grid. In particular, very few samples are drawn on the finest grids (with the highest computational cost), while a very large number of samples are drawn on the coarsest grids (with very low computational cost). This subtle balancing of the stochastic error with the spatio-temporal error, together with a novel MLMC estimator for the statistical moments, are the key ingredients in the successful adaptation of these methods to different UQ contexts. In particular, [32] and [36] extend and analyze the MLMC algorithm for scalar conservation laws and for systems of conservation laws, respectively. The asymptotic analysis of the MLMC method presented in [32] showed that it allows the computation of approximate statistical moments with far lower computational cost than the underlying MC approximation. MLMC methods have been successfully used in compressible fluid dynamics, magnetohydrodynamics and geophysical flows [37], and have been shown to be efficient for UQ in realistic geophysical flows in [38]. Currently, MLMC methods appear to be among the most suitable methods for UQ in the context of nonlinear hyperbolic PDEs.

Our main aim in this work is to perform an efficient UQ in tsunami modeling by the combination of shallow-water type models and MLMC method and the intensive use of GPUs. The rest of the paper is organized as follows: we present the shallow-water type models commonly used in tsunami simulations in Section 2 and the high order path-conservative schemes to approximate them in Section 3. The Monte Carlo and Multi-level Monte Carlo methods are described in Sections 4 and 5, respectively, and, finally, numerical results are presented in Section 6.

2 Shallow-water type models for tsunami modeling

Let us consider first the well-known 2D nonlinear one-layer shallow-water system:

$$\begin{aligned} \frac{\partial U}{\partial t} + \frac {\partial F_{1}}{\partial x} (U) + \frac{\partial F_{2}}{\partial y} (U) &= S_{1} (U) \frac{\partial H}{\partial x} + S_{2} (U) \frac{\partial H}{\partial y} + S_{F} (U) \end{aligned}$$
(1)

being

$$\begin{aligned}& U = \left( \begin{matrix} h \\ q_{x} \\ q_{y} \end{matrix}\right) , \qquad F_{1} (U) = \left( \begin{matrix} q_{x} \\ \frac{q_{x}^{2}}{h} + \frac{1}{2} g h^{2} \\ \frac{q_{x} q_{y}}{h} \end{matrix}\right) ,\qquad F_{2} (U) = \left( \begin{matrix} q_{y} \\ \frac{q_{x} q_{y}}{h} \\ \frac{q_{y}^{2}}{h} + \frac{1}{2} g h^{2} \end{matrix} \right), \\& S_{1} (U) = \left( \begin{matrix} 0 \\ g h \\ 0 \end{matrix}\right) ,\qquad S_{2} (U) = \left( \begin{matrix} 0 \\ 0 \\ g h \end{matrix}\right) , \\& S_{F} (U) = \big( \begin{matrix} 0& S_{x}(U)& S_{y}(U) \end{matrix}\big) ^{T}. \end{aligned}$$

In the previous system, \(h(\mathbf {x},t)\) denotes the thickness of the water layer at point \(\mathbf {x}\in D \subset\mathbb{R}^{2}\) at time t, where D is the horizontal projection of the 3D domain where the tsunami takes place. \(H(\mathbf {x})\) is the depth of the bottom at point \(\mathbf {x}\), measured from a fixed reference level. Let us also define the function \(\eta(\mathbf {x},t)=h(\mathbf {x},t)-H(\mathbf {x})\), which corresponds to the free surface of the fluid. Let us denote by \(\mathbf {q}(\mathbf {x},t) = ( q_{x}(\mathbf {x},t), q_{y}(\mathbf {x},t) )\) the mass-flow of the water layer at point x at time t. The mass-flow is related to the height-averaged velocity \(\mathbf {u}(\mathbf {x},t)\) by means of the expression \(\mathbf {q}(\mathbf {x},t) = h(\mathbf {x},t) \mathbf {u}(\mathbf {x},t)\).

The term \(S_{F}(U) \) parametrizes the friction effects and is given in terms of the Manning law:

$$\textstyle\begin{cases} S_{x}(U) = -g h \frac{n^{2}}{h^{4/3}} u_{x} \Vert \mathbf {u}\Vert , \\ S_{y}(U) = -g h \frac{n^{2}}{h^{4/3}} u_{y} \Vert \mathbf {u}\Vert , \end{cases} $$

where \(n> 0\) is the Manning coefficient.
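
To fix ideas, the following minimal Python sketch evaluates the Manning friction components \(S_{x}\), \(S_{y}\) for a given layer thickness and velocity; the function name and the dry-cell cutoff h_min are illustrative implementation choices, not part of the model.

```python
import numpy as np

def manning_friction(h, ux, uy, n, g=9.81, h_min=1e-6):
    """Manning friction term S_F = (0, S_x, S_y) for the one-layer system (sketch).

    S_x = -g h n^2 / h^(4/3) * u_x * ||u||, and analogously for S_y.
    Cells with h below h_min are treated as dry (zero friction); this cutoff
    is an implementation choice, not part of the continuous model.
    """
    h = np.asarray(h, dtype=float)
    speed = np.sqrt(ux**2 + uy**2)
    wet = h > h_min
    # note: -g h n^2 / h^(4/3) = -g n^2 / h^(1/3)
    factor = np.where(wet, -g * n**2 * speed / np.maximum(h, h_min)**(1.0 / 3.0), 0.0)
    return factor * ux, factor * uy
```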

An interesting stationary solution of the previous system corresponds to the water at rest state, given by

$$ \textstyle\begin{cases} {\mathbf {u}}=\mathbf {0},\\ h-H=\mbox{constant}. \end{cases} $$
(2)

As pointed out in the Introduction, in the generation stage of earthquake-generated tsunamis, an earthquake subfault model is used to predict the initial sea surface displacement. Typically, those models are given in the form of a set of rectangular patches on the fault plane. Each patch has a set of parameters defining the slip of the rock on one side of the planar patch relative to the slip on the other side. The minimum set of parameters required is:

  • the length and width of the fault plane (typically in m or km),

  • the latitude and longitude of some point on the fault plane, typically either the centroid or the center of the top (shallowest edge),

  • the depth of the specified point below the sea floor,

  • the strike angle, that is, the orientation of the top edge, measured in degrees clockwise from North; it takes values between 0 and 360. The fault plane dips downward to the right when moving along the top edge in the strike direction,

  • the dip angle, that is, the angle at which the plane dips downward from the top edge; it is a positive angle between 0 and 90 degrees,

  • the rake angle, that is, the angle in the fault plane in which the slip occurs, measured in degrees counterclockwise from the strike direction; it takes values between −180 and 180. Note that for a strike-slip earthquake the rake angle is near 0 or 180, whereas for a subduction earthquake it is usually closer to 90 degrees,

  • the slip distance, that is, the distance (typically in cm or m) by which the hanging block moves relative to the foot block, in the direction specified by the rake angle. The ‘hanging block’ is the one above the dipping fault plane (or to the right when moving in the strike direction). The slip is always a positive quantity.

The slip on the fault plane(s) must be translated into seafloor deformation. This is often done using Okada’s model [3], which is derived from a Green’s function solution to the elastic half-space problem. Uniform displacement of the solid over a finite rectangular patch, specified using the parameters described above, leads to a steady state solution in which the seafloor is deformed. This deformation is transmitted instantaneously to the free surface, generating an initial condition for the shallow-water system. Note that Okada’s model is a rough approximation since the actual seafloor is rarely flat, and the actual earth is not a homogeneous isotropic elastic material as assumed in this model. However, it is often considered a reasonable approximation of the free surface displacement in tsunami simulations, particularly since the fault slip parameters are generally not well known even for historical earthquakes, so a more accurate modeling of the resulting seafloor deformation may not be justified.
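
The minimal parameter set listed above can be collected in a simple container before being passed to an Okada-type deformation routine. The sketch below is purely illustrative: the class name, field names and validation checks are hypothetical and not taken from any particular tsunami code.

```python
from dataclasses import dataclass

@dataclass
class SubFault:
    """Minimal rectangular-patch description used by Okada-type models (illustrative)."""
    length_m: float       # along-strike extent of the fault plane
    width_m: float        # down-dip extent of the fault plane
    latitude: float       # reference point on the plane (e.g. centroid or top-center)
    longitude: float
    depth_m: float        # depth of the reference point below the sea floor
    strike_deg: float     # 0..360, clockwise from North
    dip_deg: float        # 0..90, downward from the top edge
    rake_deg: float       # -180..180, counterclockwise from the strike direction
    slip_m: float         # positive slip of the hanging block relative to the foot block

    def validate(self):
        assert 0.0 <= self.strike_deg <= 360.0
        assert 0.0 <= self.dip_deg <= 90.0
        assert -180.0 <= self.rake_deg <= 180.0
        assert self.slip_m >= 0.0
```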

2.1 A two-layer Savage-Hutter type model for simulating landslide-generated tsunamis

In [6], a model for the simulation of tsunamis generated by submarine landslides was presented for 1D geometries. Here, we consider its natural extension to 2D domains, so that problems with real bathymetries can be simulated. Following [6], we consider a stratified medium composed of an inviscid homogeneous fluid with constant density \(\rho_{1}\) (water) and a fluidized granular material with density \(\rho_{s}\) and porosity \(\psi_{0}\). We suppose that the fluid and the granular material are immiscible and that the mean density of the granular material is given by \(\rho_{2} = (1 - \psi_{0}) \rho_{s} + \psi_{0} \rho_{1}\). The following 2D system is derived under the assumption of shallow flows and can be used to simulate the interaction of a granular landslide with the ambient water (see [6] for details about its derivation in 1D):

$$\begin{aligned} \frac{\partial U}{\partial t} + \frac {\partial F_{1}}{\partial x} (U) + \frac{\partial F_{2}}{\partial y} (U) &= B_{1} (U) \frac{\partial U}{\partial x} + B_{2} (U) \frac{\partial U}{\partial y} \\ & \hphantom{=}{}+ S_{1} (U) \frac{\partial H}{\partial x} + S_{2} (U) \frac{\partial H}{\partial y} + S_{F} (U) \end{aligned}$$
(3)

being

$$\begin{aligned}& U = \left( \begin{matrix} h_{1} \\ q_{1,x} \\ q_{1,y} \\ h_{2} \\ q_{2,x} \\ q_{2,y} \end{matrix}\right) ,\qquad F_{1} (U) = \left( \begin{matrix} q_{1,x} \\ \frac{q_{1,x}^{2}}{h_{1}} + \frac{1}{2} g h_{1}^{2} \\ \frac{q_{1,x} q_{1,y}}{h_{1}} \\ q_{2,x} \\ \frac{q_{2,x}^{2}}{h_{2}} + \frac{1}{2} g h_{2}^{2} \\ \frac{q_{2,x} q_{2,y}}{h_{2}} \end{matrix}\right) ,\qquad F_{2} (U) = \left( \begin{matrix} q_{1,y} \\ \frac{q_{1,x} q_{1,y}}{h_{1}} \\ \frac{q_{1,y}^{2}}{h_{1}} + \frac{1}{2} g h_{1}^{2} \\ q_{2,y} \\ \frac{q_{2,x} q_{2,y}}{h_{2}} \\ \frac{q_{2,y}^{2}}{h_{2}} + \frac{1}{2} g h_{2}^{2} \end{matrix}\right) , \\& B_{1} (U) = \left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -g h_{1} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ -r g h_{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{matrix}\right) ,\qquad S_{1} (U) = \left( \begin{matrix} 0 \\ g h_{1} \\ 0 \\ 0 \\ g h_{2} \\ 0 \end{matrix}\right) , \\& B_{2} (U) = \left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -g h_{1} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ -r g h_{2} & 0 & 0 & 0 & 0 & 0 \end{matrix}\right) ,\qquad S_{2} (U) = \left( \begin{matrix} 0 \\ 0 \\ g h_{1} \\ 0 \\ 0 \\ g h_{2} \end{matrix}\right) , \\& S_{F} (U) = \big( \begin{matrix} 0& S_{f_{1}}(U)& S_{f_{2}}(U)& 0& S_{f_{3}}(U) + \tau_{x}& S_{f_{4}}(U) + \tau_{y} \end{matrix}\big) ^{T}. \end{aligned}$$

In the previous system, \(h_{l}(\mathbf {x},t)\), \(l=1,2\), denotes the thickness of the water layer (\(l=1\)) and of the granular material (\(l=2\)), respectively, at point \(\mathbf {x}\in D \subset\mathbb{R}^{2}\) at time t, where D is the horizontal projection of the 3D domain where the landslide and the tsunami take place. \(H(\mathbf {x})\) is the depth of the bottom at point x, measured from a fixed reference level. Let us also define the function \(\eta_{1}(\mathbf {x},t)=h_{1}(\mathbf {x},t)+h_{2}(\mathbf {x},t)-H(\mathbf {x})\), which corresponds to the free surface of the fluid, and \(\eta_{2}(\mathbf {x},t)=h_{2}(\mathbf {x},t)-H(\mathbf {x})\), the interface between the granular layer and the fluid. Let us denote by \(\mathbf {q}_{l}(\mathbf {x},t) = ( q_{l,x}(\mathbf {x},t), q_{l,y}(\mathbf {x},t) )\) the mass-flow of the l-th layer at point x at time t. The mass-flow is related to the height-averaged velocity \(\mathbf {u}_{l}(\mathbf {x},t)\) by means of the expression \(\mathbf {q}_{l}(\mathbf {x},t) = h_{l}(\mathbf {x},t) \mathbf {u}_{l}(\mathbf {x},t)\), \(l = 1, 2\). Finally, \(r = \rho_{1} / \rho_{2}\) is the ratio of the constant densities of the layers (\(\rho_{1} < \rho_{2}\)).

The terms \(S_{f_{k}} (U) \), \(k= 1, \ldots, 4\), model the different friction effects, while \(\boldsymbol {\tau }= (\tau_{x}, \tau_{y}) \) is the Coulomb friction law. \(S_{f_{k}} (U)\), \(k = 1, \ldots, 4\), are given by:

$$\begin{aligned}& S_{f_{1}}(U) = S_{c_{x}}(U) + S_{1_{x}}(U),\qquad S_{f_{3}}(U) = -r S_{c_{x}}(U) + S_{2_{x}}(U), \\& S_{f_{2}}(U) = S_{c_{y}}(U) + S_{1_{y}}(U),\qquad S_{f_{4}}(U) = -r S_{c_{y}}(U) + S_{2_{y}}(U). \end{aligned}$$

\(S_{c}(U) = ( S_{c_{x}}(U), S_{c_{y}}(U) )\) parameterizes the friction between the two layers, and is defined as:

$$\textstyle\begin{cases} S_{c_{x}}(U) = m_{f} \frac{h_{1} h_{2}}{h_{2} + r h_{1}} (u_{2,x} - u_{1,x}) \Vert \mathbf {u}_{2} - \mathbf {u}_{1} \Vert ,\\ S_{c_{y}}(U) = m_{f} \frac{h_{1} h_{2}}{h_{2} + r h_{1}} (u_{2,y} - u_{1,y}) \Vert \mathbf {u}_{2} - \mathbf {u}_{1} \Vert , \end{cases} $$

where \(m_{f}\) is a positive constant.

\(S_{l}(U) = ( S_{l_{x}}(U), S_{l_{y}}(U) )\), \(l=1,2\) parameterizes the friction between the fluid and the non-erodible bottom (\(l=1\)) and between the granular material and the non-erodible bottom (\(l=2\)), and both are given by a Manning law

$$\textstyle\begin{cases} S_{l_{x}}(U) = -g h_{l} \frac{n_{l}^{2}}{h_{l}^{4/3}} u_{l,x} \Vert \mathbf {u}_{l} \Vert ,\\ S_{l_{y}}(U) = -g h_{l} \frac{n_{l}^{2}}{h_{l}^{4/3}} u_{l,y} \Vert \mathbf {u}_{l} \Vert , \end{cases}\displaystyle \quad l=1,2, $$

where \(n_{l} > 0\) (\(l=1,2\)) is the Manning coefficient. Note that \(S_{1}(U)\) is only defined where \(h_{2}(x,y,t) = 0\). In this case, \(m_{f} = 0\) and \(n_{2} = 0\). Similarly, if \(h_{1}(x,y,t) = 0\) then \(m_{f} = 0\) and \(n_{1} = 0\).

Finally, the Coulomb friction term \(\boldsymbol {\tau }= (\tau_{x}, \tau_{y})\) controls the stopping mechanism of the landslide and it is defined as follows:

$$\begin{aligned} &{\mbox{If } \Vert \boldsymbol {\tau }\Vert \geq\sigma^{c} \quad \Rightarrow\quad \textstyle\begin{cases} \tau_{x} = -g (1-r) h_{2} \frac{q_{2,x}}{\Vert \mathbf {q}_{2} \Vert } \tan(\alpha), \\ \tau_{y} = -g (1-r) h_{2} \frac{q_{2,y}}{\Vert \mathbf {q}_{2} \Vert } \tan(\alpha), \end{cases}\displaystyle } \\ &{\mbox{If } \Vert \boldsymbol {\tau }\Vert < \sigma^{c} \quad \Rightarrow\quad q_{2,x} = 0, q_{2,y} = 0, } \end{aligned}$$

where \(\sigma^{c} = g (1-r) h_{2} \tan(\alpha)\), with α the Coulomb friction angle. Let us remark that r is set to zero in \(\sigma^{c}\) and τ if \(h_{1}(x,y,t)=0\), that is, for an aerial landslide.
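
Since the Coulomb term is treated semi-implicitly together with the other friction terms (see Section 3), a common discrete enforcement of the stopping criterion compares the available granular discharge with \(\Delta t\,\sigma^{c}\). The following Python sketch illustrates this idea for a single cell; it is a hedged illustration under that assumption and not necessarily the exact discretization used in [6, 39, 40].

```python
import numpy as np

def apply_coulomb_friction(q2x, q2y, h1, h2, r, alpha, dt, g=9.81, eps=1e-8):
    """Discrete Coulomb stopping mechanism for one cell (illustrative sketch only).

    alpha is the Coulomb friction angle in radians.  The critical value is
    sigma_c = g (1 - r) h2 tan(alpha); if the granular discharge cannot overcome
    dt * sigma_c the layer is brought to rest, otherwise the discharge is reduced
    in the direction opposite to the motion.  For an aerial landslide (h1 = 0)
    the density ratio r is set to zero, as stated above.
    """
    if h1 <= eps:
        r = 0.0
    sigma_c = g * (1.0 - r) * h2 * np.tan(alpha)
    q_norm = np.hypot(q2x, q2y)
    if q_norm <= eps or q_norm <= dt * sigma_c:
        return 0.0, 0.0                      # stopping criterion met: q2 is set to zero
    scale = 1.0 - dt * sigma_c / q_norm      # reduce |q2| by dt * sigma_c
    return scale * q2x, scale * q2y
```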

Note that the previous model reduces to the usual one-layer shallow-water system if \(h_{2}=0\) and to the Savage-Hutter model if \(h_{1}=0\).

Finally, some stationary solutions of interest for the above system are those corresponding to water at rest (\({\mathbf {u}}_{l}={\mathbf{0}}\), \(l=1,2\)), given by

$$ \textstyle\begin{cases}{ \mathbf {u}}_{1}={\mathbf {u}}_{2}=0,\\ h_{1}+h_{2}-H=\mbox{constant},\\ \partial_{x}(h_{2}-H)< \tan(\delta_{0}), \\ \partial_{y}(h_{2}-H)< \tan (\delta_{0}) \end{cases} $$
(4)

and, in particular,

$$ \textstyle\begin{cases} {\mathbf {u}}_{1}={\mathbf {u}}_{2}=0,\\ h_{1}+h_{2}-H=\mbox{constant},\\ h_{2}-H=\mbox{constant}, \end{cases} $$
(5)

which are solutions of the previous system.

Notice that both systems (1) and (3) can be rewritten as

$$ W_{t}+\mathcal{A}_{1}(W)W_{x}+ \mathcal{A}_{2}(W)W_{y}=\widetilde{S}_{F}(W), $$
(6)

by considering \(W=[ U,H]^{T}\) and

$$\mathcal{A}_{i}(W)= \left( \begin{matrix} J_{i}(U)-B_{i}(U) & -S_{i}(U) \\ 0 & 0 \end{matrix}\right) ,\quad i=1,2, $$

where \(J_{i}(U)=\frac{\partial F_{i}}{\partial U} (U)\), \(i=1,2\) denote the Jacobians of the fluxes \(F_{i}\), \(i=1,2\) and

$$\widetilde{S}_{F}(W)= \left( \begin{matrix} S_{F}(U) \\ 0 \end{matrix}\right) . $$

The term \(\widetilde{S}_{F}(W)\), which corresponds to the different parameterizations of the friction terms, will be discretized semi-implicitly as in [6, 39] or [40]. Therefore, it will be neglected at this stage, and we consider the homogeneous system

$$ W_{t}+\mathcal{A}_{1}(W)W_{x}+ \mathcal{A}_{2}(W)W_{y}=0, $$
(7)

where \(W(\mathbf{x},t)\) takes values on a convex domain Ω of \(\mathbb{R}^{N}\) and \(\mathcal{A}_{i}\), \(i=1,2\), are two smooth and locally bounded matrix-valued functions from Ω to \(\mathcal {M}_{N \times N}(\mathbb{R})\). We also assume that (7) is strictly hyperbolic, i.e. for all \(W \in\Omega\) and \(\forall{ \boldsymbol {\eta }}=(\eta_{x},\eta _{y}) \in\mathbb{R}^{2}\), the matrix

$$\mathcal{A}(W,{\boldsymbol {\eta }})=\mathcal{A}_{1}(W)\eta_{x}+ \mathcal{A}_{2}(W)\eta_{y} $$

has N real and distinct eigenvalues

$$\lambda_{1}(W,\boldsymbol {\eta }) < \cdots< \lambda_{N}(W,\boldsymbol {\eta }) $$

and \(\mathcal{A}(W,\boldsymbol {\eta })\) is thus diagonalizable.

The nonconservative products \(\mathcal{A}_{1}(W)W_{x}\) and \(\mathcal {A}_{2}(W)W_{y}\) do not make sense as distributions if W is discontinuous. However, the theory developed by Dal Maso, LeFloch and Murat in [7] makes it possible to give a rigorous definition of nonconservative products as bounded measures, provided that a family of Lipschitz continuous paths \(\Phi\colon[0,1]\times\Omega\times \Omega\times\mathcal{S}^{1}\to\Omega\) is prescribed, where \(\mathcal {S}^{1} \subset\mathbb{R}^{2}\) denotes the unit sphere. This family must satisfy certain natural regularity conditions, in particular:

  1. 1.

    \(\Phi(0;W_{L},W_{R}, \boldsymbol {\eta })=W_{L}\) and \(\Phi(1;W_{L},W_{R}, \boldsymbol {\eta })=W_{R}\), for any \(W_{L},W_{R}\in\Omega\), \(\boldsymbol {\eta }\in\mathcal{S}^{1}\).

  2. 2.

    \(\Phi(s;W_{L},W_{R}, \boldsymbol {\eta }) = \Phi(1-s;W_{R},W_{L}, -\boldsymbol {\eta })\), for any \(W_{L},W_{R}\in\Omega\), \(s \in[0,1]\), \(\boldsymbol {\eta }\in\mathcal{S}^{1}\).

The choice of this family of paths should be based on the physics of the problem: for instance, it should be based on the viscous profiles corresponding to a regularized system in which some of the neglected terms (e.g. the viscous terms) are taken into account. Unfortunately, the explicit calculation of viscous profiles for a regularization of (7) is in general a difficult task. A detailed description of how paths can be chosen is discussed in [8]. An alternative is the ‘canonical’ choice given by the family of segments:

$$ \Phi(s; W_{L}, W_{R}, \boldsymbol {\eta }) = W_{L} + s(W_{R} - W_{L}), $$
(8)

which corresponds to the definition of nonconservative products proposed by Volpert (see [41]). As shown in [42], the family of segments is a sensible choice, as it provides a third-order approximation in the phase plane of the correct jump condition.
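
For the segment path (8), the path integrals appearing in (9) and in the generalized Roe property below can be approximated with a standard quadrature rule. The following Python sketch uses a three-point Gauss-Legendre rule on \([0,1]\); the function names and the quadrature choice are assumptions made for illustration.

```python
import numpy as np

def segment_path(s, WL, WR):
    """Canonical segment path (8): Phi(s; WL, WR) = WL + s (WR - WL)."""
    return WL + s * (WR - WL)

def path_integral(A, WL, WR, eta):
    """Approximate int_0^1 A(Phi(s), eta) dPhi/ds ds for the segment path (sketch).

    A(W, eta) is a user-supplied function returning an N x N matrix; WL, WR are
    NumPy arrays of length N.  A 3-point Gauss-Legendre rule on [0, 1] is used,
    an arbitrary but sufficient choice for the smooth integrands appearing here.
    """
    nodes = 0.5 * (1.0 + np.array([-np.sqrt(3.0 / 5.0), 0.0, np.sqrt(3.0 / 5.0)]))
    weights = 0.5 * np.array([5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0])
    dPhi = WR - WL                      # dPhi/ds is constant for segments
    acc = np.zeros_like(dPhi, dtype=float)
    for s, w in zip(nodes, weights):
        acc += w * (A(segment_path(s, WL, WR), eta) @ dPhi)
    return acc
```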

Suppose that a family of paths Φ in Ω has been chosen. Then a piecewise regular function W is a weak solution of (7) if and only if the two following conditions are satisfied:

  1. (i)

    W is a classical solution where it is smooth.

  2. (ii)

    At every point of a discontinuity W satisfies the jump condition

    $$ \int_{0}^{1} \bigl(\sigma\mathcal{I}- \mathcal{A} \bigl(\Phi \bigl(s;W^{-},W^{+}, \boldsymbol {\eta }\bigr), \boldsymbol {\eta }\bigr) \bigr) \frac {\partial\Phi}{\partial s} \bigl(s;W^{-},W^{+}, \boldsymbol {\eta }\bigr)\,ds=0, $$
    (9)

    where \(\mathcal{I}\) is the identity matrix; σ, the speed of propagation of the discontinuity; η a unit vector normal to the discontinuity at the considered point; and \(W^{-}\), \(W^{+}\), the lateral limits of the solution at the discontinuity.

As in conservative systems, together with the definition of weak solutions, a notion of entropy has to be chosen. We will assume here that the system can be endowed with an entropy pair \((\mathcal{H}, \mathbf{G})\), i.e. a pair of regular functions \(\mathcal{H}: \Omega \to\mathbb{R}\) and \(\mathbf{G} = (G_{1}, G_{2}): \Omega\to\mathbb{R}^{2}\) such that:

$$\nabla G_{i}(W) = \nabla\mathcal{H}(W) \cdot\mathcal{A}_{i}(W), \quad \forall W \in\Omega, i =1,2. $$

Then, a weak solution is said to be an entropy solution if it satisfies the inequality

$$\partial_{t} \mathcal{H}(W) + \partial_{x}G_{1}(W) + \partial_{y}G_{2} (W) \leq0, $$

in the sense of distributions.

3 High-order finite volume schemes

To discretize (7) the computational domain D is decomposed into subsets with a simple geometry, called cells or finite volumes: \(V_{i} \subset \mathbb {R}^{2}\). It is assumed that the cells are closed convex polygons whose intersections are either empty, a complete edge or a vertex. Denote by \(\mathcal{T}\) the mesh, i.e., the set of cells, and by NV the number of cells. In this work, we consider rectangular structured meshes, but the derivation of the scheme is done for arbitrary meshes.

Given a finite volume \(V_{i}\), \(\vert V_{i}\vert\) will represent its area; \(N_{i} \in \mathbb{R}^{2}\) its center; \(\mathcal{N}_{i}\) the set of indexes j such that \(V_{j}\) is a neighbor of \(V_{i}\); \(E_{ij}\) the common edge of two neighboring cells \(V_{i}\) and \(V_{j}\), and \(\vert E_{ij} \vert\) its length; \(d_{ij}\) the distance from \(N_{i}\) to \(E_{ij}\); \(\boldsymbol {\eta }_{ij}=(\eta_{ij,x},\eta_{ij,y})\) the normal unit vector at the edge \(E_{ij}\) pointing towards the cell \(V_{j}\); \(W_{i}^{n}\) the constant approximation to the average of the solution in the cell \(V_{i}\) at time \(t^{n}\) provided by the numerical scheme:

$$W_{i}^{n}\cong\frac{1}{\vert V_{i}\vert} \int_{V_{i}}W \bigl(\mathbf {x},t^{n} \bigr)\,d\mathbf {x}. $$

Given a family of paths Φ, a Roe linearization of system (7) is a function

$$\mathcal{A}_{\Phi}\colon\Omega\times\Omega\times S^{1} \rightarrow\mathcal{M}_{N}(\mathbb {R}) $$

satisfying the following properties for each \(W_{L}, W_{R}\in\Omega\) and \(\boldsymbol {\eta }\in S^{1}\):

  1. 1.

    \(\mathcal{A}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\) has N distinct real eigenvalues

    $$\lambda_{1} (W_{L},W_{R}, \boldsymbol {\eta })< \lambda_{2}(W_{L},W_{R}, \boldsymbol {\eta }) < \cdots< \lambda_{N}(W_{L},W_{R}, \boldsymbol {\eta }). $$
  2. 2.

    \(\mathcal{A}_{\Phi}(W,W, \boldsymbol {\eta })=\mathcal{A}(W, \boldsymbol {\eta })\).

  3. 3.
    $$\begin{aligned}& \mathcal{A}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\cdot(W_{R}-W_{L}) \\& \quad =\int_{0}^{1}\mathcal{A} \bigl(\Phi(s;W_{L},W_{R}, \boldsymbol {\eta }), \boldsymbol {\eta }\bigr) \frac{\partial\Phi}{\partial s}(s;W_{L},W_{R}, \boldsymbol {\eta })\,ds. \end{aligned}$$
    (10)

Note that in the particular case in which \(\mathcal{A}_{k}(W)\), \(k=1,2\), are the Jacobian matrices of smooth flux functions \(F_{k}(W)\), property (10) does not depend on the family of paths and reduces to the usual Roe property:

$$ \mathcal{A}_{\Phi}(W_{L}, W_{R}, \boldsymbol {\eta })\cdot(W_{R} - W_{L}) = F_{\boldsymbol {\eta }} (W_{R}) - F_{\boldsymbol {\eta }}(W_{L}) $$
(11)

for any \(\boldsymbol {\eta }\in S^{1}\), where

$$F_{\boldsymbol {\eta }}(U)=\eta_{x} F_{1}(U)+\eta_{y} F_{2}(U). $$

Given a Roe matrix \(\mathcal{A}_{\Phi}(W_{L},W_{R},\boldsymbol {\eta })\), let us define a decomposition of it as follows:

$$\widehat{\mathcal{A}}^{\pm}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta }) = \frac {1}{2} \bigl( \mathcal{A}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta }) \pm\mathcal{Q}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta }) \bigr), $$

where \(\mathcal{Q}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\) is a positive-definite matrix that can be seen as the viscosity matrix associated with the method.

Now, it is straightforward to define a path-conservative scheme in the sense defined in [9] based on the previous decomposition:

$$ W_{i}^{n+1}=W_{i}^{n}- \frac{\Delta t}{\vert V_{i} \vert}\sum_{j \in\mathcal{N}_{i}} \vert E_{ij} \vert\widehat{\mathcal{A}}_{ij}^{-} \cdot \bigl(W_{j}^{n}-W_{i}^{n} \bigr), $$
(12)

where \(\widehat{\mathcal{A}}_{ij}^{-}= \widehat{\mathcal{A}}^{-}_{\Phi}(W_{i},W_{j}, \boldsymbol {\eta }_{ij})\).
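
A minimal sketch of the resulting first-order update (12) might look as follows; the data layout (cell arrays, neighbour lists) and the routine computing \(\widehat{\mathcal{A}}^{-}_{\Phi}\) are hypothetical interfaces introduced only for illustration, not the GPU implementation actually used in this work.

```python
import numpy as np

def path_conservative_step(W, mesh, A_minus, dt):
    """One explicit step of the path-conservative scheme (12) (illustrative sketch).

    W       : array of shape (num_cells, N) with the cell averages W_i^n
    mesh    : assumed object exposing area[i] and neighbours[i], a list of tuples
              (j, edge_length, eta, d_ij) for each edge E_ij of cell V_i
    A_minus : user-supplied function (W_i, W_j, eta) -> N x N matrix approximating
              the negative part of the intermediate matrix, A^-_Phi(W_i, W_j, eta)
    """
    W_new = W.copy()
    for i in range(W.shape[0]):
        acc = np.zeros(W.shape[1])
        for (j, edge_len, eta, _d_ij) in mesh.neighbours[i]:
            acc += edge_len * (A_minus(W[i], W[j], eta) @ (W[j] - W[i]))
        W_new[i] = W[i] - dt / mesh.area[i] * acc
    return W_new
```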

Moreover, taking into account the structure of the matrix \(\mathcal {A}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\), it is possible to rewrite (12) for the system (1) or (3) as follows:

$$ U_{i}^{n+1}=U_{i}^{n} - \frac{\Delta t}{\vert V_{i} \vert} \sum_{j \in\mathcal{N}_{i}} \vert E_{ij} \vert\widehat{D}_{\Phi}^{-}(U_{i}, U_{j}, H_{i}, H_{j}, \boldsymbol {\eta }_{ij}), $$
(13)

where

$$\begin{aligned} \widehat{D}_{\Phi}^{\pm}(U_{i}, U_{j},H_{i},H_{j}, \boldsymbol {\eta }_{ij}) =& \frac {1}{2} \bigl(F_{\boldsymbol {\eta }_{ij}}(U_{j})-F_{\boldsymbol {\eta }_{ij}}(U_{i}) -B_{ij}\cdot(U_{j}-U_{i}) \\ &{} -S_{ij}(H_{j}-H_{i}) \\ &{} \pm Q_{ij}\cdot\bigl(U_{j}-U_{i} - A_{ij}^{-1}\cdot S_{ij}(H_{j}-H_{i}) \bigr) \bigr), \end{aligned}$$
(14)

where the path is supposed to be given by \(\Phi=(\Phi_{U} \Phi_{H})^{T}\) and

$$\begin{aligned} B_{ij}\cdot(U_{j} -U_{i}) =& B_{\Phi}(W_{i}, W_{j},\boldsymbol {\eta }_{ij}) \cdot (U_{j}- U_{i}) \\ =& \int_{0}^{1} B_{\boldsymbol {\eta }_{ij}}\bigl( \Phi_{U}(s;W_{i}, W_{j}, \boldsymbol {\eta }_{ij}) \bigr) \frac{\partial\Phi_{U}}{\partial s}(s; W_{i},W_{j}, \boldsymbol {\eta }_{ij})\,ds \end{aligned}$$

with

$$\begin{aligned}& B_{\boldsymbol {\eta }}(U)=\eta_{x} B_{1}(U) + \eta_{y} B_{2}(U); \\& S_{ij}(H_{j}-H_{i}) = S_{\Phi}(W_{i}, W_{j}, \boldsymbol {\eta }_{ij}) (H_{j}-H_{i}) \\& \hphantom{S_{ij}(H_{j}-H_{i})}= \int_{0}^{1} S_{\boldsymbol {\eta }_{ij}}\bigl( \Phi_{U}(s; W_{i}, W_{j}, \boldsymbol {\eta }_{ij}) \bigr) \frac{\partial\Phi_{H}}{\partial s}(s; W_{i},W_{j}, \boldsymbol {\eta }_{ij})\,ds \end{aligned}$$

with

$$S_{\boldsymbol {\eta }}(U)=\eta_{x} S_{1}(U)+\eta_{y} S_{2}(U). $$

The matrix \(A_{ij}\) is defined as follows

$$A_{ij}=A_{\Phi}(W_{i},W_{j}, \boldsymbol {\eta })= J_{ij} + B_{ij}, $$

where \(J_{ij}\) is a Roe matrix for the flux \(F_{\boldsymbol {\eta }}(U)\), that is

$$J_{ij} \cdot(U_{j}-U_{i})=F_{\boldsymbol {\eta }_{ij}}(U_{j})-F_{\boldsymbol {\eta }_{ij}}(U_{i}). $$

Considering segments as paths, it is straightforward to compute the matrices \(B_{ij}\), \(J_{ij}\) and the vector \(S_{ij}\) for systems (1) and (3) (see [43] for the detailed expressions).

Finally, in order to fully define the numerical scheme (13)-(14), the matrix \(Q_{ij}\), which plays the role of the viscosity matrix, should be specified. The Roe method is obtained if \(Q_{ij}=\vert A_{ij}\vert\). Note that with this choice one needs to perform the complete spectral decomposition of the matrix \(A_{ij}\). In many situations, as in the case of the multilayer shallow-water system, it is not possible to obtain an analytical expression of the eigenvalues and eigenvectors, and a numerical algorithm must be used to perform the spectral decomposition of \(A_{ij}\), increasing the computational cost of the Roe method.

A cheap alternative is given by the local Lax-Friedrichs (or Rusanov) method, in which the matrix \(Q_{ij}\) can be seen as an approximation of \(\vert A_{ij}\vert\) given by a diagonal matrix defined in terms of the largest eigenvalue of \(A_{ij}\) in absolute value. However, this approach introduces excessive dissipation for the waves corresponding to the other eigenvalues. At the beginning of the eighties, Harten, Lax and van Leer [44] realized that \(\vert A_{ij}\vert\) could be computed as a polynomial evaluation \(p(A_{ij})\), where \(p(x)\) interpolates \(\vert x\vert\) at the eigenvalues of \(A_{ij}\). They proposed to use a linear interpolation based on the smallest and largest eigenvalues of \(A_{ij}\), which results in a considerable improvement over local Lax-Friedrichs at a computational cost much lower than the one required for the full computation of \(\vert A_{ij}\vert\). This idea is the basis of the HLL method.
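
As an illustration of this idea, the sketch below builds the HLL-type viscosity matrix as the linear interpolant of \(\vert x\vert\) at the smallest and largest eigenvalues of \(A_{ij}\); only estimates of these two eigenvalues are required, not the full spectral decomposition. The function name and the fallback to the local Lax-Friedrichs choice are illustrative.

```python
import numpy as np

def hll_viscosity_matrix(A, lam_min, lam_max):
    """HLL-type viscosity matrix Q = p(A), with p the linear interpolant of |x|
    at the smallest and largest eigenvalues of A (illustrative sketch).

    lam_min, lam_max : estimates of the extreme eigenvalues of A (assumed given).
    Falls back to the local Lax-Friedrichs choice Q = max(|lam|) I when the two
    estimates coincide.
    """
    if np.isclose(lam_max, lam_min):
        return max(abs(lam_min), abs(lam_max)) * np.eye(A.shape[0])
    a1 = (abs(lam_max) - abs(lam_min)) / (lam_max - lam_min)
    a0 = abs(lam_min) - a1 * lam_min    # p(x) = a0 + a1 x matches |x| at both eigenvalues
    return a0 * np.eye(A.shape[0]) + a1 * A
```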

To our knowledge, the paper by Degond et al. [45] contains the first attempt to construct a simple approximation of \(\vert A_{ij}\vert\) by means of a polynomial that approximates \(\vert x\vert\) without interpolating it exactly on the eigenvalues. This approach has been extended to a general framework in a recent paper [46], where the authors introduce the so-called PVM (Polynomial Viscosity Matrix) methods, which are defined in terms of viscosity matrices based on general polynomial evaluations of a given Roe matrix or the Jacobian of the flux at some other average value.

Stability issues require that the graph of the polynomial defining a PVM method lies above the graph of the absolute value function. Moreover, the behavior of a PVM scheme is closer to that of Roe’s method the closer its basis polynomial is to \(\vert x\vert\) in the uniform norm. This fact suggests the idea of using accurate approximations of \(\vert x\vert\) to build PVM schemes that give results comparable to Roe’s method, but with a much smaller computational cost. Following this idea, in [47] the authors propose a new PVM scheme based on Chebyshev polynomials, which provide optimal uniform approximations to \(\vert x\vert\). Moreover, the order of approximation to \(\vert x\vert\) can be greatly improved by using rational functions instead of polynomials. This allows one to define a new family of schemes, called RVM (Rational Viscosity Matrix) schemes, which use appropriate rational functions to construct the viscosity matrices \(Q_{ij}\). It is important to point out that RVM methods constitute a class of general-purpose Riemann solvers that are constructed using a Roe matrix \(A_{ij}\) (or, more generally, the matrix \(A(U, \boldsymbol {\eta })\) evaluated at some average computed from the left and right states) and an estimate of its spectral radius, without making use of the spectral decomposition of the Roe matrix.

Concerning the convergence of path-conservative schemes in the presence of non-conservative products, it has been proved in [48] and [8] that, in general, the numerical solutions provided by a path-conservative numerical scheme converge to functions which solve a perturbed system in which an error source term appears on the right-hand side. The appearance of this source term, which is a measure supported on the discontinuities, was first observed in [49] when a scalar conservation law is discretized by means of a nonconservative numerical method. Nevertheless, in certain special situations the convergence error vanishes for finite difference methods: this is the case for systems of balance laws (see [50]). Moreover, for more general problems, even when the convergence error is present, it may only be noticeable for very fine meshes, for discontinuities of large amplitude, and/or for long-time simulations: see [8, 48] for details.

Finally, as usual, a CFL condition must be imposed to ensure stability:

$$ \Delta t\cdot\max \biggl\{ \frac{\vert\lambda _{ij,k}\vert}{d_{ij}}; i =1, \dots, NV, j \in{\mathcal {N}}_{i}, k=1,\ldots,N \biggr\} = \delta, $$
(15)

with \(0<\delta\leq1\).
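
In an implementation, the time step is obtained from (15) by a sweep over all edges. The sketch below reuses the hypothetical mesh interface of the earlier sketch (neighbour tuples now providing the distance \(d_{ij}\)) together with a user-supplied routine returning the eigenvalues \(\lambda_{ij,k}\).

```python
def cfl_time_step(W, mesh, eigenvalues, delta=0.9):
    """Time step Delta t from the CFL condition (15) (illustrative sketch).

    mesh.neighbours[i] is assumed to yield tuples (j, edge_length, eta, d_ij),
    with d_ij the distance from the cell centre N_i to the edge E_ij, and
    eigenvalues(W_i, W_j, eta) returns the N eigenvalues lambda_{ij,k}.
    """
    max_ratio = 0.0
    for i in range(W.shape[0]):
        for (j, _edge_len, eta, d_ij) in mesh.neighbours[i]:
            for lam in eigenvalues(W[i], W[j], eta):
                max_ratio = max(max_ratio, abs(lam) / d_ij)
    if max_ratio == 0.0:
        return float("inf")      # no wave motion: no CFL restriction
    return delta / max_ratio     # Delta t such that the ratio in (15) equals delta
```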

3.1 High-order extension

Following [43], the semidiscrete expression of the high-order extension of scheme (13)-(14), based on a given conservative reconstruction operator, is the following:

$$\begin{aligned} U_{i}'(t) =& - \frac{1}{\vert V_{i}\vert} \sum_{j\in\mathcal{N}_{i}} \int_{E_{ij}} \widehat{D}_{\Phi}^{-} \bigl(U_{ij}^{-}( \gamma, t), U_{ij}^{+}(\gamma, t), H_{ij}^{-}(\gamma), H_{ij}^{+}(\gamma), \boldsymbol {\eta }_{ij} \bigr)\,d\gamma \\ &{}- \frac{1}{\vert V_{i}\vert} \sum_{j\in\mathcal{N}_{i}} \int_{E_{ij}} F_{\boldsymbol {\eta }_{ij}} \bigl(U_{ij}^{-}(\gamma,t) \bigr)\,d\gamma \\ &{}+ \frac{1}{\vert V_{i}\vert} \int_{V_{i}} B_{1} \bigl(P_{i}^{t}( \mathbf {x}) \bigr)\frac{\partial P_{i}^{t}}{\partial x}(\mathbf {x}) +B_{2} \bigl(P_{i}^{t}( \mathbf {x}) \bigr)\frac{\partial P_{i}^{t}}{\partial y}(\mathbf {x})\,d\mathbf {x} \\ &{}+ \frac{1}{\vert V_{i}\vert} \int_{V_{i}} S_{1} \bigl(P_{i}^{t}( \mathbf {x}) \bigr)\frac{\partial P_{i}^{H}}{\partial x}(\mathbf {x}) +S_{2} \bigl(P_{i}^{t}( \mathbf {x}) \bigr)\frac{\partial P_{i}^{H}}{\partial y}(\mathbf {x})\,d\mathbf {x}, \end{aligned}$$
(16)

where \(P^{t}_{i}\) is the reconstruction approximation function at time t of \(U_{i}(t)\) at cell \(V_{i}\) defined using the stencil \(\mathcal{B}_{i}\):

$$P_{i}^{t}(\mathbf {x})=P_{i} \bigl(\mathbf {x}; \bigl\{ U_{j}(t) \bigr\} _{ j \in\mathcal{B}_{i}} \bigr), $$

and \(P^{H}_{i}\) is the reconstruction approximation function of H. The functions \(U_{ij}^{\pm}(\gamma,t)\) are given by

$$U_{ij}^{-}(\gamma,t)=\lim_{\mathbf {x}\rightarrow\gamma}P_{i}^{t}( \mathbf {x}),\qquad U_{ij}^{+}(\gamma,t)=\lim_{\mathbf {x}\rightarrow\gamma}P_{j}^{t}( \mathbf {x}), $$

and \(H_{ij}^{\pm}(\gamma)\) are given by

$$H_{ij}^{-}(\gamma)=\lim_{\mathbf {x}\rightarrow\gamma}P_{i}^{H}( \mathbf {x}),\qquad H_{ij}^{+}(\gamma)=\lim_{\mathbf {x}\rightarrow\gamma}P_{j}^{H}( \mathbf {x}). $$

In practice, the integral terms in (16) must be approximated numerically using high-order quadrature formulas whose accuracy must be related to the order of approximation of the reconstruction operator (see [43] for more details). Here, a MUSCL-type reconstruction operator ([51]) that achieves second-order accuracy is used. For time stepping, the second-order TVD Runge-Kutta method described in [52] is used, as sketched below.
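
The following sketch illustrates the two ingredients just mentioned: a minmod-limited MUSCL reconstruction (written here for a single variable on a 1D uniform grid and applied componentwise in practice) and a second-order SSP (TVD) Runge-Kutta step. It is a generic illustration, not the exact reconstruction operator of [51] or [43].

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter used in a MUSCL-type slope reconstruction."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_edge_values(U, dx):
    """Second-order limited edge values on a 1D uniform grid (illustrative sketch).

    Returns (U at the left face, U at the right face) of each cell,
    U_i -/+ 0.5 * dx * slope_i, using a minmod-limited slope; the boundary
    cells are kept first order (zero slope).
    """
    slope = minmod((U[1:-1] - U[:-2]) / dx, (U[2:] - U[1:-1]) / dx)
    slope = np.concatenate(([0.0], slope, [0.0]))
    return U - 0.5 * dx * slope, U + 0.5 * dx * slope

def ssp_rk2_step(U, dt, L):
    """Second-order TVD (SSP) Runge-Kutta step for U' = L(U), as in [52]."""
    U1 = U + dt * L(U)
    return 0.5 * U + 0.5 * (U1 + dt * L(U1))
```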

The well-balancedness properties of scheme (16) and the relation between the reconstructed variables, the reconstruction operators and the quadrature formulas have been analyzed in [43].

4 Monte Carlo method

4.1 Modeling uncertain inputs

As mentioned in the Introduction, it is very difficult to measure in a precise way some of the physical parameters present in systems (1) or (3). This is the case for the parameters present in the friction terms, the ratio of densities in a non-homogeneous medium, or the parameters describing the fault deformation in Okada’s model. Uncertainty in the input values of these parameters leads to uncertainty in the solution of system (1) or (3). Therefore, denoting by \((\Sigma,F, \mathbb {P})\) the complete probability space, \(U(t,\mathbf {x},\xi)\), \(\xi\in\Sigma\), is the solution of the system

$$\begin{aligned}& U_{t}+F_{1}(U)_{x}+ F_{2}(U)_{y} \\& \quad=B_{1}(U,\xi)U_{x} + B_{2}(U, \xi)U_{y}+S_{1}(U)H_{x} + S_{2}(U)H_{y}+S_{F}(U, \xi). \end{aligned}$$
(17)

4.2 Monte Carlo finite volume method

In order to approximate the random system of equations (17), we need to discretize the probability space. The simplest sampling method is the Monte Carlo (MC) algorithm that consists of the following steps:

  1. 1.

    Sample: We draw M independent identically distributed (i.i.d.) samples \(\xi_{k}\) of the random fields.

  2. 2.

    Solve: For each realization of the parameters \(\xi_{k}\), the underlying system is solved. Let the finite volume solutions be denoted by \(U_{\mathcal {T}}^{k,n}\), i.e. by cell averages \(\{U_{i}^{k,n}: V_{i}\in \mathcal {T}\}\) at time level \(t^{n}\),

    $$U_{\mathcal {T}}^{k,n}(x)=U_{i}^{k,n},\quad \forall x\in V_{i}, V_{i}\in \mathcal {T}. $$
  3. 3.

    Estimate statistics: We estimate the expectation of the random solution field with the sample mean (ensemble average) of the approximate solution:

    $$ E_{M} \bigl[U_{\mathcal {T}}^{n} \bigr]:= \frac{1}{M} \sum_{k=1}^{M} U_{\mathcal {T}}^{k,n}. $$
    (18)

    Higher statistical moments can be approximated analogously (see [32]).

The above algorithm is quite simple to implement. We remark that step 1 requires a (pseudo) random number generator (PRNG). In this work we use the Mersenne Twister PRNG [53], which has a period of \(2^{19{,}937}-1\). In step 2, an existing deterministic code can be used. Furthermore, the only (data) interaction between different samples occurs in step 3, when ensemble averages are computed. Thus, the MC method is non-intrusive as well as easily parallelizable.
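
A minimal sketch of the plain MC estimator (18) is given below; the solver and sampling routines are user-supplied placeholders with hypothetical names, and the NumPy generator backed by MT19937 stands in for the Mersenne Twister PRNG mentioned above.

```python
import numpy as np

def mc_estimate(solve, sample_params, M, rng):
    """Plain Monte Carlo estimate (18) of the mean field (illustrative sketch).

    solve(params)      : user-supplied deterministic FV solver returning the
                         cell averages U_T^{k,n} as a NumPy array
    sample_params(rng) : user-supplied function drawing one i.i.d. realization
                         of the uncertain inputs (e.g. friction, density ratio)
    M                  : number of MC samples, chosen as M = O(dx^{-2s}), cf. (20)
    """
    mean = None
    for _ in range(M):
        U = solve(sample_params(rng))
        mean = U.copy() if mean is None else mean + U
    return mean / M

# usage (hypothetical solver and sampler names):
# rng = np.random.Generator(np.random.MT19937(0))   # Mersenne Twister backed generator
# E_M = mc_estimate(solve_shallow_water, draw_uncertain_inputs, M=1024, rng=rng)
```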

Although a rigorous error estimate for the MC method approximating the shallow-water systems (1) or (3) is currently out of reach, we rely on the analysis for a scalar conservation law (see [32]) and on the numerical experience with the MLMC-FV solution of nonlinear hyperbolic systems of conservation laws with random initial data (see [36]) to postulate that the following estimate holds if the solution has finite second moments:

$$ \bigl\Vert \mathbb {E}\bigl[U \bigl(\cdot,t^{n} \bigr) \bigr]-E_{M} \bigl[U_{\mathcal {T}}^{n} \bigr] \bigr\Vert _{L^{2}(\Sigma;L^{1}(D))} \leq C_{\mathrm{stat}}M^{-1/2} + C_{st} \Delta x^{s}. $$
(19)

Here, the \(L^{2}(\Sigma;L^{1}(D))\)-norm of the random function \(f(\cdot ,\xi)\) is defined as

$$\Vert f\Vert _{L^{2}(\Sigma;L^{1}(D))}:= \biggl( \int_{w\in\Sigma} \bigl\Vert f(\cdot,\xi) \bigr\Vert ^{2}_{L^{1}(D)}\,d\mathbb {P}(\xi) \biggr)^{\frac{1}{2}}, $$

and \(C_{\mathrm{stat}}\), \(C_{st}\) are constants that depend on the domain D, the initial condition, the topography, the time horizon T and the statistics of the different random parameters, in particular on their mean and variance. In the above, we have assumed that the underlying finite volume scheme converges to the solutions of the deterministic system (1) or (3) at a rate \(s > 0\). Moreover, in (19) and throughout the following, we adopt the convention (customary in the analysis of MC methods) of interpreting the MC samples \(U_{\mathcal {T}}^{k,n}\) in (18) as i.i.d. random functions with the same law as U. Based on the error analysis of [32], we need to choose

$$ M=\mathcal {O} \bigl(\Delta x^{-2s} \bigr) $$
(20)

in order to equilibrate the statistical error with the spatio-temporal error in (19).

Consequently, it is straightforward to deduce that the asymptotic error vs. (computational) work estimate is given by (see [32])

$$\bigl\Vert \mathbb {E}\bigl[U \bigl(\cdot,t^{n} \bigr) \bigr]-E_{M} \bigl[U_{\mathcal {T}}^{n} \bigr] \bigr\Vert _{L^{2}(\Sigma;L^{1}(D))} \lesssim (\mathrm{Work})^{-s/(d+1+2s)}, $$

where d is the space dimension (in this paper \(d=1\) or \(d=2\)). The above error vs. work estimate is considerably worse than that of the deterministic FVM, whose error scales as \((\mathrm{Work})^{-s/(d+1)}\). We see that for a low convergence order s and low space dimension, a considerably reduced rate of convergence of the MC-FVM, in terms of accuracy vs. work, is obtained. On the other hand, for high-order schemes (i.e. \(s\gg d+1\)) the MC error dominates and we recover the rate \(1/2\) in terms of work, which is typical of MC methods.

5 Multi level Monte Carlo finite volume method

Given the slow convergence of MC-FV, [32] and [36] proposed the Multi-Level Monte Carlo finite volume method (MLMC-FV). The key idea behind MLMC-FV is to simultaneously draw MC samples on a hierarchy of nested grids.

The algorithm consists of the following four steps:

  1. 1.

    Nested meshes: Consider nested meshes \(\{\mathcal {T}_{l}\} _{l=0}^{\infty}\) of the spatial domain D with corresponding mesh diameters \(\Delta x_{l}\) that satisfy:

    $$\Delta x_{l}= \sup \bigl\{ \operatorname{diam}(V_{i}) : V_{i} \in \mathcal {T}_{l} \bigr\} = \mathcal {O} \bigl(2^{-l} \Delta x_{0} \bigr),\quad l\in\mathbb{N}_{0} $$

    where \(\Delta x_{0}\) is the mesh width for the coarsest resolution and corresponds to the lowest level \(l=0\).

  2. 2.

    Sampling: For each level of resolution \(l\in\mathbb {N}_{0}\), we draw \(M_{l}\) independent identically distributed (i.i.d.) samples of \(\xi_{l}^{k}\), \(k = 1, 2,\ldots ,M_{l}\) belonging to the set of admissible parameters for the model.

  3. 3.

    Solving: For each resolution level l and each realization \(\xi_{l}^{k}\), the underlying system (17) is solved using mesh \(\mathcal {T}_{l}\). Let the finite volume solutions be denoted by \(U_{\mathcal {T}_{l}}^{k,n}\) for the mesh \(\mathcal {T}_{l}\) and at the time level \(t_{n}\).

  4. 4.

    Estimate solution statistics: Fix some positive integer \(L < \infty\) corresponding to the highest level. We estimate the expectation of the random solution field with the following estimator:

    $$ E^{L} \bigl[U \bigl(\cdot,t^{n} \bigr) \bigr]:= E_{M_{0}} \bigl[U_{\mathcal {T}_{0}}^{n} \bigr] + \sum _{l=1}^{L} E_{M_{l}} \bigl[U_{\mathcal {T}_{l}}^{n} - U_{\mathcal {T}_{l-1}}^{n} \bigr], $$
    (21)

    with \(E_{M_{l}}\) being the MC estimator

    $$ E_{M_{l}} \bigl[U_{\mathcal {T}}^{n} \bigr]:= \frac{1}{M_{l}} \sum_{k=1}^{M_{l}} U_{\mathcal {T}}^{k,n} $$
    (22)

    for the level l. Higher statistical moments can be approximated analogously. (See [32].)

MLMC-FV is non-intrusive as any standard FVM code can be used in step 3. Furthermore, MLMC-FV is amenable to efficient parallelization as data from different grid resolutions and different samples only interacts in step 4.
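
A minimal Python sketch of the MLMC estimator (21)-(22) follows; the per-level solver is a user-supplied placeholder that is assumed to return solutions prolongated to a common grid, so that the level differences are well defined (an implementation detail not discussed above). Note that the two solves contributing to each detail term use the same parameter sample.

```python
import numpy as np

def mlmc_estimate(solve, sample_params, M_levels, rng):
    """MLMC estimate (21) of the mean field (illustrative sketch).

    solve(params, level) : user-supplied FV solver on mesh T_level, returning the
                           solution prolongated to a common (finest) grid
    sample_params(rng)   : draws one i.i.d. realization of the uncertain inputs
    M_levels             : list [M_0, ..., M_L] of samples per level
    """
    estimate = None
    for level, M_l in enumerate(M_levels):
        level_sum = None
        for _ in range(M_l):
            params = sample_params(rng)
            detail = solve(params, level)
            if level > 0:
                # same sample on the next coarser mesh: the MLMC "detail" term
                detail = detail - solve(params, level - 1)
            level_sum = detail if level_sum is None else level_sum + detail
        level_mean = level_sum / M_l           # MC estimator (22) on this level
        estimate = level_mean if estimate is None else estimate + level_mean
    return estimate
```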

Following the rigorous error estimates in [32, 36], we consider

$$ \bigl\Vert \mathbb {E}\bigl[U \bigl(\cdot ,t^{n} \bigr) \bigr]-E^{L} \bigl[U_{\mathcal {T}}^{n} \bigr] \bigr\Vert _{L^{2}(\Sigma;L^{1}(D))} \leq C_{1}\Delta x^{s}_{L} + C_{2} \Biggl\{ \sum_{l=0}^{L} M_{l}^{-1/2}\Delta x_{l}^{s} \Biggr\} + C_{3} M_{0}^{-1/2}. $$
(23)

Here s again refers to the convergence rate of the deterministic finite volume scheme and \(C_{1,2,3}\) are constants depending only on the initial data, the parameters and the source term. From the error estimate (23), we obtain that the number of samples to equilibrate the statistical and spatio-temporal discretization errors in (21) is given by

$$M_{l}=\mathcal {O} \bigl(2^{2(L-l)s} \bigr). $$

Notice that the choice of \(M_{l}\) implies that the largest number of MC samples is required on the coarsest mesh level \(l=0\), whereas only a small fixed number of MC samples are needed on the finest discretization levels.
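
A direct transcription of this sample-number choice, with the finest-level sample count \(M_{L}\) and the convergence rate s as inputs, might read as follows (the rounding is an implementation choice):

```python
def mlmc_sample_numbers(L, M_L, s):
    """Per-level sample numbers M_l = M_L * 2^{2 (L - l) s} (illustrative sketch)."""
    return [int(round(M_L * 2 ** (2 * (L - l) * s))) for l in range(L + 1)]

# e.g. mlmc_sample_numbers(L=5, M_L=16, s=0.5) gives [512, 256, 128, 64, 32, 16];
# the finest-level value 16 matches the choice quoted in the experiments of Section 6.
```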

The corresponding error vs. work estimate for MLMC-FV is given by (see [32, 36])

$$ \bigl\Vert \mathbb {E}\bigl[U \bigl(\cdot ,t^{n} \bigr) \bigr]-E^{L} \bigl[U_{\mathcal {T}}^{n} \bigr] \bigr\Vert _{L^{2}(\Sigma;L^{1}(D))} \lesssim(\mathrm{Work})^{-s/(d+1)} \cdot\log ( \mathrm{Work}), $$
(24)

provided \(s < (d + 1)/2\). The above estimate shows that MLMC-FV is more efficient than MC-FV. Also, MLMC-FV is (asymptotically) of the same complexity as a single deterministic FVM solve.

6 Numerical experiments

6.1 Submarine landslide over a flat bottom topography

As a first numerical experiment, we consider a 1D example where the computational domain is \(x \in[-5,5]\) with transparent boundary conditions at both boundaries, and a flat bottom topography, specified by \(H(x) = 2\). The initial data for the problem is

$$\begin{aligned}& h_{1}(x,0)= \textstyle\begin{cases} 2 & \mbox{if } \vert x\vert\geq1, \\ 0.5 & \mbox{if } \vert x\vert< 1, \end{cases}\displaystyle \\& h_{2}(x,0)= \textstyle\begin{cases} 0.5 & \mbox{if } \vert x\vert\geq1, \\ 1.5 & \mbox{if } \vert x\vert< 1, \end{cases}\displaystyle \end{aligned}$$

and

$$ u_{1}(x,0)=0,\qquad u_{2}(x,0)=0. $$

Hence, our aim is to simulate a submarine landslide: the fluidized granular material (denoted by the index 2) slides under the water layer (denoted by the index 1) and initiates a motion of the free surface.

In this example we consider that the ratio of the densities of the two layers r, the Coulomb angle \(\delta_{0}\) and the interlayer friction parameter \(c_{f}\) are uncertain. We assume that these parameters are random variables following uniform distributions with the following mean values,

$$c_{f} = 0.0001,\qquad r = 0.5,\qquad \delta_{0} = 35^{\circ}. $$

Furthermore, the variation is assumed to be 40% around the mean for each of the three uniformly distributed uncertain parameters. Such a variability is fairly representative of the variability in experiments and observations.
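
The following sketch draws one realization of the three uncertain inputs; interpreting the ‘40% variation’ as a symmetric uniform interval around the mean is an assumption made here purely for illustration, and the function and key names are hypothetical.

```python
import numpy as np

def sample_uncertain_inputs(rng, variation=0.4):
    """Draw one realization of (c_f, r, delta_0), uniformly distributed around the
    stated mean values with +/- 40% variation (illustrative sketch)."""
    means = {"c_f": 0.0001, "r": 0.5, "delta_0_deg": 35.0}
    return {name: rng.uniform(mu * (1.0 - variation), mu * (1.0 + variation))
            for name, mu in means.items()}

# usage: rng = np.random.Generator(np.random.MT19937(0)); params = sample_uncertain_inputs(rng)
```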

We perform UQ for the two-layer Savage-Hutter system with the above uncertain inputs using both the first- and the second-order path-conservative schemes described in Section 3, whose viscosity matrix is the one associated with the IFCP-PVM scheme described in [54], together with both the Monte Carlo (MC) and the Multi-Level Monte Carlo (MLMC) methods, described in Sections 4 and 5, respectively, to discretize the probability space.

The resulting mean and mean ± standard deviation (statistical spread) are presented in Figures 1 and 2. In Figure 1, we present statistics of the heights of both layers at time \(t=0.3\) seconds using a first-order scheme on a fine mesh of \(1\mbox{,}024\) cells. The scheme is combined with an MC simulation with \(M=1\mbox{,}024\) samples. This choice of sample number is based on the formula \(M=(\Delta x)^{-2s}\) in (20), which reduces to \(M=N\), with N the number of cells in the current simulation, as the convergence rate is \(s=1/2\) for the first-order scheme. Similarly, the MLMC method (together with the first-order FVM scheme) is based on \(L=6\) levels of mesh resolution, ranging from \(N_{0}=32\) cells up to \(N_{5}=1\mbox{,}024\) cells. We choose \(M_{5}=16\) samples for the highest level of resolution.

Figure 1. First order MC-FVM and MLMC-FVM methods for the submarine landslide. The mean and standard deviation of the layer heights at \(t=0.3\) seconds are shown.

Figure 2. Second order MC-FVM and MLMC-FVM methods for the submarine landslide. The mean and standard deviation of the layer heights at \(t=0.3\) seconds are shown.

The statistics for the heights of both layers, simulated with the second-order version of the IFCP scheme together with the MC and MLMC discretizations of the probability space, are presented in Figure 2. Again, we choose a fine mesh resolution of \(1\mbox{,}024\) cells. \(M=512\) samples are used for the MC-FVM method. As in the first-order case, \(L=6\) levels of mesh resolution are used for the second-order MLMC-FVM method, ranging from \(N_{0}=32\) cells up to \(N_{5}=1\mbox{,}024\) cells, and we choose \(M_{5}=16\) samples for the highest level of resolution.

Comparing the two sets of methods, we observe that the second-order IFCP method resolves the waves more sharply, even though the first-order method is quite competitive. Furthermore, the MC and MLMC methods are fairly comparable at the same mesh resolution. In order to compare the methods quantitatively, we compute a reference solution on a fine mesh with a large number of samples, and plot the error vs. resolution as well as the error vs. runtime for both the mean and the variance of the water layer height \(h_{1}\); the results are displayed in Figure 3. Note that the statistical errors are estimated by a procedure first introduced in [32]. As shown in this figure, the second-order IFCP method yields smaller errors (for both mean and variance) than the first-order method, when combined with either the MC or the MLMC discretization of the probability space. On the other hand, the MC and MLMC methods (combined with either the first-order or the second-order IFCP scheme) produce very similar error amplitudes at the same mesh resolution. The main difference between the methods emerges when the computational efficiency, measured in terms of error vs. run-time, is compared. As seen in Figure 3 (right column), the MLMC methods are approximately 60 to 80 times faster than the corresponding MC methods for the same level of error. This gain of almost two orders of magnitude in efficiency makes the MLMC methods instrumental for performing more realistic UQ simulations at an acceptable computational cost.
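
As a simplified stand-in for the quantitative comparison assembled in Figure 3 (the statistical error estimator of [32] is more involved and is not reproduced here), the following helpers compute a relative error of an estimated statistic against the reference solution and the resulting speedup at a comparable error level.

```python
import numpy as np

def relative_error_pct(stat: np.ndarray, ref: np.ndarray) -> float:
    """Relative L1 error (in percent) of an estimated statistic, e.g. the mean
    or variance of h_1, against a reference computed on a fine mesh with a
    large number of samples (both evaluated on the same grid)."""
    return 100.0 * np.abs(stat - ref).sum() / np.abs(ref).sum()

def speedup(runtime_mc_s: float, runtime_mlmc_s: float) -> float:
    """Ratio of MC to MLMC wall-clock times at a comparable error level;
    values of roughly 60-80 were observed in this experiment."""
    return runtime_mc_s / runtime_mlmc_s
```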

Figure 3: Convergence of the estimated mean and variance of \(h_{1}\) in the submarine landslide at \(t=0.3\) seconds. Comparison of first- and second-order MC-FVM and MLMC-FVM methods.

6.1.1 2D Lituya Bay mega-tsunami

On July 9, 1958, a magnitude 8.3 (Richter scale) earthquake along the Fairweather fault triggered a major subaerial landslide into Gilbert Inlet at the head of Lituya Bay, on the southern coast of Alaska (USA). The landslide impacted the water at very high speed, generating a giant tsunami with the highest recorded wave run-up in history. The mega-tsunami ran up to an elevation of 524 m and caused total destruction of the forest as well as erosion down to the bedrock on a spur ridge along the slide axis. Many attempts have been made to understand and simulate this mega-tsunami. The aim of this section is to produce a realistic, detailed and accurate simulation of the 1958 Lituya Bay mega-tsunami, while taking into account uncertainties in critical parameters such as the ratio of layer densities, the interlayer friction and the Coulomb friction angle. We use public-domain topo-bathymetric data as well as the review paper [55] to approximate the Gilbert Inlet topo-bathymetry.

We consider the system (3) discretized with the first-order path-conservative scheme (13)-(14), whose viscosity matrix \(Q_{ij}\) is defined using the IFCP scheme described in [54]. Friction terms are discretized semi-implicitly as in [6]. For simplicity, we restrict ourselves to the first-order IFCP scheme. For fast computations, this scheme has been implemented on GPUs using CUDA. The two-dimensional scheme and its GPU adaptation and implementation in single precision are described in detail in [40]. The MLMC-FV implementation has also been developed in CUDA, where all the updates of the means and variances are performed by CUDA kernels.
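
The mean/variance updates mentioned above amount to an online accumulation of first and second moments over the samples of each level. The NumPy sketch below mirrors, cell-wise, the arithmetic that such an update performs (Welford's algorithm); the actual CUDA kernels of the authors' implementation are not reproduced here, and this class is only an illustrative stand-in.

```python
import numpy as np

class RunningMoments:
    """Per-cell running mean and variance over Monte Carlo samples
    (Welford's online update), written in NumPy for readability."""

    def __init__(self, grid_shape):
        self.n = 0
        self.mean = np.zeros(grid_shape)
        self.m2 = np.zeros(grid_shape)   # running sum of squared deviations

    def update(self, sample_field: np.ndarray) -> None:
        """Fold one sampled solution field (e.g. free-surface height) into
        the running moments; called once per Monte Carlo sample."""
        self.n += 1
        delta = sample_field - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (sample_field - self.mean)

    def variance(self) -> np.ndarray:
        """Unbiased sample variance per cell."""
        return self.m2 / max(self.n - 1, 1)
```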

A rectangular grid of 3,648 × 1,264 = 4,611,072 cells with a resolution of 4 m × 7.5 m has been designed to perform this simulation. We compute with the first-order MLMC-FV-IFCP method with \(L = 4\) levels of resolution and \(M_{4} = 16\) samples on the highest level, which corresponds to the finest grid of 4,611,072 cells. We allow 30% variability in the parameters \(c_{f}\), r and \(\delta_{0}\), whose mean values are \(c_{f} = 0.08\), \(r = 0.44\) and \(\delta_{0} = 13^{\circ}\). The CFL number is 0.9. Figures 4, 5, 6 and 7 show the mean and the variance of the solution at 39 s and 120 s.

Figure 4: Mean of the solution at \(t=39\mbox{ s}\).

Figure 5: Variance of the solution at \(t=39\mbox{ s}\).

Figure 6: Mean of the solution at \(t=120\mbox{ s}\).

Figure 7: Variance of the solution at \(t=120\mbox{ s}\).

The maximum run-up is reached at 39 s. We can see in Figures 4 and 5 that the southward-propagating part of the initial wave reaches a maximum mean height of 50-60 m, with a maximum standard deviation of 4-5 m.

While the initial wave travels along the main axis of Lituya Bay, a larger second wave appears as a reflection of the first one from the south shoreline (see Figures 6 and 7). These waves sweep both sides of the shoreline in their path. On the north shoreline the wave reaches heights of 15-20 m, while on the south shoreline it reaches mean values of 20-30 m with a standard deviation of 1.2-1.5 m.

The impact times, trimlines, and the means and variances of the wave heights provided by the simulation are in good agreement with the majority of the observations and conclusions described in [55]; see [56] for more details. Furthermore, the computed standard deviation due to the uncertain parameters is about 5-10% of the mean. Compared to the initial parameter uncertainty of 30% of the mean, we see that the nonlinear evolution has damped the uncertainty and that the problem is only moderately sensitive to the three uncertain parameters. This enhances our confidence in previously reported numerical simulations of this event [56].

6.2 Earthquake-generated tsunami in the Mediterranean Sea

In this section we consider an example of UQ for a tsunami generated by an earthquake. We suppose that the dip, strike and rake angles, and the slip parameter are uncertain. The computational domain is the Western Mediterranean and the epicenter of the earthquake is located in the Ionian Sea. We use Okada's model to determine the seafloor deformation and the initial condition for the shallow-water system (1); uncertainty in the input values of these parameters therefore leads to uncertainty in the solution of both Okada's model and the shallow-water system (1). Here we consider a second-order discretization of system (1) on the sphere by means of a PVM path-conservative scheme in combination with a MUSCL-type reconstruction operator. As in the Lituya Bay example, both this model and the MLMC-FV method have been implemented on GPUs using CUDA. A rectangular grid of 30 arc-sec resolution with 7,019,520 cells has been used to perform the simulation, and we use \(L=4\) levels of resolution with \(M_{4}=64\) samples on the highest level. We consider the following variability in the parameters (a sampling sketch is given after the list):

  • \(\mathit{slip}= 10 \pm2\mbox{ m}\).

  • \(\mathit{dip} = 35^{\circ}\pm 10^{\circ}\).

  • \(\mathit{strike} = 31^{\circ}\pm 10^{\circ}\).

  • \(\mathit{rake} = 90^{\circ}\pm 10^{\circ}\).
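
Purely as an illustration, the sketch below draws one realization of the four uncertain fault parameters, assuming (as in the landslide examples) independent uniform distributions on the stated intervals; the distribution type is our assumption, and `okada_displacement` is a placeholder name, not a routine from the authors' code.

```python
import numpy as np

# Uncertain Okada fault parameters as (center, half-width), from the list above.
PARAM_RANGES = {
    "slip_m":     (10.0, 2.0),
    "dip_deg":    (35.0, 10.0),
    "strike_deg": (31.0, 10.0),
    "rake_deg":   (90.0, 10.0),
}

def sample_okada_parameters(rng: np.random.Generator) -> dict:
    """Draw one realization of the uncertain fault parameters, assuming
    independent uniform distributions on the stated intervals."""
    return {k: rng.uniform(c - hw, c + hw) for k, (c, hw) in PARAM_RANGES.items()}

rng = np.random.default_rng(0)
params = sample_okada_parameters(rng)
# Each realization would then be passed, together with the remaining
# (deterministic) fault parameters, to an Okada solver to produce the initial
# sea-surface displacement, e.g. (placeholder name, illustrative only):
# eta0 = okada_displacement(**params, fault_length=..., fault_width=..., depth=...)
```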

The CFL number is 0.5 and the friction coefficient is 0.03. Figures 8, 9, 10 and 11 show the mean and variance of the free surface at \(t=2\mbox{ h}\) and \(t=4\mbox{ h}\). In this case the computed standard deviation due to the uncertain parameters is less than 0.1 m during the tsunami propagation. The maxima are located near the coastal areas and at the wave fronts, as expected. Nevertheless, as in the Lituya Bay example, the nonlinear evolution has damped the uncertainty.

Figure 8: Mean of the free surface at \(t=2\mbox{ h}\).

Figure 9: Variance of the free surface at \(t=2\mbox{ h}\).

Figure 10: Mean of the free surface at \(t=4\mbox{ h}\).

Figure 11: Variance of the solution at \(t=4\mbox{ h}\).

Concerning the GPU time, the complete MLMC-FV simulation takes about 1.5 hours of computing time, that is, less than half of the simulated physical time, to provide UQ for such a large problem. This example shows that the MLMC-FV method is an attractive framework for UQ in real geophysical flows.

7 Conclusion

Many geophysical flows of interest, such as tsunamis generated by earthquakes, rockslides, avalanches, debris flows, etc., are modeled by shallow-water type systems. These models are characterized by physical parameters that are usually difficult to determine and that are prone to uncertainty. Consequently, quantifying the resulting solution uncertainty (UQ) is of paramount importance in these simulations.

In this paper, we have presented a UQ paradigm that combines a path-conservative finite volume method, which can accurately and robustly discretize the underlying non-conservative hyperbolic system (1) or (3), with a Multi-Level Monte Carlo statistical sampling algorithm. The algorithm is based on computing on nested sequences of mesh resolutions and estimating statistical quantities by combining results from different resolutions. The method is fully non-intrusive, easy to parallelize, fast and accurate. In particular, one can gain up to two orders of magnitude in computational efficiency vis-à-vis the standard Monte Carlo method.
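
For the reader's convenience, we recall the generic telescoping form of this combination (with \(u_{\ell}\) the FVM solution on level \(\ell\), \(M_{\ell}\) the number of samples on that level, \(\omega_{\ell,i}\) the i-th sample on level \(\ell\) and \(u_{-1}:=0\); the precise definitions used in this work are those of Section 5):

$$ \mathbb{E}[u_{L}] \approx \sum_{\ell=0}^{L} \frac{1}{M_{\ell}} \sum_{i=1}^{M_{\ell}} \bigl( u_{\ell}(\omega_{\ell,i}) - u_{\ell-1}(\omega_{\ell,i}) \bigr). $$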

We have tested the algorithms on a set of numerical examples on real bathymetries. The numerical results clearly indicate that the MLMC-FVM framework can approximate statistics of quantities of interest, such as run-up heights, quite accurately and at reasonable computational cost when implemented efficiently on GPUs. There was also qualitative agreement with experimental and observed data. Furthermore, the UQ simulations help in identifying the sensitivity of the simulation outputs to the underlying uncertain parameters. Thus, they will enable an effective appraisal of sensitivity and enhance risk analysis and hazard mitigation.

The current paper illustrates the power and utility of the MLMC UQ paradigm for tsunami modeling.

References

  1. Fritz DJ, Hager WH, Minor H-E. Lituya Bay case: rockslide impact and wave run-up. Sci Tsunami Hazards. 2001;19(1):3-22.

  2. Fritz HM, Mohammed F, Yoo J. Lituya Bay landslide impact generated mega-tsunami 50th anniversary. Pure Appl Geophys. 2009;166(1-2):153-75.

  3. Okada Y. Surface deformation due to shear and tensile faults in a half space. Bull Seismol Soc Am. 1985;75:1135-54.

  4. Yamazaki Y, Cheung KF, Pawlak G, Lay T. Surges along the Honolulu coast from the 2011 Tohoku tsunami. Geophys Res Lett. 2012;39(9):L09604.

  5. Grilli ST, Harris JC, Bakhsh TST, Masterlark TL, Kyriakopoulos C, Kirby JT, Shi FY. Numerical simulation of the 2011 Tohoku tsunami based on a new transient FEM co-seismic source: comparison to far and near-field observations. Pure Appl Geophys. 2013;170(6-8):1333-59.

  6. Fernández ED, Bouchut F, Bresch D, Castro MJ, Mangeney A. A new Savage-Hutter type model for submarine avalanches and generated tsunami. J Comput Phys. 2008;227:7720-54.

  7. Dal Maso G, Lefloch P, Murat F. Definition and weak stability of nonconservative products. J Math Pures Appl. 1995;74:483-548.

  8. Parés C, Muñoz Ruíz ML. On some difficulties of the numerical approximation of nonconservative hyperbolic systems. Bol Soc Esp Mat Apl. 2009;47:23-52.

  9. Parés C. Numerical methods for nonconservative hyperbolic systems: a theoretical framework. SIAM J Numer Anal. 2006;44(1):300-21.

  10. Che S, Boyer M, Meng J, Tarjan D, Sheaffer JW, Skadron K. A performance study of general-purpose applications on graphics processors using CUDA. J Parallel Distrib Comput. 2008;68:1370-80.

  11. Owens JD, Houston M, Luebke D, Green S, Stone JE, Phillips JC. GPU computing. Proc IEEE. 2008;96:879-99.

  12. NVIDIA. NVIDIA developer zone. http://developer.nvidia.com/category/zone/cuda-zone.

  13. Khronos OpenCL Working Group. The OpenCL specification. http://www.khronos.org/opencl.

  14. Fang J, Varbanescu AL, Sips H. A comprehensive performance comparison of CUDA and OpenCL. In: 40th international conference on parallel processing (ICPP 2011). 2011. p. 216-25.

  15. Asunción M, Mantas JM, Castro MJ. Simulation of one-layer shallow water systems on multicore and CUDA architectures. J Supercomput. 2011;58:206-14.

  16. Brodtkorb AR, Sætra ML, Altinakar M. Efficient shallow water simulations on GPUs: implementation, visualization, verification, and validation. Comput Fluids. 2012;55:1-12.

  17. Asunción M, Mantas JM, Castro MJ. Programming CUDA-based GPUs to simulate two-layer shallow water flows. In: Euro-Par 2010 - parallel processing. 2010. p. 353-64. (Lecture notes in computer science; vol. 6272).

  18. Castro MJ, Ortega S, Asunción M, Mantas JM, Gallardo JM. GPU computing for shallow water flow simulation based on finite volume schemes. C R, Méc. 2011;339:165-84.

  19. Message Passing Interface Forum: A Message Passing Interface Standard. University of Tennessee.

  20. Acuña MA, Aoki T. Real-time tsunami simulation on a multi-node GPU cluster [Poster]. In: ACM/IEEE conference on supercomputing 2009 (SC 2009). 2009.

  21. Xian W, Takayuki A. Multi-GPU performance of incompressible flow computation by lattice Boltzmann method on GPU cluster. Parallel Comput. 2011;9:521-35.

  22. Viñas M, Lobeiras J, Fraguela BB, Arenaz M, Amor M, García JA, Castro MJ, Doallo R. A multi-GPU shallow-water simulation with transport of contaminants. Concurr Comput, Pract Exp. 2012;25:1153-69.

  23. Asunción M, Mantas JM, Castro MJ, Fernández-Nieto ED. An MPI-CUDA implementation of an improved Roe method for two-layer shallow water systems. J Parallel Distrib Comput. 2012;72(9):1065-72.

  24. Jacobsen DA, Senocak I. Multi-level parallelism for incompressible flow computations on GPU clusters. Parallel Comput. 2013;39:1-20.

  25. Bijl H, Lucor D, Mishra S, Schwab C, editors. Uncertainty quantification in computational fluid dynamics. Heidelberg: Springer; 2014. (Lecture notes in computational science and engineering; vol. 92).

  26. Chen QY, Gottlieb D, Hesthaven JS. Uncertainty analysis for the steady-state flows in a dual throat nozzle. J Comput Phys. 2005;204:378-98.

  27. Poette G, Després B, Lucor D. Uncertainty quantification for systems of conservation laws. J Comput Phys. 2009;228:2443-67.

  28. Tryoen J, Le Maitre O, Ndjinga M, Ern A. Intrusive projection methods with upwinding for uncertain non-linear hyperbolic systems. J Comput Phys. 2010;229(18):6485-511.

  29. Xiu D, Hesthaven JS. High-order collocation methods for differential equations with random inputs. SIAM J Sci Comput. 2005;27:1118-39.

  30. Zabaras N, Ma X. An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations. J Comput Phys. 2009;228:3084-113.

  31. Tokareva S. Stochastic finite volume methods for computational uncertainty quantification in hyperbolic conservation laws. Dissertation, Nr. 21498. Eidgenossische Technische Hochschule ETH Zurich; 2013.

  32. Mishra S, Schwab C. Sparse tensor multi-level Monte Carlo finite volume methods for hyperbolic conservation laws with random initial data. Math Comput. 2012;280:1979-2018.

  33. Giles M. Improved multilevel Monte Carlo convergence using the Milstein scheme. Preprint NA-06/22. Oxford Computing Lab., Oxford, UK; 2006.

  34. Giles M. Multilevel Monte Carlo path simulation. Oper Res. 2008;56:607-17.

  35. Heinrich S. Multilevel Monte Carlo methods. In: Large-scale scientific computing. Berlin: Springer; 2001. p. 58-67. (Lecture notes in computer science; vol. 2170).

  36. Mishra S, Schwab C, Sukys J. Multi-level Monte Carlo finite volume methods for nonlinear systems of conservation laws in multi-dimension. J Comput Phys. 2012;231:3365-88.

  37. Mishra S, Schwab C, Sukys J. Multilevel Monte Carlo finite volume methods for shallow water equations with uncertain topography in multi-dimensions. SIAM J Sci Comput. 2012;34(6):761-84.

  38. Sánchez-Linares C, Asunción M, Castro MJ, Mishra S, Sukys J. Multi-level Monte Carlo finite volume method for shallow water equations with uncertain parameters applied to landslides-generated tsunamis. Appl Math Model. 2015;39(23-24):7211-26.

  39. Bouchut F, Mangeney-Castelnau A, Perthame B, Vilotte JP. A new model of Saint-Venant and Savage-Hutter type for gravity driven shallow water flows. C R Math Acad Sci Paris. 2003;336(6):531-6.

  40. Asunción M, Mantas JM, Castro MJ, Ortega S. Scalable simulation of tsunamis generated by submarine landslides on GPU clusters. Accepted on Environ Model Softw. May 2016.

  41. Volpert AI. Spaces BV and quasilinear equations. Math USSR Sb. 1967;73:255-302.

  42. Castro MJ, Fernández-Nieto ED, Morales de Luna T, Narbona-Reina G, Parés C. A HLLC scheme for nonconservative hyperbolic problems. Application to turbidity currents with sediment transport. ESAIM: Math Model Numer Anal. 2013;47(1):1-32.

  43. Castro MJ, Fernández-Nieto ED, Ferreiro AM, García Rodríguez JA, Parés C. High order extensions of Roe schemes for two dimensional nonconservative hyperbolic systems. J Sci Comput. 2009;39(1):67-114.

  44. Harten A, Lax PD, van Leer B. On upstream differencing and Godunov type schemes for hyperbolic conservation laws. SIAM Rev. 1983;25:35-61.

  45. Degond P, Peyrard P-F, Russo G, Villedieu P. Polynomial upwind schemes for hyperbolic systems. C R Acad Sci Paris Ser I. 1999;328:479-83.

  46. Castro MJ, Fernández-Nieto ED. A class of computationally fast first order finite volume solvers: PVM methods. SIAM J Sci Comput. 2012;34:A2173-A2196.

  47. Castro MJ, Gallardo JM, Marquina A. A class of incomplete Riemann solvers based on uniform rational approximations to the absolute value function. J Sci Comput. 2014;60(2):363-89.

  48. Castro MJ, LeFloch PG, Muñoz ML, Parés C. Why many theories of shock waves are necessary: convergence error in formally path-consistent schemes. J Comput Phys. 2008;3227:8107-29.

  49. Hou TY, LeFloch PG. Why nonconservative schemes converge to wrong solutions: error analysis. Math Comput. 1994;62:497-530.

  50. Muñoz ML, Parés C. On the convergence and well-balanced property of path-conservative numerical schemes for systems of balance laws. J Sci Comput. 2011;48:274-95.

  51. van Leer B. Towards the ultimate conservative difference scheme. V. A second order sequel to Godunov's method. Comput Phys. 1979;32:101-36.

  52. Gottlieb S, Shu CW. Total variation diminishing Runge-Kutta schemes. Math Comput. 1998;67:73-85.

  53. Matsumoto M, Nishimura T. Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Trans Model Comput Simul. 1998;8(1):3-30.

  54. Fernández-Nieto ED, Castro MJ, Parés C. On an intermediate field capturing Riemann solver based on a parabolic viscosity matrix for the two-layer shallow water system. J Sci Comput. 2011;48(1-3):117-40.

  55. Miller DJ. Giant waves in Lituya Bay, Alaska: a timely account of the nature and possible causes of certain giant waves, with eyewitness reports of their destructive capacity. Professional paper; 1960.

  56. Asunción M, Castro MJ, González JM, Macías J, Ortega S, Sánchez C. Modeling the Lituya Bay landslide-generated mega-tsunami with a Savage-Hutter shallow water coupled model. NOAA Technical memorandum; 2013.


Acknowledgements

This research has been partially supported by the Spanish Government Research projects MTM2012-38383-C02-01 and MTM2015-70490-C2-1-R, the Andalusian Government Research projects P11-FQM-8179 and P11-RNM-7069, and the Swiss Federal Institute of Technology, ETH Zurich. The work of SM was partially supported by ERC Starting Grant No. 306279, SPARCCLE.

Author information


Corresponding author

Correspondence to Manuel J Castro.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

MA has implemented the finite volume solvers on GPU. CSL has implemented the MLMC algorithm. JM and JMGV have contributed to the design of the numerical tests for Lituya Bay and the earthquake-generated tsunami in the Mediterranean Sea. MJC has designed the finite volume solvers for the different models and has helped to draft the manuscript, and SM has also helped to draft the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Sánchez-Linares, C., de la Asunción, M., Castro, M.J. et al. Uncertainty quantification in tsunami modeling using multi-level Monte Carlo finite volume method. J Math Industry 6, 5 (2016). https://doi.org/10.1186/s13362-016-0022-8
