 Research
 Open access
Uncertainty quantification in tsunami modeling using multilevel Monte Carlo finite volume method
Journal of Mathematics in Industry volume 6, Article number: 5 (2016)
Abstract
Shallow-water type models are commonly used in tsunami simulations. These models contain uncertain parameters such as the ratio of densities of the layers, the friction coefficient, the fault deformation, etc. These parameters are modeled statistically, and quantifying the resulting solution uncertainty (UQ) is a crucial task in geophysics. We propose a paradigm for UQ that combines recently developed path-conservative spatial discretizations, efficiently implemented on clusters of GPUs, with the recently developed Multilevel Monte Carlo (MLMC) statistical sampling method, and provides a fast, accurate and computationally efficient framework to compute statistical quantities of interest. Numerical experiments, including realistic simulations on real bathymetries, are presented to illustrate the robustness of the proposed UQ algorithm.
1 Introduction
A tsunami is a series of powerful water waves generated by different mechanisms such as earthquakes, volcanic eruptions, underwater landslides, or local landslides along the coast. As emphasized by the tragic events of March 2011 in Japan and December 2004 in Indonesia, tsunamis may be extremely catastrophic: they are able to destroy buildings and roads, and infrastructure in general can be seriously affected. Most tragically, tsunamis can lead to the loss of human lives. A deep knowledge of tsunamis is required in order to predict the maximum run-ups and run-downs, and also to give early warning messages to the regions that may be affected.
Since the most common sources of tsunamis are earthquakes, earthquake-generated tsunamis have been extensively investigated. Landslide-generated tsunamis have been much less studied and the existing knowledge about them is more limited. They are characterized by relatively short periods, compared to earthquake-generated ones, and they do not travel as long distances as earthquake-generated tsunamis do. Therefore, one of their characteristics is that their whole life cycle takes place near the source. Nevertheless, they can reach high amplitudes and can also become extremely harmful (see [1, 2]).
The numerical simulation of a tsunami has three stages: generation, propagation and inundation. In the generation stage of earthquake-generated tsunamis, Okada's finite fault deformation model (see [3]) is widely used to predict the initial sea-surface displacement of a tsunami. This method assumes that an earthquake can be regarded as the rupture of a single fault plane. This fault is described by a series of physical parameters, comprising dip angle, strike angle, rake angle, fault width, fault length, and fault depth. Okada's vertical displacement is applied either to generate the tsunami wave by initializing the sea-surface elevation instantaneously, or to drive the model at a specific rupture time (e.g., see [4]). Recently, tsunami-wave generation models independent of Okada's assumption are also being developed and evaluated, in which a 3D finite element model is employed (see [5]).
In the propagation and inundation stages, two main types of governing equations are commonly employed: Boussinesq-type equations or the nonlinear shallow-water equations. In this work, earthquake-generated tsunamis are driven by the instantaneous sea-surface disturbance derived from Okada's finite fault model, and the propagation and inundation stages are simulated with the 2D nonlinear shallow-water equations.
Landslide-generated tsunamis are modeled here with the nonlinear two-layer Savage-Hutter type model introduced in [6], which is able to reproduce the generation of the tsunami by the impact of the landslide, the propagation of the tsunami waves, and the inundation produced by those waves.
It is well known that solutions of both systems take the form of waves that propagate at a finite speed. Furthermore, the solutions might form discontinuities such as shocks, hydraulic jumps, etc., even when the initial data are smooth. Thus, it is customary to interpret the solutions of such nonlinear PDEs in the sense of distributions. There are innate difficulties in defining such weak solutions for systems that are not in conservation form, due to the presence of geometric source terms or nonconservative products, as is the case here. For such systems, special theories such as those in [7] have been proposed. Moreover, weak solutions are not necessarily unique, and further admissibility criteria need to be imposed in order to single out a physically relevant solution.
Various types of numerical methods have been designed to approximate these convection-dominated nonlinear hyperbolic PDEs efficiently. Methods such as finite volume, finite difference and discontinuous Galerkin finite element schemes are widely used. In particular, the approximation of nonconservative systems is quite involved, as the right jump conditions across discontinuities need to be approximated [8]. An attractive framework to deal with such problems is that of path-conservative numerical schemes developed in [9].
In recent years, efficient implementations of such schemes have been carried out using Graphics Processing Units (GPUs). GPUs have proved to be a powerful accelerator for intensive scientific simulations. The high memory bandwidth and massive parallelism of these platforms make it possible to achieve dramatic speedups over a standard CPU in many applications [10, 11], and several programming toolkits and interfaces, such as NVIDIA CUDA [12] and the Open Computing Language (OpenCL) [13], have shown high effectiveness in mapping data-parallel applications to GPUs [10, 14]. Currently, most of the proposals to simulate shallow flows on a single GPU are based on the CUDA programming model. There are several proposals of finite volume CUDA solvers to simulate one-layer shallow-water flows over structured regular meshes [15, 16] and for the two-layer shallow-water system [17, 18]. Moreover, realistic tsunami simulations involve huge meshes, many time steps and possibly real-time accurate predictions. These characteristics suggest using a cluster of GPU-enhanced computers in order to scale the runtime reduction and overcome the memory limitations of a single GPU-enhanced node by suitably distributing the data among the nodes, enabling us to simulate significantly larger realistic models. Most of the proposals to exploit GPU clusters in computational fluid dynamics (CFD) simulations use CUDA to program each GPU and MPI [19] to implement interprocess communication, and they use non-blocking MPI communication functions to overlap the remote transfers with GPU computation [20–24].
Numerical methods to approximate these nonlinear hyperbolic PDEs (or for that matter any PDE) require inputs such as the initial data, boundary conditions and coefficients in the fluxes, sources and friction terms of the PDE. These inputs need to be measured. Measurements are marked by uncertainty. For instance, let us consider tsunami modeling. In such problems, the initial conditions are typically estimated from a very uncertain measurement process: it is very difficult to estimate the exact fault deformation or the initial position and velocity of a landslide. This uncertainty in determining the inputs to the PDE is propagated into the solution. The calculation of solution uncertainty, given input uncertainty, falls under the rubric of uncertainty quantification (UQ). UQ for geophysical flows is vitally important for risk evaluation and hazard mitigation.
Although various approaches to modeling input uncertainty exist, the most popular framework models input uncertainty statistically in terms of random parameters and fields. The resulting PDE is a stochastic (random) PDE. The solution has to be sought in a stochastic sense, and statistical quantities such as the mean, the variance, higher moments, confidence intervals and the probability density functions (pdfs) of the solution are the objects of interest.
The modeling and computation of solution statistics is highly nontrivial. Challenges include the possibly large number of random variables (fields) needed to parametrize the uncertain input, and the sheer computational challenge of evaluating statistical moments, which might require a very large number of PDE solves. The challenges are particularly accentuated for hyperbolic and convection-dominated PDEs, as discontinuities in physical space such as shocks can propagate into stochastic space, resulting in a loss of regularity of the underlying solution with respect to the random parameters. A very large number of degrees of freedom in the stochastic space might be needed to resolve such functions with possible singularities. See the recent review [25] for a detailed account of the challenges involved in UQ for hyperbolic problems.
Nevertheless, several numerical methods have been developed for UQ in hyperbolic PDEs. See for instance [26–30] and the review [25] for details. Methods include stochastic Galerkin methods based on generalized Polynomial Chaos (gPC), stochastic collocation methods and stochastic finite volume methods (SFVM). Stochastic Galerkin methods are based on expanding the sought solution random field in terms of basis functions that are orthogonal with respect to the underlying probability distribution, termed gPC or generalized polynomial chaos. Projecting the resulting expansion onto this orthonormal basis yields a (possibly very large) system of PDEs for the underlying coefficients. Moments of the solution random field can be readily obtained from the coefficients. Such an approach has been used, for instance, in [28] and references therein. However, this approach suffers from several deficiencies. Perhaps the biggest drawback of this approach in the context of nonlinear hyperbolic PDEs lies in the fact that the resulting system of PDEs for the gPC coefficients is not necessarily hyperbolic and may not even be well-posed. A novel solution to this problem was provided in [27], where the authors proposed a gPC expansion of the solution random field in terms of the entropy variables. If the underlying nonlinear conservation law possesses a strictly convex entropy, then one can show that the resulting nonlinear system for the gPC coefficients is hyperbolic. However, a large number of terms of this expansion might still be necessary for hyperbolic PDEs with low spatial and stochastic regularity, leading to a very computationally costly solution of a large system of PDEs for the gPC coefficients. Furthermore, this method is computationally intrusive, i.e., completely new code has to be written from scratch in order to compute the chaos coefficients, and existing codes cannot be reused.
Hence, it appears that the stochastic Galerkin method is only suitable for hyperbolic problems with a very low number of uncertain (stochastic) parameters (dimensions).
An alternative set of methods is of the stochastic collocation type (see [29, 30] and references therein), where the solution is sampled at a deterministic set of points in the entire stochastic space. These methods are non-intrusive. However, they are of limited utility, as one needs sufficient regularity with respect to the stochastic variables in order to employ tools such as sparse grids to keep the computational cost feasible. Unfortunately, solutions of uncertain nonlinear hyperbolic PDEs are not sufficiently regular. A possible alternative is the recently proposed stochastic finite volume method (see [31] and references therein). However, it is also limited to a low to moderate number of stochastic parameters.
Another class of methods are the so-called Monte Carlo (MC) methods, in which the probability space is sampled, the underlying deterministic PDE is solved for each sample, and the samples are combined to determine statistical information about the random field. Although non-intrusive, easy to code and to parallelize, MC methods converge at rate \(M^{-1/2}\) as the number M of MC samples increases. This asymptotic convergence rate is non-improvable, by the central limit theorem.
Therefore, MC methods require a large number of 'samples' (with each 'sample' involving the numerical solution of the underlying PDE for a given draw of parameter values) in order to ensure low statistical errors. This slow convergence entails high computational costs for MC-type methods and makes them infeasible for computing uncertainty in complex shallow-water flows. We refer to [32] for a detailed error and computational complexity analysis of the MC method in the context of scalar conservation laws. This slow convergence has inspired the development of Multilevel Monte Carlo (MLMC) methods [33–35], in which one considers a nested sequence of space-time grids and draws a different number of samples from each grid. In particular, very few samples are drawn on the finest grids (with the highest computational cost), while a very large number of samples are drawn on the coarsest grids (with very low computational cost). This subtle balancing of the stochastic error with the spatio-temporal error, together with a novel MLMC estimator for the statistical moments, are the key ingredients in the successful adaptation of these methods to different UQ contexts. In particular, [32] and [36] extend and analyze the MLMC algorithm for scalar conservation laws and for systems of conservation laws, respectively. The asymptotic analysis of the MLMC method, presented in [32], showed that the method allows the computation of approximate statistical moments with far lower computational cost than the underlying MC approximation. MLMC methods have been successfully used in compressible fluid dynamics, magnetohydrodynamics and geophysical flows [37], and have been shown to be efficient in performing UQ for realistic geophysical flows in [38]. Currently, MLMC methods appear to be among the most suitable methods for UQ in the context of nonlinear hyperbolic PDEs.
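To make the multilevel idea concrete, here is a minimal, self-contained sketch (our own toy illustration, not code from the paper): the role of a level-\(\ell\) PDE solve is played by a forward Euler integration of a random decay equation with \(2^{\ell+2}\) steps, and the MLMC estimator combines many cheap coarse samples with a few telescoping fine-minus-coarse corrections, each correction computed with the same random draw on both levels:

```python
import random

def solve(omega, level):
    # forward-Euler approximation of u' = -omega*u, u(0) = 1, on [0,1],
    # with 2**(level+2) steps; stands in for a level-`level` PDE solve
    n = 2 ** (level + 2)
    dt = 1.0 / n
    u = 1.0
    for _ in range(n):
        u -= dt * omega * u
    return u

def mlmc_mean(num_levels, samples_per_level, rng):
    # MLMC estimator: E[u_L] = E[u_0] + sum_l E[u_l - u_{l-1}],
    # each expectation replaced by an independent MC average
    est = 0.0
    for level in range(num_levels):
        m = samples_per_level[level]
        acc = 0.0
        for _ in range(m):
            omega = rng.uniform(0.5, 1.5)   # one random draw per sample
            fine = solve(omega, level)
            coarse = solve(omega, level - 1) if level > 0 else 0.0
            acc += fine - coarse            # telescoping difference
        est += acc / m
    return est

rng = random.Random(42)
# many samples on the coarse levels, few on the fine (expensive) ones
estimate = mlmc_mean(4, [4000, 1000, 250, 60], rng)
# exact answer is E[exp(-omega)] = exp(-0.5) - exp(-1.5), about 0.383
```

In the actual UQ algorithm, each `solve` call is a full finite volume tsunami simulation on the level-\(\ell\) grid, which is why shifting most of the samples to the coarse grids pays off.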
Our main aim in this work is to perform efficient UQ in tsunami modeling by combining shallow-water type models, the MLMC method, and the intensive use of GPUs. The rest of the paper is organized as follows: we present the shallow-water type models commonly used in tsunami simulations in Section 2, and the high-order path-conservative schemes used to approximate them in Section 3. The Monte Carlo and Multilevel Monte Carlo methods are described in Sections 4 and 5, respectively, and, finally, numerical results are presented in Section 6.
2 Shallow-water type models for tsunami modeling
Let us consider first the well-known 2D nonlinear one-layer shallow-water system:
being
In the previous system, \(h(\mathbf {x},t)\) denotes the thickness of the water layer at point \(\mathbf {x}\in D \subset\mathbb{R}^{2}\) at time t, where D is the horizontal projection of the 3D domain where the tsunami takes place. \(H(\mathbf {x})\) is the depth of the bottom at point x measured from a fixed reference level. Let us also define the function \(\eta(\mathbf {x},t)=h(\mathbf {x},t)-H(\mathbf {x})\), which corresponds to the free surface of the fluid. Let us denote by \(\mathbf {q}(\mathbf {x},t) = ( q_{x}(\mathbf {x},t), q_{y}(\mathbf {x},t) )\) the mass-flow of the water layer at point x at time t. The mass-flow is related to the height-averaged velocity \(\mathbf {u}(\mathbf {x},t)\) by means of the expression \(\mathbf {q}(\mathbf {x},t) = h(\mathbf {x},t) \mathbf {u}(\mathbf {x},t)\).
The term \(S_{F}(U)\) parametrizes the friction effects and is given in terms of the Manning law:
where \(n> 0\) is the Manning coefficient.
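As an illustration only (the precise expression is the one displayed above), the following sketch evaluates a Manning-type bottom friction written in terms of the mass-flow \(\mathbf{q} = h\mathbf{u}\), assuming the common form \(S_{F} = -g n^{2} \Vert\mathbf{u}\Vert \mathbf{u}/h^{1/3}\); the function name and the dry-cell cutoff `eps` are our own choices:

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def manning_friction(h, qx, qy, n, eps=1e-8):
    # Manning bottom friction, assumed form S_F = -g n^2 ||u|| u / h^{1/3};
    # substituting u = q/h turns this into -g n^2 ||q|| q / h^{7/3}
    if h < eps:               # dry cell: no flow, hence no friction
        return 0.0, 0.0
    qnorm = math.hypot(qx, qy)
    coeff = -G * n * n * qnorm / h ** (7.0 / 3.0)
    return coeff * qx, coeff * qy
```

The term always opposes the flow, which is one reason it is discretized semi-implicitly in practice: an explicit treatment can produce spurious sign changes in nearly dry cells.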
An interesting stationary solution of the previous system corresponds to the water-at-rest solution, given by
As pointed out in the Introduction, in the generation stage of earthquake-generated tsunamis, an earthquake subfault model is used to predict the initial sea-surface displacement. Typically, those models are given in the form of a set of rectangular patches on the fault plane. Each patch has a set of parameters defining the relative slip of the rock on one side of the planar patch with respect to the other side. The minimum set of parameters required is:

- the length and width of the fault plane (typically in m or km);
- the latitude and longitude of some point on the fault plane, typically either the centroid or the center of the top (shallowest) edge;
- the depth of the specified point below the sea floor;
- the strike angle, that is, the orientation of the top edge, measured in degrees clockwise from North, taking values between 0 and 360. The fault plane dips downward to the right when moving along the top edge in the strike direction;
- the dip angle, that is, the angle at which the plane dips downward from the top edge, a positive angle between 0 and 90 degrees;
- the rake angle, that is, the angle in the fault plane in which the slip occurs, measured in degrees counterclockwise from the strike direction, taking values between −180 and 180. Note that for a strike-slip earthquake the rake angle is near 0 or 180, while for a subduction earthquake it is usually closer to 90 degrees;
- the slip distance, that is, the distance (typically in cm or m) that the hanging block moves relative to the foot block, in the direction specified by the rake angle. The 'hanging block' is the one above the dipping fault plane (or to the right when moving in the strike direction). It is always a positive quantity.
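The parameter list above translates directly into a small container with range checks; this is a hypothetical helper (field names and units are our own), not part of any tsunami code:

```python
from dataclasses import dataclass

@dataclass
class FaultPatch:
    # minimal parameter set for one rectangular subfault patch
    length_m: float    # along-strike extent of the fault plane
    width_m: float     # down-dip extent of the fault plane
    latitude: float    # reference point on the fault plane
    longitude: float   # (centroid or center of the top edge)
    depth_m: float     # depth of the reference point below the sea floor
    strike_deg: float  # 0..360, clockwise from North
    dip_deg: float     # 0..90, downward from the top edge
    rake_deg: float    # -180..180, counterclockwise from strike
    slip_m: float      # positive slip of the hanging block

    def validate(self):
        assert 0.0 <= self.strike_deg <= 360.0
        assert 0.0 <= self.dip_deg <= 90.0
        assert -180.0 <= self.rake_deg <= 180.0
        assert self.slip_m >= 0.0

# a subduction-style patch: rake close to 90 degrees
patch = FaultPatch(20000.0, 10000.0, 38.3, 142.4, 5000.0,
                   193.0, 10.0, 88.0, 2.5)
patch.validate()
```

In a UQ setting, these are precisely the quantities that are promoted to random variables and sampled by the MC/MLMC loop.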
The slip on the fault plane(s) must be translated into seafloor deformation. This is often done using Okada's model [3], which is derived from a Green's function solution to the elastic half-space problem. Uniform displacement of the solid over a finite rectangular patch, specified using the parameters described above, leads to a steady-state solution in which the seafloor is deformed. This deformation is transmitted instantaneously to the free surface, generating an initial condition for the shallow-water system. Note that Okada's model is a rough approximation, since the actual seafloor is rarely flat and the actual earth is not the homogeneous isotropic elastic material assumed in this model. However, it is often considered a reasonable approximation for the free-surface displacement in tsunami simulations, particularly since the fault slip parameters are generally not known very well, even for historical earthquakes, so a more accurate modeling of the resulting seafloor deformation may not be justified.
2.1 A two-layer Savage-Hutter type model for simulating landslide-generated tsunamis
In [6] a model for the simulation of tsunamis generated by submarine landslides was presented for 1D geometries. Here, we consider its natural extension to 2D domains, so that problems with real bathymetries can be simulated. Following [6], we consider a stratified medium composed of an inviscid, homogeneous fluid with constant density \(\rho_{1}\) (water) and a fluidized granular material with density \(\rho_{s}\) and porosity \(\psi_{0}\). We suppose that the fluid and the granular material are immiscible and that the mean density of the granular material is given by \(\rho_{2} = (1 - \psi_{0}) \rho_{s} + \psi_{0} \rho_{1}\). The following 2D system is derived under the assumption of shallow flows and can be used to simulate the interaction of a granular landslide with the ambient water (see [6] for details of its derivation in 1D):
being
In the previous system, \(h_{l}(\mathbf {x},t)\), \(l=1,2\), denotes the thickness of the water layer (\(l=1\)) and of the granular material (\(l=2\)), respectively, at point \(\mathbf {x}\in D \subset\mathbb{R}^{2}\) at time t, where D is the horizontal projection of the 3D domain where the landslide and tsunami take place. \(H(\mathbf {x})\) is the depth of the bottom at point x measured from a fixed reference level. Let us also define the function \(\eta_{1}(\mathbf {x},t)=h_{1}(\mathbf {x},t)+h_{2}(\mathbf {x},t)-H(\mathbf {x})\), which corresponds to the free surface of the fluid, and \(\eta_{2}(\mathbf {x},t)=h_{2}(\mathbf {x},t)-H(\mathbf {x})\), the interface between the granular layer and the fluid. Let us denote by \(\mathbf {q}_{l}(\mathbf {x},t) = ( q_{l,x}(\mathbf {x},t), q_{l,y}(\mathbf {x},t) )\) the mass-flow of the l-th layer at point x at time t. The mass-flow is related to the height-averaged velocity \(\mathbf {u}_{l}(\mathbf {x},t)\) by means of the expression \(\mathbf {q}_{l}(\mathbf {x},t) = h_{l}(\mathbf {x},t) \mathbf {u}_{l}(\mathbf {x},t)\), \(l = 1, 2\). Finally, \(r = \rho_{1} / \rho_{2}\) is the ratio of the constant densities of the layers (\(\rho_{1} < \rho_{2}\)).
The terms \(S_{f_{k}} (U) \), \(k= 1, \ldots, 4\), model the different friction effects, while \(\boldsymbol {\tau }= (\tau_{x}, \tau_{y}) \) is the Coulomb friction law. \(S_{f_{k}} (U)\), \(k = 1, \ldots, 4\), are given by:
\(S_{c}(U) = ( S_{c_{x}}(U), S_{c_{y}}(U) )\) parameterizes the friction between the two layers, and is defined as:
where \(m_{f}\) is a positive constant.
\(S_{l}(U) = ( S_{l_{x}}(U), S_{l_{y}}(U) )\), \(l=1,2\), parameterizes the friction between the fluid and the non-erodible bottom (\(l=1\)) and between the granular material and the non-erodible bottom (\(l=2\)); both are given by a Manning law
where \(n_{l} > 0\) (\(l=1,2\)) is the Manning coefficient. Note that \(S_{1}(U)\) is only defined where \(h_{2}(x,y,t) = 0\). In this case, \(m_{f} = 0\) and \(n_{2} = 0\). Similarly, if \(h_{1}(x,y,t) = 0\) then \(m_{f} = 0\) and \(n_{1} = 0\).
Finally, the Coulomb friction term \(\boldsymbol {\tau }= (\tau_{x}, \tau_{y})\) controls the stopping mechanism of the landslide and it is defined as follows:
where \(\sigma^{c} = g (1-r) h_{2} \tan(\alpha)\), with α the Coulomb friction angle. Let us remark that r is set to zero in \(\sigma^{c}\) and τ if \(h_{1}(x,y,t)=0\), that is, if the landslide is aerial.
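A minimal sketch of this stopping mechanism (our own illustration; the full definition, including the rest/motion threshold test, is the one displayed above): when the granular layer moves, the friction acts opposite to \(\mathbf{u}_{2}\) with magnitude \(\sigma^{c} = g(1-r)h_{2}\tan(\alpha)\), and r is set to zero for an aerial landslide:

```python
import math

G = 9.81

def coulomb_friction(h1, h2, q2x, q2y, alpha, rho1, rho2, eps=1e-8):
    # r is set to zero when h1 = 0 (aerial landslide), as in the text
    r = rho1 / rho2 if h1 > eps else 0.0
    sigma_c = G * (1.0 - r) * h2 * math.tan(alpha)   # friction magnitude
    qnorm = math.hypot(q2x, q2y)
    if qnorm < eps:
        # layer at rest: Coulomb friction balances the driving force
        return 0.0, 0.0
    # friction opposes the direction of motion of the granular layer
    return -sigma_c * q2x / qnorm, -sigma_c * q2y / qnorm
```

Note that the aerial case (\(h_{1}=0\), so \(r=0\)) yields a larger \(\sigma^{c}\) than the submerged case: buoyancy reduces the effective weight of the granular layer.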
Note that the previous model reduces to the usual one-layer shallow-water system if \(h_{2}=0\) and to the Savage-Hutter model if \(h_{1}=0\).
Finally, some stationary solutions of interest for the above system are those of water at rest, i.e., \({\mathbf {u}}_{l}={\mathbf{0}}\), \(l=1,2\), given by:
and, in particular,
are solutions of the previous system.
Notice that both systems (1) and (3) can be rewritten as
by considering \(W=[ U,H]^{T}\) and
where \(J_{i}(U)=\frac{\partial F_{i}}{\partial U} (U)\), \(i=1,2\) denote the Jacobians of the fluxes \(F_{i}\), \(i=1,2\) and
The term \(\widetilde{S}_{F}(W)\), which corresponds to the different parameterizations of the friction terms, will be discretized semi-implicitly as in [6, 39] or [40]. Therefore, at this stage it will be neglected, and we consider the homogeneous system
where \(W(\mathbf{x},t)\) takes values on a convex domain Ω of \(\mathbb{R}^{N}\) and \(\mathcal{A}_{i}\), \(i=1,2\), are two smooth and locally bounded matrixvalued functions from Ω to \(\mathcal {M}_{N \times N}(\mathbb{R})\). We also assume that (7) is strictly hyperbolic, i.e. for all \(W \in\Omega\) and \(\forall{ \boldsymbol {\eta }}=(\eta_{x},\eta _{y}) \in\mathbb{R}^{2}\), the matrix
has N real and distinct eigenvalues
and \(\mathcal{A}(W,\boldsymbol {\eta })\) is thus diagonalizable.
The nonconservative products \(\mathcal{A}_{1}(W)W_{x}\) and \(\mathcal {A}_{2}(W)W_{y}\) do not make sense as distributions if W is discontinuous. However, the theory developed by Dal Maso, LeFloch and Murat in [7] makes it possible to give a rigorous definition of nonconservative products as bounded measures, provided that a family of Lipschitz continuous paths \(\Phi\colon[0,1]\times\Omega\times \Omega\times\mathcal{S}^{1}\to\Omega\) is prescribed, where \(\mathcal {S}^{1} \subset\mathbb{R}^{2}\) denotes the unit sphere. This family must satisfy certain natural regularity conditions, in particular:

1. \(\Phi(0;W_{L},W_{R}, \boldsymbol {\eta })=W_{L}\) and \(\Phi(1;W_{L},W_{R}, \boldsymbol {\eta })=W_{R}\), for any \(W_{L},W_{R}\in\Omega\), \(\boldsymbol {\eta }\in\mathcal{S}^{1}\).

2. \(\Phi(s;W_{L},W_{R}, \boldsymbol {\eta }) = \Phi(1-s;W_{R},W_{L}, -\boldsymbol {\eta })\), for any \(W_{L},W_{R}\in\Omega\), \(s \in[0,1]\), \(\boldsymbol {\eta }\in\mathcal{S}^{1}\).
The choice of this family of paths should be based on the physics of the problem: for instance, it should be based on the viscous profiles corresponding to a regularized system in which some of the neglected terms (e.g. the viscous terms) are taken into account. Unfortunately, the explicit calculation of viscous profiles for a regularization of (7) is in general a difficult task. A detailed description of how paths can be chosen is given in [8]. An alternative is the 'canonical' choice given by the family of segments:
which corresponds to the definition of nonconservative products proposed by Volpert (see [41]). As shown in [42], the family of segments is a sensible choice, as it provides a third-order approximation in the phase plane of the correct jump condition.
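In the scalar case, the segment path reduces the nonconservative product across a jump to \(\int_{0}^{1} A(w_{L} + s(w_{R}-w_{L}))\,ds\,(w_{R}-w_{L})\); the sketch below (our own illustration) evaluates it with the midpoint rule and checks that, for a conservative \(A = f'\), the result is the flux difference \(f(w_{R})-f(w_{L})\), independently of the path:

```python
def segment_path_integral(a, wl, wr, n=200):
    # midpoint-rule evaluation of int_0^1 A(Phi(s)) Phi'(s) ds for the
    # segment path Phi(s) = wl + s*(wr - wl), scalar case
    dw = wr - wl
    total = 0.0
    for k in range(n):
        s = (k + 0.5) / n
        total += a(wl + s * dw)
    return (total / n) * dw

# Burgers' flux f(w) = w^2 / 2, so A(w) = f'(w) = w;
# the integral must equal f(3) - f(1) = 4.5 - 0.5 = 4
val = segment_path_integral(lambda w: w, 1.0, 3.0)
```

For genuinely nonconservative systems the value of this integral, and hence the jump condition, does depend on the chosen path, which is the whole point of prescribing the family Φ.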
Suppose that a family of paths Φ in Ω has been chosen. Then a piecewise regular function W is a weak solution of (7) if and only if the two following conditions are satisfied:

(i) W is a classical solution where it is smooth.

(ii) At every point of a discontinuity, W satisfies the jump condition
$$ \int_{0}^{1} \bigl(\sigma\mathcal{I} - \mathcal{A} \bigl(\Phi \bigl(s;W^{-},W^{+}, \boldsymbol {\eta }\bigr), \boldsymbol {\eta }\bigr) \bigr) \frac {\partial\Phi}{\partial s} \bigl(s;W^{-},W^{+}, \boldsymbol {\eta }\bigr)\,ds=0, \quad (9) $$
where \(\mathcal{I}\) is the identity matrix; σ is the speed of propagation of the discontinuity; η is a unit vector normal to the discontinuity at the considered point; and \(W^{-}\), \(W^{+}\) are the lateral limits of the solution at the discontinuity.
As in conservative systems, together with the definition of weak solutions, a notion of entropy has to be chosen. We will assume here that the system can be endowed with an entropy pair \((\mathcal{H}, \mathbf{G})\), i.e. a pair of regular functions \(\mathcal{H}: \Omega \to\mathbb{R}\) and \(\mathbf{G} = (G_{1}, G_{2}): \Omega\to\mathbb{R}^{2}\) such that:
Then, a weak solution is said to be an entropy solution if it satisfies the inequality
in the sense of distributions.
3 Highorder finite volume schemes
To discretize (7), the computational domain D is decomposed into subsets with a simple geometry, called cells or finite volumes, \(V_{i} \subset \mathbb {R}^{2}\). It is assumed that the cells are closed convex polygons whose intersections are either empty, a complete edge or a vertex. Denote by \(\mathcal{T}\) the mesh, i.e., the set of cells, and by NV the number of cells. In this work we consider rectangular structured meshes, but the derivation of the scheme is done for arbitrary meshes.
Given a finite volume \(V_{i}\), \(\vert V_{i}\vert\) will represent its area; \(N_{i} \in \mathbb{R}^{2}\) its center; \(\mathcal{N}_{i}\) the set of indexes j such that \(V_{j}\) is a neighbor of \(V_{i}\); \(E_{ij}\) the common edge of two neighboring cells \(V_{i}\) and \(V_{j}\), and \(\vert E_{ij} \vert\) its length; \(d_{ij}\) the distance from \(N_{i}\) to \(E_{ij}\); \(\boldsymbol {\eta }_{ij}=(\eta_{ij,x},\eta_{ij,y})\) the normal unit vector at the edge \(E_{ij}\) pointing towards the cell \(V_{j}\); \(W_{i}^{n}\) the constant approximation to the average of the solution in the cell \(V_{i}\) at time \(t^{n}\) provided by the numerical scheme:
Given a family of paths Φ, a Roe linearization of system (7) is a function
satisfying the following properties for each \(W_{L}, W_{R}\in\Omega\) and \(\boldsymbol {\eta }\in S^{1}\):

1. \(\mathcal{A}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\) has N distinct real eigenvalues
$$\lambda_{1} (W_{L},W_{R}, \boldsymbol {\eta })< \lambda_{2}(W_{L},W_{R}, \boldsymbol {\eta }) < \cdots< \lambda_{N}(W_{L},W_{R}, \boldsymbol {\eta }). $$

2. \(\mathcal{A}_{\Phi}(W,W, \boldsymbol {\eta })=\mathcal{A}(W, \boldsymbol {\eta })\).

3. $$\begin{aligned}& \mathcal{A}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\cdot(W_{R}-W_{L}) \\& \quad =\int_{0}^{1}\mathcal{A} \bigl(\Phi(s;W_{L},W_{R}, \boldsymbol {\eta }), \boldsymbol {\eta }\bigr) \frac{\partial\Phi}{\partial s}(s;W_{L},W_{R}, \boldsymbol {\eta })\,ds. \end{aligned}$$ (10)
Note that in the particular case in which \(\mathcal{A}_{k}(W)\), \(k=1,2\), are the Jacobian matrices of smooth flux functions \(F_{k}(W)\), property (10) does not depend on the family of paths and reduces to the usual Roe property:
for any \(\boldsymbol {\eta }\in S^{1}\), where
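For the 1D shallow-water system without topography, this Roe property can be checked directly: with the Roe-averaged velocity and the arithmetic mean of the depths, the linearization satisfies \(J(U_{L},U_{R})(U_{R}-U_{L}) = F(U_{R})-F(U_{L})\) exactly. A small self-contained check (our own illustration):

```python
import math

G = 9.81

def flux(h, q):
    # physical flux of the 1D shallow-water system, U = (h, q)
    return (q, q * q / h + 0.5 * G * h * h)

def roe_matrix(hl, ql, hr, qr):
    # classical Roe linearization: Roe-averaged velocity, mean depth
    ul, ur = ql / hl, qr / hr
    sl, sr = math.sqrt(hl), math.sqrt(hr)
    u_hat = (sl * ul + sr * ur) / (sl + sr)
    h_hat = 0.5 * (hl + hr)
    return [[0.0, 1.0],
            [G * h_hat - u_hat * u_hat, 2.0 * u_hat]]

hl, ql, hr, qr = 2.0, 1.0, 1.0, -0.5
J = roe_matrix(hl, ql, hr, qr)
dU = (hr - hl, qr - ql)
lhs = (J[0][0] * dU[0] + J[0][1] * dU[1],
       J[1][0] * dU[0] + J[1][1] * dU[1])
fl, fr = flux(hl, ql), flux(hr, qr)
rhs = (fr[0] - fl[0], fr[1] - fl[1])   # equals lhs by the Roe property
```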
Given a Roe matrix \(\mathcal{A}_{\Phi}(W_{L},W_{R},\boldsymbol {\eta })\), let us define a decomposition of it as follows:
where \(\mathcal{Q}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\) is a positive-definite matrix that can be seen as the viscosity matrix associated with the method.
Now, it is straightforward to define a pathconservative scheme in the sense defined in [9] based on the previous decomposition:
where \(\widehat{\mathcal{A}}_{ij}^{-}= \widehat{\mathcal{A}}^{-}_{\Phi}(W_{i},W_{j}, \boldsymbol {\eta }_{ij})\).
Moreover, taking into account the structure of the matrix \(\mathcal {A}_{\Phi}(W_{L},W_{R}, \boldsymbol {\eta })\), it is possible to rewrite (12) for the system (1) or (3) as follows:
where
where the path is supposed to be given by \(\Phi=(\Phi_{U}, \Phi_{H})^{T}\) and
with
with
The matrix \(A_{ij}\) is defined as follows
where \(J_{ij}\) is a Roe matrix for the flux \(F_{\boldsymbol {\eta }}(U)\), that is
Considering segments as paths, it is straightforward to compute the matrices \(B_{ij}\), \(J_{ij}\) and the vector \(S_{ij}\) for systems (1) and (3) (see [43] for the detailed expressions).
Finally, in order to fully define the numerical scheme (13)-(14), the matrix \(Q_{ij}\), which plays the role of the viscosity matrix, must be defined. The Roe method is obtained if \(Q_{ij}=\vert A_{ij}\vert\). Note that with this choice one needs to perform the complete spectral decomposition of the matrix \(A_{ij}\). In many situations, as in the case of the multilayer shallow-water system, it is not possible to obtain an analytical expression for the eigenvalues and eigenvectors, and some numerical algorithm must be used to perform the spectral decomposition of \(A_{ij}\), increasing the computational cost of the Roe method.
A rough alternative is given by the local Lax-Friedrichs (or Rusanov) method, in which the matrix \(Q_{ij}\) can be seen as an approximation of \(\vert A_{ij}\vert\) given by a diagonal matrix defined in terms of the largest eigenvalue of \(A_{ij}\) in absolute value. However, this approach gives excessive dissipation for all the waves corresponding to the other eigenvalues. At the beginning of the eighties, Harten, Lax and van Leer [44] realized that \(\vert A_{ij}\vert\) can be computed as a polynomial evaluation \(p(A_{ij})\), where \(p(x)\) interpolates \(\vert x\vert\) at the eigenvalues of \(A_{ij}\). They then proposed to use a linear interpolation based on the smallest and largest eigenvalues of \(A_{ij}\), which results in a considerable improvement over local Lax-Friedrichs at a computational cost much lower than that required for the full computation of \(\vert A_{ij}\vert\). This idea is the basis of the HLL method.
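The three choices can be compared on a 2×2 example (our own sketch; note that in the 2×2 case the linear interpolation at both eigenvalues reproduces \(\vert A\vert\) exactly, so the HLL viscosity matrix coincides with Roe's, and the distinction only appears for larger systems):

```python
import math

def viscosity_matrices(A):
    # eigenvalues of a 2x2 matrix with real distinct spectrum
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr - disc) / 2.0, (tr + disc) / 2.0

    # Rusanov: diagonal matrix scaled by the spectral radius
    rad = max(abs(l1), abs(l2))
    rusanov = [[rad, 0.0], [0.0, rad]]

    # HLL: p(x) = alpha*x + beta interpolating |x| at l1 and l2,
    # evaluated at A, so Q = alpha*A + beta*I
    alpha = (abs(l2) - abs(l1)) / (l2 - l1)
    beta = (l2 * abs(l1) - l1 * abs(l2)) / (l2 - l1)
    hll = [[alpha * a + beta, alpha * b],
           [alpha * c, alpha * d + beta]]

    # Roe: |A| itself; for a 2x2 matrix p interpolates |x| on the whole
    # spectrum, hence p(A) = |A| and the Roe matrix equals the HLL one
    roe = [row[:] for row in hll]
    return l1, l2, rusanov, hll, roe

# shallow-water-like Roe matrix with g*h - u^2 = 8.81, 2u = 2 (h = 1, u = 1)
l1, l2, Q_rus, Q_hll, Q_roe = viscosity_matrices([[0.0, 1.0], [8.81, 2.0]])
```

For N > 2 the degree-1 HLL polynomial no longer matches \(\vert x\vert\) on all eigenvalues, and the intermediate waves are smeared; this is exactly the gap that PVM/RVM methods close with higher-degree polynomial or rational approximations.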
To our knowledge, the paper by Degond et al. [45] contains the first attempt to construct a simple approximation of \(\vert A_{ij}\vert\) by means of a polynomial that approximates \(\vert x\vert\) without interpolating it exactly at the eigenvalues. This approach has been extended to a general framework in a recent paper [46], where the authors introduce the so-called PVM (Polynomial Viscosity Matrix) methods, which are defined in terms of viscosity matrices based on general polynomial evaluations of a given Roe matrix or of the Jacobian of the flux at some other average value.
Stability requires that the graph of the polynomial defining a PVM method lie above the graph of the absolute value function. Moreover, the closer its basis polynomial is to \(\vert x\vert\) in the uniform norm, the closer the behavior of a PVM scheme is to that of the Roe method. This suggests using accurate approximations of \(\vert x\vert\) to build PVM schemes that give results comparable to the Roe method at a much smaller computational cost. Following this idea, the authors of [47] propose a new PVM scheme based on Chebyshev polynomials, which provide optimal uniform approximations to \(\vert x\vert\). The order of approximation to \(\vert x\vert\) can be further improved by using rational functions instead of polynomials, which leads to a new family of schemes, RVM (Rational Viscosity Matrix), whose viscosity matrices \(Q_{ij}\) are built from appropriate rational functions. It is important to point out that RVM methods constitute a class of general-purpose Riemann solvers: they are constructed from a Roe matrix \(A_{ij}\) (or, more generally, the matrix \(A(U, \boldsymbol {\eta })\) evaluated at some average of the left and right states) and an estimate of its spectral radius, without making use of the spectral decomposition of the Roe matrix.
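The key ingredient of any PVM-type solver is evaluating a polynomial of the Roe matrix instead of computing \(\vert A_{ij}\vert\) exactly. The sketch below illustrates the idea with our own simple degree-2 choice \(p(x)=x^{2}+1/4\), which lies above \(\vert x\vert\) on \([-1,1]\) (the stability requirement); it is not the IFCP, Chebyshev or RVM solver of [47, 54], only an assumed minimal example of the mechanism.

```python
import numpy as np

def pvm_viscosity(A, lam_max):
    """Degree-2 PVM viscosity matrix Q = lam_max * p(A / lam_max) with
    p(x) = x^2 + 1/4, which satisfies p(x) >= |x| on [-1, 1].
    Only a spectral-radius estimate lam_max is needed: no
    eigendecomposition of A is performed."""
    n = A.shape[0]
    B = A / lam_max                        # scale the spectrum into [-1, 1]
    return lam_max * (B @ B + 0.25 * np.eye(n))

def roe_viscosity(A):
    """Reference Roe viscosity |A| via the full spectral decomposition."""
    lam, R = np.linalg.eig(A)
    return (R @ np.diag(np.abs(lam)) @ np.linalg.inv(R)).real

# toy example: a symmetric matrix with known eigenvalues +-1
A = np.array([[0.0, 1.0], [1.0, 0.0]])
Q_pvm = pvm_viscosity(A, lam_max=1.0)      # = p(+-1) * I = 1.25 * I
Q_roe = roe_viscosity(A)                   # = |A| = I
```

The PVM matrix slightly over-dissipates (its polynomial sits above \(\vert x\vert\)), which is the price paid for avoiding the eigendecomposition.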
Concerning the convergence of path-conservative schemes in the presence of nonconservative products, it has been proved in [48] and [8] that, in general, the numerical solutions provided by a path-conservative scheme converge to functions that solve a perturbed system with an error source term on the right-hand side. The appearance of this source term, which is a measure supported on the discontinuities, was first observed in [49] for a scalar conservation law discretized by a nonconservative numerical method. Nevertheless, in certain special situations the convergence error vanishes for finite difference methods: this is the case for systems of balance laws (see [50]). Moreover, for more general problems, even when the convergence error is present, it may only be noticeable on very fine meshes, for discontinuities of large amplitude, and/or for large-time simulations: see [8, 48] for details.
Finally, as usual, a CFL condition must be imposed to ensure stability:
with \(0<\delta\leq1\).
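The display equation for the CFL condition is not reproduced in this extraction; the sketch below implements a standard form of such a restriction, \(\Delta t = \delta\, \Delta x / \max_{k}\vert\lambda_{k}\vert\), as an illustration only (the paper's exact condition involves the cell and edge geometry of the unstructured mesh).

```python
import numpy as np

def cfl_time_step(dx, wave_speeds, delta=0.9):
    """Stable time step under a standard CFL restriction
    dt = delta * dx / max |lambda|, with 0 < delta <= 1.
    Illustrative form only, not the paper's exact mesh-dependent condition."""
    assert 0.0 < delta <= 1.0
    return delta * dx / np.max(np.abs(wave_speeds))

dt = cfl_time_step(dx=0.01, wave_speeds=np.array([-2.0, 0.5, 3.0]), delta=0.9)
```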
3.1 High-order extension
Following [43], the semi-discrete expression of the high-order extension of scheme (13)-(14), based on a given conservative reconstruction operator, is the following:
where \(P^{t}_{i}\) is the reconstruction approximation function at time t of \(U_{i}(t)\) at cell \(V_{i}\) defined using the stencil \(\mathcal{B}_{i}\):
and \(P^{H}_{i}\) is the reconstruction approximation function of H. The functions \(U_{ij}^{\pm}(\gamma,t)\) are given by
and \(H_{ij}^{\pm}(\gamma)\) are given by
In practice, the integral terms in (16) must be approximated numerically using high-order quadrature formulas, whose order must be related to the order of the reconstruction operator (see [43] for more details). Here, a MUSCL-type reconstruction operator [51] that achieves second-order accuracy is used. For time stepping, the second-order TVD Runge-Kutta method described in [52] is used.
The well-balancedness properties of scheme (16) and the relation between the reconstructed variables, the reconstruction operators and the quadrature formulas have been analyzed in [43].
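The two ingredients named above, a MUSCL-type limited reconstruction and the second-order TVD Runge-Kutta method of [52], can be sketched generically as follows. This is an assumed scalar, periodic illustration with a minmod limiter, not the paper's reconstruction of (16) on unstructured meshes.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter used in MUSCL-type reconstructions."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_interface_states(u):
    """Second-order linear reconstruction inside each cell; returns the
    right/left face extrapolations (periodic stencil for brevity)."""
    du_l = u - np.roll(u, 1)               # backward differences
    du_r = np.roll(u, -1) - u              # forward differences
    slope = minmod(du_l, du_r)
    return u + 0.5 * slope, u - 0.5 * slope

def tvd_rk2_step(u, dt, L):
    """Second-order TVD (SSP) Runge-Kutta method of Gottlieb & Shu [52]:
    u1 = u + dt*L(u);  u^{n+1} = (u + u1 + dt*L(u1)) / 2."""
    u1 = u + dt * L(u)
    return 0.5 * (u + u1 + dt * L(u1))
```

On constant data the limited slopes vanish, so the reconstruction reduces to the first-order cell averages, consistent with the well-balancing requirements discussed in [43].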
4 Monte Carlo method
4.1 Modeling uncertain inputs
As mentioned in the Introduction, it is very difficult to measure precisely some of the physical parameters appearing in systems (1) or (3). This is the case for the parameters in the friction terms, the ratio of densities in a non-homogeneous medium, or the parameters describing the fault deformation in Okada's model. Uncertainty in the input values of these parameters leads to uncertainty in the solution of system (1) or (3). Therefore, denoting by \((\Sigma,F, \mathbb {P})\) the complete probability space, \(U(t,\mathbf {x},\xi)\), \(\xi\in\Sigma\), is the solution of the system
4.2 Monte Carlo finite volume method
In order to approximate the random system of equations (17), we need to discretize the probability space. The simplest sampling method is the Monte Carlo (MC) algorithm that consists of the following steps:

1. Sample: We draw M independent identically distributed (i.i.d.) samples \(\xi_{k}\) of the random fields.

2. Solve: For each realization of the parameters \(\xi_{k}\), the underlying system is solved. Let the finite volume solutions be denoted by \(U_{\mathcal {T}}^{k,n}\), i.e. by the cell averages \(\{U_{i}^{k,n}: V_{i}\in \mathcal {T}\}\) at time level \(t^{n}\):
$$U_{\mathcal {T}}^{k,n}(x)=U_{i}^{k,n},\quad \forall x\in V_{i}, V_{i}\in \mathcal {T}. $$
3. Estimate statistics: We estimate the expectation of the random solution field with the sample mean (ensemble average) of the approximate solutions:
$$ E_{M} \bigl[U_{\mathcal {T}}^{n} \bigr]:= \frac{1}{M} \sum_{k=1}^{M} U_{\mathcal {T}}^{k,n}. $$(18)
Higher statistical moments can be approximated analogously (see [32]).
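The three steps can be sketched as follows; `solve` is a placeholder standing in for one deterministic finite volume run, and the parameter range is assumed for illustration. The generator is NumPy's MT19937, i.e. the Mersenne Twister PRNG used in this work.

```python
import numpy as np

rng = np.random.Generator(np.random.MT19937(seed=42))  # Mersenne Twister PRNG

def solve(xi):
    """Placeholder for the deterministic finite volume solve: maps one
    parameter sample xi to cell averages on the mesh."""
    x = np.linspace(-5.0, 5.0, 64)         # cell centres (illustrative)
    return np.exp(-x**2) * xi              # stand-in 'solution field'

# step 1: draw M i.i.d. samples of the uncertain parameter
M = 200
xis = rng.uniform(0.8, 1.2, size=M)        # assumed +-20% around a mean of 1

# step 2: solve the system once per sample
runs = np.array([solve(xi) for xi in xis])

# step 3: sample mean (18) and, analogously, higher moments
mean_field = runs.mean(axis=0)
var_field = runs.var(axis=0, ddof=1)
```

Note that samples interact only in step 3, which is what makes the method non-intrusive and trivially parallelizable.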
The above algorithm is quite simple to implement. We remark that step 1 requires a (pseudo) random number generator (PRNG). In this work we use the Mersenne Twister PRNG [53], which has a period of \(2^{19{,}937}-1\). In step 2, an existing code can be used. Furthermore, the only (data) interaction between different samples occurs in step 3, when ensemble averages are computed. Thus, the MC method is non-intrusive as well as easily parallelizable.
Although a rigorous error estimate for the MC method approximating the shallow-water systems (1) or (3) is currently out of reach, we rely on the analysis for a scalar conservation law (see [32]) and on the numerical experience with the MLMC-FV solution of nonlinear hyperbolic systems of conservation laws with random initial data (see [36]) to postulate that the following estimate holds if the solution has finite second moments:
Here, the \(L^{2}(\Sigma;L^{1}(D))\)norm of the random function \(f(\cdot ,\xi)\) is defined as
and \(C_{\mathrm{stat}}\), \(C_{st}\) are constants that depend on the domain D, the initial condition, the topography, the time horizon T and the statistics of the random parameters, in particular on their means and variances. In the above, we have assumed that the underlying finite volume scheme converges to the solutions of the deterministic system (1) or (3) at a rate \(s > 0\). Moreover, in (19) and throughout the following, we adopt the convention, customary in the analysis of MC methods, of interpreting the MC samples \(U_{\mathcal {T}}^{k,n}\) in (18) as i.i.d. random functions with the same law as U. Based on the error analysis of [32], we need to choose
in order to equilibrate the statistical error with the spatiotemporal error in (19).
Consequently, it is straightforward to deduce that the asymptotic error vs. (computational) work estimate is given by (see [32])
where d is the space dimension (in this paper \(d=1\) or \(d=2\)). This error vs. work estimate is considerably more expensive than that of the deterministic FVM, whose error scales as \((\mbox{Work})^{-s/(d+1)}\). Thus, for a low convergence order s, the MC-FVM converges at a considerably reduced rate in terms of accuracy vs. work. On the other hand, for high-order schemes (i.e. \(s\gg d+1\)) the MC sampling error dominates and we obtain the rate \(-1/2\) in terms of work, which is typical of MC methods.
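These scalings follow from a short work count. Assuming error \(\sim\Delta x^{s}\), sample number \(M=(\Delta x)^{-2s}\) and cost per deterministic solve \(\sim\Delta x^{-(d+1)}\), the total work is \(\Delta x^{-(d+1+2s)}\) and the exponents can be computed exactly:

```python
from fractions import Fraction

def mc_fvm_rate(s, d):
    """MC-FVM: error ~ dx^s, samples M = dx^(-2s), cost per sample
    ~ dx^-(d+1)  =>  total work ~ dx^-(d+1+2s),
    hence error ~ (Work)^(-s/(d+1+2s))."""
    return s / (d + 1 + 2 * s)

def det_fvm_rate(s, d):
    """Deterministic FVM: error ~ dx^s, work ~ dx^-(d+1),
    hence error ~ (Work)^(-s/(d+1))."""
    return s / (d + 1)

s, d = Fraction(1, 2), 1       # first-order scheme in one space dimension
mc_exp = mc_fvm_rate(s, d)     # exponent 1/6 for MC-FVM
det_exp = det_fvm_rate(s, d)   # exponent 1/4 for the deterministic FVM
```

For \(s=1/2\) and \(d=1\) the MC-FVM exponent is \(1/6\) against \(1/4\) for a single deterministic solve, and letting \(s\to\infty\) recovers the MC rate \(1/2\).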
5 Multilevel Monte Carlo finite volume method
Given the slow convergence of the MC-FV method, the Multilevel Monte Carlo finite volume method (MLMC-FV) was proposed in [32] and [36]. The key idea behind MLMC-FV is to draw MC samples simultaneously on a hierarchy of nested grids.
The algorithm consists of the following four steps:

1. Nested meshes: Consider nested meshes \(\{\mathcal {T}_{l}\}_{l=0}^{\infty}\) of the spatial domain D with corresponding mesh diameters \(\Delta x_{l}\) that satisfy
$$\Delta x_{l}= \sup \bigl\{ \operatorname{diam}(V_{i}) : V_{i} \in \mathcal {T}_{l} \bigr\} = \mathcal {O} \bigl(2^{-l} \Delta x_{0} \bigr),\quad l\in\mathbb{N}_{0}, $$
where \(\Delta x_{0}\) is the mesh width of the coarsest resolution, corresponding to the lowest level \(l=0\).

2. Sampling: For each level of resolution \(l\in\mathbb {N}_{0}\), we draw \(M_{l}\) independent identically distributed (i.i.d.) samples \(\xi_{l}^{k}\), \(k = 1, 2,\ldots ,M_{l}\), from the set of admissible parameters of the model.

3. Solve: For each resolution level l and each realization \(\xi_{l}^{k}\), the underlying system (17) is solved on the mesh \(\mathcal {T}_{l}\). Let the finite volume solution be denoted by \(U_{\mathcal {T}_{l}}^{k,n}\) for the mesh \(\mathcal {T}_{l}\) at time level \(t^{n}\).

4. Estimate solution statistics: Fix some positive integer \(L < \infty\) corresponding to the highest level. We estimate the expectation of the random solution field with the following estimator:
$$ E^{L} \bigl[U \bigl(\cdot,t^{n} \bigr) \bigr]:= E_{M_{0}} \bigl[U_{\mathcal {T}_{0}}^{n} \bigr] + \sum _{l=1}^{L} E_{M_{l}} \bigl[U_{\mathcal {T}_{l}}^{n} - U_{\mathcal {T}_{l-1}}^{n} \bigr], $$(21)
with \(E_{M_{l}}\) being the MC estimator
$$ E_{M_{l}} \bigl[U_{\mathcal {T}_{l}}^{n} \bigr]:= \frac{1}{M_{l}} \sum_{k=1}^{M_{l}} U_{\mathcal {T}_{l}}^{k,n} $$(22)
for the level l. Higher statistical moments can be approximated analogously (see [32]).
The MLMC-FV method is non-intrusive, as any standard FVM code can be used in step 3. Furthermore, it is amenable to efficient parallelization, since data from different grid resolutions and different samples interact only in step 4.
Following the rigorous error estimates in [32, 36], we consider
Here s again refers to the convergence rate of the deterministic finite volume scheme and \(C_{1,2,3}\) are constants depending only on the initial data, the parameters and the source term. From the error estimate (23), we obtain that the number of samples to equilibrate the statistical and spatiotemporal discretization errors in (21) is given by
Notice that the choice of \(M_{l}\) implies that the largest number of MC samples is required on the coarsest mesh level \(l=0\), whereas only a small fixed number of MC samples are needed on the finest discretization levels.
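The telescoping estimator (21)-(22) with a level-dependent sample allocation can be sketched as follows. Everything problem-specific is assumed: `solve_on_level` is a placeholder for the finite volume solve on \(\mathcal{T}_{l}\), the allocation \(M_{l}=M_{L}\,2^{2s(L-l)}\) is one choice consistent with the equilibration described above, and a simple cell-doubling prolongation replaces the mesh transfer of the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_on_level(xi, level, n0=32):
    """Placeholder for the deterministic FV solve on mesh T_l with
    n0 * 2**level cells; stands in for U_{T_l} in (21)."""
    n = n0 * 2 ** level
    x = np.linspace(-5.0, 5.0, n)
    return np.exp(-x**2) * xi + xi / n     # resolution-dependent stand-in

def mlmc_mean(L=4, M_top=16, s=0.5, n0=32):
    """Telescoping estimator (21): MC mean on the coarsest level plus MC
    means of the details U_l - U_{l-1}, with sample numbers decreasing
    with level (M_l = M_top * 2**(2*s*(L-l)): most samples at l = 0)."""
    contributions = []
    for l in range(L + 1):
        M_l = int(M_top * 2 ** (2 * s * (L - l)))
        acc = np.zeros(n0 * 2 ** l)
        for _ in range(M_l):
            xi = rng.uniform(0.8, 1.2)     # one i.i.d. parameter sample
            u_l = solve_on_level(xi, l, n0)
            if l == 0:
                acc += u_l
            else:                          # same sample on T_l and T_{l-1}
                coarse = np.repeat(solve_on_level(xi, l - 1, n0), 2)
                acc += u_l - coarse
        contributions.append(acc / M_l)
    # sum the telescoping series, prolongating onto the finer meshes
    total = contributions[0]
    for detail in contributions[1:]:
        total = np.repeat(total, 2) + detail
    return total

mean_field = mlmc_mean()
```

With these defaults the allocation is \(M_{l}=16\cdot2^{4-l}\): 256 cheap coarse solves and only 16 expensive fine ones, which is where the efficiency gain over plain MC comes from.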
The corresponding error vs. work estimate for the MLMC-FV method is given by (see [32, 36])
provided \(s < (d + 1)/2\). This estimate shows that MLMC-FV is more efficient than MC-FV. Indeed, MLMC-FV is (asymptotically) of the same complexity as a single deterministic FVM solve.
6 Numerical experiments
6.1 Submarine landslide over a flat bottom topography
As a first numerical experiment, we consider a 1D example where the computational domain is \(x \in[5,5]\) with transparent boundary conditions at both boundaries, and a flat bottom topography, specified by \(H(x) = 2\). The initial data for the problem is
and
Our aim is thus to simulate a submarine landslide: the fluidized granular matter, denoted by index 2, slides under the water (denoted by index 1) and initiates a flow of the free surface.
We consider in this example that the ratio of the densities of the two layers r, the Coulomb friction angle \(\delta_{0}\) and the interlayer friction parameter \(c_{f}\) are uncertain. Here, we assume that these parameters are random variables following a uniform distribution with the following mean values:
Furthermore, the variation is assumed to be 40% about the mean for each of the three uniformly distributed uncertain parameters. Such variability is fairly representative of the variability in experiments and observations.
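Sampling uniformly with a given relative spread about a mean can be sketched as below. The mean values themselves are in the display equation not reproduced in this extraction, so the `means` dictionary uses placeholder values labeled as such, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_uniform_params(means, rel_spread=0.40, size=1):
    """Draw i.i.d. uniform samples with a given relative spread v about
    each mean: U(mu*(1 - v), mu*(1 + v)), here v = 0.40 (40%)."""
    return {name: rng.uniform(mu * (1 - rel_spread),
                              mu * (1 + rel_spread), size)
            for name, mu in means.items()}

# placeholder means (the actual values used in the paper appear in the
# omitted display equation and are NOT reproduced here)
means = {"r": 0.5, "delta0_deg": 12.0, "c_f": 0.001}
samples = sample_uniform_params(means, 0.40, size=1000)
```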
We perform UQ for the two-layer Savage-Hutter system with the above uncertain inputs using both the first- and second-order path-conservative schemes described in Section 3, with the viscosity matrix of the IFCP-PVM scheme described in [54], and using both the Monte Carlo (MC) and Multilevel Monte Carlo (MLMC) methods, described in Sections 4 and 5 respectively, to discretize the probability space.
The resulting mean and mean ± standard deviation (statistical spread) are presented in Figures 1 and 2. In Figure 1, we present statistics of the heights of both layers at time \(t=0.3\) seconds using a first-order scheme on a fine mesh of \(1\mbox{,}024\) cells. The scheme is combined with an MC simulation with \(M=1\mbox{,}024\) samples. This choice of sample number is based on the formula \(M=(\Delta x)^{-2s}\) in (20), which reduces to \(M=N\), with N the number of cells, since the convergence rate of the first-order scheme is \(s=1/2\). Similarly, the MLMC method (together with the first-order FVM scheme) is based on \(L=6\) levels of mesh resolution, ranging from \(N_{0}=32\) cells up to \(N_{5}=1\mbox{,}024\) cells, with \(M_{5}=16\) samples on the highest level of resolution.
The statistics for the heights of both layers simulated with the second-order version of the IFCP scheme, together with MC and MLMC discretizations of the probability space, are presented in Figure 2. Again, we choose a fine mesh resolution of \(1\mbox{,}024\) cells; \(M=512\) samples are used for the MC-FVM method. As in the first-order case, the second-order MLMC-FVM method uses \(L=6\) levels of mesh resolution, ranging from \(N_{0}=32\) cells up to \(N_{5}=1\mbox{,}024\) cells, with \(M_{5}=16\) samples on the highest level of resolution.
Comparing the two sets of methods, we observe that the second-order IFCP method resolves the waves more sharply, although the first-order method is quite competitive. Furthermore, the MC and MLMC methods are fairly comparable at the same mesh resolution. To compare the methods quantitatively, we compute a reference solution on a fine mesh with a large number of samples, and plot error vs. resolution as well as error vs. runtime for both the mean and the variance of the outer layer height \(h_{1}\); the results are displayed in Figure 3. Note that the statistical errors are estimated by a procedure first introduced in [32]. As shown in the figure, the second-order IFCP method yields smaller errors (for both mean and variance) than the first-order method, whether combined with the MC or the MLMC discretization of the probability space. On the other hand, the MC and MLMC methods (combined with either the first- or the second-order IFCP scheme) give very similar error amplitudes at the same mesh resolution. The main difference between the methods appears when computational efficiency, measured as error vs. runtime, is compared. As seen in Figure 3 (right column), the MLMC methods are approximately 60 to 80 times faster than the corresponding MC methods for the same level of error. This gain of almost two orders of magnitude in efficiency makes the MLMC methods instrumental for performing more realistic UQ simulations at an acceptable computational cost.
6.1.1 2D Lituya Bay mega-tsunami
On July 9, 1958, a magnitude 8.3 earthquake (on the Richter scale) along the Fairweather fault triggered a major subaerial landslide into Gilbert Inlet, at the head of Lituya Bay on the southern coast of Alaska (USA). The landslide impacted the water at very high speed, generating a giant tsunami with the highest recorded wave runup in history: the mega-tsunami ran up to an elevation of 524 m and caused total destruction of the forest, as well as erosion down to bedrock, on a spur ridge along the slide axis. Many attempts have been made to understand and simulate this mega-tsunami. The aim of this section is to produce a realistic, detailed and accurate simulation of the 1958 Lituya Bay mega-tsunami, while taking into account uncertainties in critical parameters such as the ratio of layer densities, the interlayer friction and the Coulomb friction angle. We use public-domain topo-bathymetric data as well as the review paper [55] to approximate the Gilbert Inlet topo-bathymetry.
We consider system (3) discretized with the first-order path-conservative scheme (13)-(14) whose viscosity matrix \(Q_{ij}\) is defined by the IFCP scheme described in [54]; friction terms are discretized semi-implicitly as in [6]. For simplicity, we use the first-order IFCP scheme. For fast computations, this scheme has been implemented on GPUs using CUDA. The two-dimensional scheme and its GPU implementation in single precision are described in detail in [40]. The MLMC-FV implementation has also been developed in CUDA, with all updates of the means and variances likewise implemented as CUDA kernels.
A rectangular grid of 3,648 × 1,264 = 4,611,072 cells with a resolution of 4 m × 7.5 m has been designed for this simulation. We compute with the first-order MLMC-FV-IFCP method with \(L = 4\) levels of resolution and \(M_{4} = 16\) samples on the highest level, which corresponds to the grid of 4,611,072 cells and constitutes the finest mesh. We allow 30% variability in the parameters \(c_{f}\), r and \(\delta_{0}\), whose mean values are \(c_{f} = 0.08\), \(r = 0.44\) and \(\delta_{0} = 13^{\circ}\). The CFL number is 0.9. Figures 4, 5, 6 and 7 show the mean solution and the variance at 39 s and 120 s.
The maximum runup is reached at 39 s. We can see in Figures 4 and 5 that the southward-propagating part of the initial wave reaches a maximum mean height of 50-60 m with a maximum standard deviation of 4-5 m.
While the initial wave moves along the main axis of Lituya Bay, a larger second wave appears as a reflection of the first one from the south shoreline (see Figures 6 and 7). These waves sweep both sides of the shoreline in their path. On the north shoreline the wave reaches heights of 15-20 m, while on the south shoreline it reaches mean values of 20-30 m with a standard deviation of 1.2-1.5 m.
The impact times, trimlines and the means and variances of the wave heights provided by the simulation are in good agreement with the majority of the observations and conclusions described in [55]; see [56] for more details. Furthermore, the computed standard deviation due to the uncertain parameters is about 5-10% of the mean. Compared to the initial parameter uncertainty of 30% of the mean, we see that the nonlinear evolution has damped the uncertainty and that the problem is only moderately sensitive to the three uncertain parameters. This enhances our confidence in previously reported numerical simulations of this event [56].
6.2 Earthquake-generated tsunami in the Mediterranean Sea
In this section we consider an example of UQ for a tsunami generated by an earthquake, supposing that the dip, strike and rake angles and the slip parameter are uncertain. The computational domain is the Western Mediterranean and the epicenter of the earthquake is located in the Ionian Sea. We use Okada's model to determine the seafloor deformation and hence the initial condition for the shallow-water system (1); uncertainty in the input values of these parameters leads to uncertainty in the solution of Okada's model and of system (1). Here we consider a second-order discretization of system (1) on the sphere by means of a PVM path-conservative scheme combined with a MUSCL-type reconstruction operator. As in the Lituya Bay example, this model, as well as the MLMC-FV method, has been implemented on GPUs using CUDA. A rectangular grid of 30 arcsec resolution with 7,019,520 cells has been used, with \(L=4\) levels of resolution and \(M_{4}=64\) samples on the highest level. We consider the following variability of the parameters:

\(\mathit{slip}= 10 \pm2\mbox{ m}\).
\(\mathit{dip} = 35^{\circ}\pm 10^{\circ}\).
\(\mathit{strike} = 31^{\circ}\pm 10^{\circ}\).
\(\mathit{rake} = 90^{\circ}\pm 10^{\circ}\).
The CFL number is 0.5 and the friction coefficient is 0.03. Figures 8, 9, 10 and 11 show the mean and the variance of the free surface at \(t=2\mbox{ h}\) and \(t=4\mbox{ h}\), respectively. In this case the computed standard deviation due to the uncertain parameters is less than 0.1 m throughout the tsunami propagation. The maxima are located near the coastal areas and at the wave fronts, as expected. Nevertheless, as in the Lituya Bay example, the nonlinear evolution has damped the uncertainty.
Concerning the GPU time, the complete MLMC-FV simulation takes about 1.5 hours of computing time, which is less than half of the real time, to provide UQ for such a big problem. This example shows that the MLMC-FV method is an attractive framework for UQ in real geophysical flows.
7 Conclusion
Many geophysical flows of interest, such as tsunamis generated by earthquakes, rockslides, avalanches, debris flows, etc., are modeled by shallow-water type systems. These models are characterized by physical parameters that are usually difficult to determine and that are prone to uncertainty. Consequently, the task of quantifying the resulting solution uncertainty (UQ) is of paramount importance in these simulations.
In this paper, we have presented a UQ paradigm that combines a path-conservative finite volume method, which accurately and robustly discretizes the underlying nonconservative hyperbolic system (1) or (3), with a Multilevel Monte Carlo statistical sampling algorithm. The algorithm computes on nested sequences of mesh resolutions and estimates statistical quantities by combining results from the different resolutions. The method is fully non-intrusive, easy to parallelize, fast and accurate. In particular, one can gain several orders of magnitude in computational efficiency vis-à-vis the standard Monte Carlo method.
We test the algorithms on a set of numerical examples on real bathymetries. The numerical results clearly indicate that the MLMC-FVM framework can approximate statistics of quantities of interest, such as runup heights, quite accurately and at reasonable computational cost when implemented efficiently on GPUs. There is also qualitative agreement with experimental and observed data. Furthermore, the UQ simulations help identify the sensitivity of simulation outputs to the underlying uncertain parameters, enabling an effective appraisal of sensitivity and enhancing risk analysis and hazard mitigation.
The current paper illustrates the power and utility of the MLMC UQ paradigm for tsunami modeling.
References
Fritz HM, Hager WH, Minor HE. Lituya Bay case: rockslide impact and wave run-up. Sci Tsunami Hazards. 2001;19(1):3-22.
Fritz HM, Mohammed F, Yoo J. Lituya Bay landslide impact generated mega-tsunami 50th anniversary. Pure Appl Geophys. 2009;166(1-2):153-75.
Okada Y. Surface deformation due to shear and tensile faults in a half space. Bull Seismol Soc Am. 1985;75:1135-54.
Yamazaki Y, Cheung KF, Pawlak G, Lay T. Surges along the Honolulu coast from the 2011 Tohoku tsunami. Geophys Res Lett. 2012;39(9):L09604.
Grilli ST, Harris JC, Bakhsh TST, Masterlark TL, Kyriakopoulos C, Kirby JT, Shi FY. Numerical simulation of the 2011 Tohoku tsunami based on a new transient FEM co-seismic source: comparison to far- and near-field observations. Pure Appl Geophys. 2013;170(6-8):1333-59.
Fernández-Nieto ED, Bouchut F, Bresch D, Castro MJ, Mangeney A. A new Savage-Hutter type model for submarine avalanches and generated tsunami. J Comput Phys. 2008;227:7720-54.
Dal Maso G, Lefloch P, Murat F. Definition and weak stability of nonconservative products. J Math Pures Appl. 1995;74:483-548.
Parés C, Muñoz Ruíz ML. On some difficulties of the numerical approximation of nonconservative hyperbolic systems. Bol Soc Esp Mat Apl. 2009;47:23-52.
Parés C. Numerical methods for nonconservative hyperbolic systems: a theoretical framework. SIAM J Numer Anal. 2006;44(1):300-21.
Che S, Boyer M, Meng J, Tarjan D, Sheaffer JW, Skadron K. A performance study of general-purpose applications on graphics processors using CUDA. J Parallel Distrib Comput. 2008;68:1370-80.
Owens JD, Houston M, Luebke D, Green S, Stone JE, Phillips JC. GPU computing. Proc IEEE. 2008;96:879-99.
NVIDIA. NVIDIA developer zone. http://developer.nvidia.com/category/zone/cudazone.
Khronos OpenCL Working Group. The OpenCL specification. http://www.khronos.org/opencl.
Fang J, Varbanescu AL, Sips H. A comprehensive performance comparison of CUDA and OpenCL. In: 40th international conference on parallel processing (ICPP 2011). 2011. p. 216-25.
Asunción M, Mantas JM, Castro MJ. Simulation of one-layer shallow water systems on multicore and CUDA architectures. J Supercomput. 2011;58:206-14.
Brodtkorb AR, Sætra ML, Altinakar M. Efficient shallow water simulations on GPUs: implementation, visualization, verification, and validation. Comput Fluids. 2012;55:1-12.
Asunción M, Mantas JM, Castro MJ. Programming CUDA-based GPUs to simulate two-layer shallow water flows. In: Euro-Par 2010 - parallel processing. 2010. p. 353-64. (Lecture notes in computer science; vol. 6272).
Castro MJ, Ortega S, Asunción M, Mantas JM, Gallardo JM. GPU computing for shallow water flow simulation based on finite volume schemes. C R Méc. 2011;339:165-84.
Message Passing Interface Forum: A Message Passing Interface Standard. University of Tennessee.
Acuña MA, Aoki T. Realtime tsunami simulation on a multinode GPU cluster [Poster]. In: ACM/IEEE conference on supercomputing 2009 (SC 2009). 2009.
Xian W, Takayuki A. Multi-GPU performance of incompressible flow computation by lattice Boltzmann method on GPU cluster. Parallel Comput. 2011;9:521-35.
Viñas M, Lobeiras J, Fraguela BB, Arenaz M, Amor M, García JA, Castro MJ, Doallo R. A multi-GPU shallow-water simulation with transport of contaminants. Concurr Comput, Pract Exp. 2012;25:1153-69.
Asunción M, Mantas JM, Castro MJ, Fernández-Nieto ED. An MPI-CUDA implementation of an improved Roe method for two-layer shallow water systems. J Parallel Distrib Comput. 2012;72(9):1065-72.
Jacobsen DA, Senocak I. Multilevel parallelism for incompressible flow computations on GPU clusters. Parallel Comput. 2013;39:1-20.
Bijl H, Lucor D, Mishra S, Schwab C, editors. Uncertainty quantification in computational fluid dynamics. Heidelberg: Springer; 2014. (Lecture notes in computational science and engineering; vol. 92).
Chen QY, Gottlieb D, Hesthaven JS. Uncertainty analysis for the steady-state flows in a dual throat nozzle. J Comput Phys. 2005;204:378-98.
Poette G, Després B, Lucor D. Uncertainty quantification for systems of conservation laws. J Comput Phys. 2009;228:2443-67.
Tryoen J, Le Maitre O, Ndjinga M, Ern A. Intrusive projection methods with upwinding for uncertain nonlinear hyperbolic systems. J Comput Phys. 2010;229(18):6485-511.
Xiu D, Hesthaven JS. High-order collocation methods for differential equations with random inputs. SIAM J Sci Comput. 2005;27:1118-39.
Zabaras N, Ma X. An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations. J Comput Phys. 2009;228:3084-113.
Tokareva S. Stochastic finite volume methods for computational uncertainty quantification in hyperbolic conservation laws. Dissertation, Nr. 21498. Eidgenössische Technische Hochschule ETH Zürich; 2013.
Mishra S, Schwab C. Sparse tensor multilevel Monte Carlo finite volume methods for hyperbolic conservation laws with random initial data. Math Comput. 2012;81(280):1979-2018.
Giles M. Improved multilevel Monte Carlo convergence using the Milstein scheme. Preprint NA06/22. Oxford Computing Lab., Oxford, UK; 2006.
Giles M. Multilevel Monte Carlo path simulation. Oper Res. 2008;56:607-17.
Heinrich S. Multilevel Monte Carlo methods. In: Large-scale scientific computing. Berlin: Springer; 2001. p. 58-67. (Lecture notes in computer science; vol. 2170).
Mishra S, Schwab C, Sukys J. Multilevel Monte Carlo finite volume methods for nonlinear systems of conservation laws in multi-dimension. J Comput Phys. 2012;231:3365-88.
Mishra S, Schwab C, Sukys J. Multilevel Monte Carlo finite volume methods for shallow water equations with uncertain topography in multi-dimensions. SIAM J Sci Comput. 2012;34(6):761-84.
Sánchez-Linares C, Asunción M, Castro MJ, Mishra S, Sukys J. Multilevel Monte Carlo finite volume method for shallow water equations with uncertain parameters applied to landslides-generated tsunamis. Appl Math Model. 2015;39(23-24):7211-26.
Bouchut F, Mangeney-Castelnau A, Perthame B, Vilotte JP. A new model of Saint-Venant and Savage-Hutter type for gravity driven shallow water flows. C R Math Acad Sci Paris. 2003;336(6):531-6.
Asunción M, Mantas JM, Castro MJ, Ortega S. Scalable simulation of tsunamis generated by submarine landslides on GPU clusters. Environ Model Softw (accepted May 2016).
Volpert AI. Spaces BV and quasilinear equations. Math USSR Sb. 1967;73:255-302.
Castro MJ, Fernández-Nieto ED, Morales de Luna T, Narbona-Reina G, Parés C. A HLLC scheme for nonconservative hyperbolic problems. Application to turbidity currents with sediment transport. ESAIM: Math Model Numer Anal. 2013;47(1):1-32.
Castro MJ, Fernández-Nieto ED, Ferreiro AM, García Rodríguez JA, Parés C. High order extensions of Roe schemes for two dimensional nonconservative hyperbolic systems. J Sci Comput. 2009;39(1):67-114.
Harten A, Lax PD, van Leer B. On upstream differencing and Godunov type schemes for hyperbolic conservation laws. SIAM Rev. 1983;25:35-61.
Degond P, Peyrard PF, Russo G, Villedieu P. Polynomial upwind schemes for hyperbolic systems. C R Acad Sci Paris Ser I. 1999;328:479-83.
Castro MJ, Fernández-Nieto ED. A class of computationally fast first order finite volume solvers: PVM methods. SIAM J Sci Comput. 2012;34:A2173-A2196.
Castro MJ, Gallardo JM, Marquina A. A class of incomplete Riemann solvers based on uniform rational approximations to the absolute value function. J Sci Comput. 2014;60(2):363-89.
Castro MJ, LeFloch PG, Muñoz ML, Parés C. Why many theories of shock waves are necessary: convergence error in formally path-consistent schemes. J Comput Phys. 2008;227:8107-29.
Hou TY, LeFloch PG. Why nonconservative schemes converge to wrong solutions: error analysis. Math Comput. 1994;62:497-530.
Muñoz ML, Parés C. On the convergence and well-balanced property of path-conservative numerical schemes for systems of balance laws. J Sci Comput. 2011;48:274-95.
van Leer B. Towards the ultimate conservative difference scheme. V. A second order sequel to Godunov's method. J Comput Phys. 1979;32:101-36.
Gottlieb S, Shu CW. Total variation diminishing Runge-Kutta schemes. Math Comput. 1998;67:73-85.
Matsumoto M, Nishimura T. Mersenne twister: a 623-dimensionally equidistributed uniform pseudorandom number generator. ACM Trans Model Comput Simul. 1998;8(1):3-30.
Fernández-Nieto ED, Castro MJ, Parés C. On an intermediate field capturing Riemann solver based on a parabolic viscosity matrix for the two-layer shallow water system. J Sci Comput. 2011;48(1-3):117-40.
Miller DJ. Giant waves in Lituya Bay, Alaska: a timely account of the nature and possible causes of certain giant waves, with eyewitness reports of their destructive capacity. Professional paper; 1960.
Asunción M, Castro MJ, González JM, Macías J, Ortega S, Sánchez C. Modeling the Lituya Bay landslidegenerated megatsunami with a SavageHutter shallow water coupled model. NOAA Technical memorandum; 2013.
Acknowledgements
This research has been partially supported by the Spanish Government research projects MTM2012-38383-C02-01 and MTM2015-70490-C2-1-R, the Andalusian Government research projects P11-FQM-8179 and P11-RNM-7069, and the Swiss Federal Institute of Technology, ETH (Zurich). The work of SM was partially supported by ERC STG No. 306279, SPARCCLE.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
MA implemented the finite volume solvers on GPU. CSL implemented the MLMC algorithm. JM and JMGV contributed to the design of the numerical tests of Lituya Bay and the earthquake-generated tsunami in the Mediterranean Sea. MJC designed the finite volume solvers for the different models and helped to draft the manuscript, and SM helped to draft the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Sánchez-Linares, C., de la Asunción, M., Castro, M.J. et al. Uncertainty quantification in tsunami modeling using multilevel Monte Carlo finite volume method. J. Math. Industry 6, 5 (2016). https://doi.org/10.1186/s13362-016-0022-8