 Open access
Certified reduced basis approximation for parametrized partial differential equations and applications
Journal of Mathematics in Industry volume 1, Article number: 3 (2011)
Abstract
Reduction strategies, such as model order reduction (MOR) or reduced basis (RB) methods, in scientific computing may become crucial in applications of increasing complexity. In this paper we review the reduced basis methods (built upon a high-fidelity ‘truth’ finite element approximation) for a rapid and reliable approximation of parametrized partial differential equations, and comment on their potential impact on applications of industrial interest. The essential ingredients of the RB methodology are: a Galerkin projection onto a low-dimensional space of properly selected basis functions, an affine parametric dependence enabling a competitive Offline-Online splitting of the computational procedure, and a rigorous a posteriori error estimation used for both the basis selection and the certification of the solution. The combination of these three factors yields substantial computational savings which are at the basis of an efficient model order reduction, ideally suited for real-time simulation and many-query contexts (for example, optimization, control or parameter identification). After a brief excursus on the methodology, we focus on linear elliptic and parabolic problems, discussing some extensions to more general classes of problems and several perspectives of the ongoing research. We present some results from applications dealing with heat and mass transfer, conduction-convection phenomena, and thermal treatments.
1 Introduction and motivation
Although increasing computer power makes it feasible to numerically solve problems of very large dimension that model complex phenomena, computational reduction is still determinant whenever one is interested in real-time simulations and/or repeated output evaluations for different values of some inputs of interest. For a general introduction to the development of the reduced basis methods we refer to [1–3].
In this work we review the reduced basis (RB) approximation and a posteriori error estimation methods for the rapid and reliable evaluation of engineering outputs associated with elliptic and parabolic parametrized partial differential equations (PDEs). In particular, we consider a (say, single) output of interest s(\mathit{\mu})\in \mathbb{R} expressed as a functional of a field variable u(\mathit{\mu}) that is the solution of a partial differential equation, parametrized with respect to the input parameter p-vector μ; the input parameter domain, that is, the set of all possible inputs, is a subset \mathcal{D} of {\mathbb{R}}^{p}. The input-parameter vector typically characterizes physical or material properties, geometrical configuration, or even boundary conditions and force fields or sources. The outputs of interest are physical quantities or indexes used to measure and assess the behavior of a system, related to field variables or fluxes, such as domain or boundary averages of the field variables, or other quantities such as energies, drag forces, flow rates, and so on. For the sake of simplicity, we consider throughout the paper the case of a linear output of a field variable, that is, s(\mathit{\mu})=l(u(\mathit{\mu})) for a suitable linear operator l(\cdot ). Finally, the field variables u(\mathit{\mu}) that link the input parameters to the output depend on the selected PDE models and may represent temperature or concentration, displacements, potential functions, distribution functions, velocity or pressure. We thus arrive at an input-output relationship \mathit{\mu}\to s(\mathit{\mu}), whose evaluation requires the solution of a parametrized PDE.
The reduced basis methodology we recall in this paper is motivated by, and applied within, two particular contexts: the real-time context (for example, in-the-field robust parameter estimation, or non-destructive evaluation) and the many-query context (for example, design or shape optimization, optimal control or multi-model/scale simulation). Both are crucial in view of a more widespread application of numerical methods for PDEs in engineering practice and more specific industrial processes. They also pose a remarkable challenge to classical numerical techniques, such as, but not limited to, the finite element (FE) method; in fact, classical FE approximations may require a large computational effort (and careful data/memory management) when the dimension \mathcal{N} of the discretisation space becomes large. This makes both real-time and many-query simulations unaffordable: hence, seeking computational efficiency in numerical methods becomes mandatory. The real-time and many-query contexts are often much better served by a model reduction technique such as the reduced basis approximations and associated a posteriori error bound estimation reviewed in this work. We note, however, that the RB methods do not replace, but rather build upon and are measured (as regards accuracy) relative to, a finite element model: the reduced basis approximates not the exact solution but rather a ‘given’ finite element discretization of (typically) very large dimension \mathcal{N}, indicated as a high-fidelity truth approximation. In short, we promote an algorithmic collaboration rather than a computational competition between RB and FE methods.
In this paper we shall focus on the case of linear functional outputs of affinely parametrized linear elliptic and parabolic coercive partial differential equations. This kind of problem, relatively simple yet relevant to many important applications in transport (for example, steady/unsteady conduction, convection-diffusion), mass transfer, and more generally in continuum mechanics, proves a convenient expository vehicle for the methodology, with the aim of stressing the potential impact on possible industrial applications dealing with the optimization of devices and/or processes, diagnosis, and control.
We provide here a short table of contents for the remainder of this review paper. For a wider framework on the position of the reduced basis method compared with other reduced order modelling (ROM) techniques, and on their current developments and trends, see [1]. After a brief historical excursus, we present in Section 2 the state of the art of the reduced basis method, presenting the essential components of this approach. We describe the affine linear elliptic and parabolic coercive settings in Section 3, discussing briefly the admissible classes of piecewise-affine geometries and coefficients. In Sections 4 and 5 we present the essential components of the reduced basis method: RB Galerkin projection and optimality; greedy sampling procedures; an Offline-Online computational stratagem. In Section 6 we recall rigorous and relatively sharp a posteriori error bounds for RB approximations of field variables and outputs of interest. In Section 7 we briefly discuss several extensions of the methodology to more general and difficult classes of problems and applications, while in Section 8 we introduce three ‘working examples’ which shall serve to illustrate the RB formulation and its potential. In the final Section 9 we provide some future perspectives.
Although this paper focuses only on the affine linear elliptic and parabolic coercive cases, so as to allow the reader to grasp all the main ingredients, the reduced basis approximation and associated a posteriori error estimation methodology is much more general; nevertheless, many problems can be successfully tackled even in the simplest affine case.
2 State of the art of the methodology
In this section we review the current landscape, starting from a brief historical excursus; we then introduce the essential RB ingredients and provide several references for further inquiry.
2.1 Computational opportunities and collaborations
The development of the reduced basis methodology can be viewed as a response to the issues described before, aiming at a significant computational reduction and an improvement in computational performance. However, the parametric real-time and many-query contexts also represent computational opportunities, since an important role in the RB paradigm and computational stratagem is played by the parametric setting. In particular:
(i) Our attention is restricted to a typically smooth and rather low-dimensional parametrically induced manifold \mathcal{M}, spanned by the set of fields engendered as the input varies over the parameter domain: for example, in the elliptic case
where X is a suitable functional space. Clearly, generic approximation spaces are unnecessarily rich, and hence unnecessarily expensive, within the parametric framework. Our approach is premised upon a classical finite element method ‘truth approximation’ space {X}^{\mathcal{N}}\subset X of (typically very large) dimension \mathcal{N}; the RB method consists in a low-order approximation of the ‘truth’ manifold {\mathcal{M}}^{\mathcal{N}} (see Figure 1) given by
Several classical RB proposals focus on the truth manifold {\mathcal{M}}^{\mathcal{N}}; much of what we present shall be relevant to any of these reduced basis spaces/approximations.
(ii) Under suitable assumptions, the parametric setting enables us to decouple the computational effort into two stages: a very extensive (parameter-independent) preprocessing, performed Offline once, that prepares the way for subsequent very inexpensive calculations, performed Online for each new input-output evaluation required. In the real-time or many-query contexts, where the goal is to achieve a very low marginal cost per input-output evaluation, we can accept an increased ‘Offline’ cost, not tolerable for a single or few evaluations, in exchange for a greatly decreased ‘Online’ cost for each new/additional input-output evaluation.
2.2 A brief historical path
Reduced basis discretization is, in brief, a Galerkin projection onto an N-dimensional approximation space that focuses on the parametrically induced manifold {\mathcal{M}}^{\mathcal{N}}. We restrict our attention to Lagrange reduced basis spaces, which use previously computed and stored ‘snapshot’ FE solutions of the PDE, corresponding to certain (properly selected) parameter values, as global approximation basis functions; other possible approaches, such as Taylor [4] or Hermite spaces [5], also take into account partial derivatives of these basis solutions.
Initial ideas grew out of two related research topics dealing with linear/nonlinear structural analysis in the late 1970s: the need for more effective many-query design evaluation and for more efficient parameter continuation methods [6–8]. The first works presented in these early, somewhat domain-specific contexts were soon extended to (i) general finite-dimensional systems as well as certain classes of ODEs/PDEs [9–12], and (ii) a variety of different reduced basis approximation spaces, in particular Taylor and Lagrange, and more recently Hermite, expansions. The next decade saw further expansion into different applications and classes of equations, such as fluid dynamics and, more specifically, the incompressible Navier-Stokes equations [13–16].
However, in these early methods the approximation spaces tended to be rather local and typically low-dimensional in parameter (often a single physical parameter), due also to the absence of a posteriori error estimators and effective sampling procedures. It is clear that in higher-dimensional parameter domains the ad hoc reduced basis predictions ‘far’ from any sample point cannot necessarily be trusted; hence a posteriori error estimators combined with efficient parametric space exploration techniques are crucial to guarantee reliability, accuracy and efficiency.
Much effort in the last ten years within the RB framework has thus been devoted to the development of (i) a posteriori error estimation procedures, in particular rigorous error bounds for outputs of interest, and (ii) effective sampling strategies, in particular for higher-dimensional parameter domains [17, 18]. The a posteriori error bounds are of course mandatory for the rigorous certification of any particular RB Online output prediction. Moreover, an a priori theory for RB approximations is also available, dealing with a class of single-parameter coercive problems [19] and more recently extended to the multi-parameter case [20].
However, the error estimators also play an important role in effective (greedy) sampling procedures [1, 18]: they allow us to explore the parameter domain efficiently in search of the most representative ‘snapshots’, and to determine when we have just enough basis functions. We note here that greedy sampling methods are similar in objective to, but very different in approach from, the better-known Proper Orthogonal Decomposition (POD) methods [21]; the former are usually applied in the (multi-dimensional) parameter domain, while the latter are most often applied in the (one-dimensional) temporal domain. An efficient combination of the two techniques, greedy-POD in parameter-time, has been proposed [22, 23] and is currently used for the treatment of parabolic problems [24]; see Section 5.2.
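To fix ideas on the POD side of this comparison, the following is a minimal, generic sketch (not the implementation used in the works cited above) of how a POD basis is extracted from a snapshot matrix via the singular value decomposition, retaining the modes that capture all but a prescribed fraction of the snapshot ‘energy’; all names and sizes are illustrative.

```python
import numpy as np

def pod_basis(S, tol=1e-8):
    """Extract a POD basis from a snapshot matrix S (one snapshot per column).

    Keeps the leading left singular vectors whose singular values carry all
    but a fraction `tol` of the total energy (sum of squared singular values).
    """
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    N = int(np.searchsorted(energy, 1.0 - tol)) + 1  # smallest N reaching the target
    return U[:, :N]

# Synthetic snapshots that span only a 2-dimensional subspace of R^100:
rng = np.random.default_rng(0)
modes = rng.standard_normal((100, 2))
coeffs = rng.standard_normal((2, 20))
S = modes @ coeffs
Z = pod_basis(S)   # orthonormal POD basis; here it recovers the 2 modes
```

In the greedy alternative, by contrast, snapshots are added one at a time at the parameter value where an a posteriori error bound is largest, so that only N truth solves (rather than a full snapshot set) are needed.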
Concerning instead computational reduction and decoupling stratagems, early work on the RB method certainly exploited, but not fully, the Offline-Online procedure. In particular, early RB approaches did not fully decouple the underlying FE approximation, with a space of very high dimension \mathcal{N}, from the subsequent reduced basis projection and evaluation, of very low dimension N. Consequently, the computational savings provided by the RB treatment (relative to classical FE evaluation) were typically rather modest [4, 7, 10], despite the very small size of the RB linear systems. Much work has thus been devoted to the full decoupling of the FE and RB spaces through Offline-Online procedures, above all concerning efficient a posteriori error estimation: the complexity of the Offline stage depends on \mathcal{N}; the complexity of the Online stage, solution and/or output evaluation for a new value of μ, depends only on N and Q (used to measure the parametric complexity of the operator and data, as defined below). In this way, in the Online stage we can reach the accuracy of a high-fidelity FE model but at the very low cost of a reduced-order model.
In the context of affine parameter dependence, in which the operator is expressible as the sum of Q products of parameter-dependent functions and parameter-independent operators (see Section 3), the Offline-Online idea is quite self-apparent and has been naturally exploited [16, 25] and extended more recently in order to obtain efficient a posteriori error estimation. In the case of non-affine parameter dependence the development of Offline-Online strategies is even more challenging, and only in the last few years have effective procedures been studied and applied [26] to allow more complex parametrizations; clearly, Offline-Online procedures are an important element in both the real-time and the many-query contexts. We recall that, historically [9], RB methods have also been built upon, and measured (as regards accuracy) relative to, underlying finite element discretizations. However, spectral element approaches [27, 28], finite volume methods [22], and other traditional discretization methods may be considered too.
2.3 Essential RB components
The essential components of the reduced basis method, which will be analyzed in detail in the next sections, can be summarized as follows.

(i)
Rapidly convergent global reduced basis (RB) approximations: (Galerkin) projection onto a (Lagrange) space {X}_{N}^{\mathcal{N}} spanned by solutions of the governing partial differential equation at the N (optimally) selected points of the sample {S}_{N} in the parameter set \mathcal{D}. Typically, N will be small, as we focus attention on the (smooth) low-dimensional parametrically induced manifold of interest. The RB approximations to the field variable and output will be denoted {u}_{N}(\mathit{\mu}) and {s}_{N}(\mathit{\mu}), respectively.

(ii)
Rigorous a posteriori error estimation procedures that provide inexpensive yet sharp bounds for the error in the RB field-variable approximation, {u}_{N}(\mathit{\mu}), and output(s) approximation, {s}_{N}(\mathit{\mu}). Our error indicators are rigorous upper bounds for the error (relative to the FE truth field {u}^{\mathcal{N}}(\mathit{\mu}) and output {s}^{\mathcal{N}}(\mathit{\mu})=l({u}^{\mathcal{N}}(\mathit{\mu})) approximations, respectively) for all \mathit{\mu}\in \mathcal{D} and for all N. Error estimators are also employed during the greedy procedure [1] to construct optimal RB samples/spaces ensuring an efficient and well-conditioned RB approximation.

(iii)
Offline/Online computational procedures: decomposition stratagems which decouple the generation and projection stages of the RB approximation. A very extensive (μ-independent) preprocessing, performed Offline once, prepares the way for the subsequent inexpensive calculations performed Online for each new input-output evaluation required.
3 Elliptic & parabolic parametric PDEs
We introduce the formulation of affinely parametrized linear elliptic/parabolic coercive problems; the methodology addressed in this work is intended for heat and mass convection/conduction problems. For the sake of simplicity, we consider only compliant outputs, referring to Section 7 for the treatment of general (non-compliant) outputs and the extensions to other classes of equations.
3.1 Elliptic coercive parametric PDEs
We consider the following problem: Given \mathit{\mu}\in \mathcal{D}\subset {\mathbb{R}}^{p}, evaluate the output of interest
where u(\mathit{\mu})\in X(\Omega ) satisfies
Ω is a suitably regular bounded spatial domain in {\mathbb{R}}^{d} (for d=2 or 3), X=X(\Omega ) is a suitable Hilbert space; a(\cdot ,\cdot ;\mathit{\mu}) and f(\cdot ;\mathit{\mu}) are the bilinear and linear forms, respectively, associated with the PDE. We shall exclusively consider second-order PDEs, and hence {({H}_{0}^{1}(\Omega ))}^{\nu}\subset X(\Omega )\subset {({H}^{1}(\Omega ))}^{\nu}, where \nu =1 (respectively, \nu =d) for a scalar (respectively, vector) field; here {L}^{2}(\Omega ) is the space of square integrable functions over Ω, {H}^{1}(\Omega )=\{v\in {L}^{2}(\Omega ):\nabla v\in {({L}^{2}(\Omega ))}^{d}\}, and {H}_{0}^{1}(\Omega )=\{v\in {H}^{1}(\Omega ):v{|}_{\partial \Omega}=0\}. We denote by {(\cdot ,\cdot )}_{X} the inner product associated with the Hilbert space X, whose induced norm \parallel \cdot {\parallel}_{X}=\sqrt{{(\cdot ,\cdot )}_{X}} is equivalent to the usual {({H}^{1}(\Omega ))}^{\nu} norm. Similarly, (\cdot ,\cdot ) and \parallel \cdot \parallel denote the {L}^{2}(\Omega ) inner product and induced norm, respectively.
We shall assume that the bilinear form a(\cdot ,\cdot ;\mathit{\mu}):X\times X\to \mathbb{R} is continuous and coercive over X for all μ in \mathcal{D}, that is,
Finally, f(\cdot ) and \ell (\cdot ) are linear continuous functionals over X; we assume, solely for simplicity of exposition, that f and ℓ are independent of μ. Under these standard hypotheses on a and f, (3) admits a unique solution. For the sake of simplicity,^{1} we shall further presume for most of this paper that we are in the ‘compliant’ case [1]. In particular, we assume that (i) a is symmetric, a(w,v;\mathit{\mu})=a(v,w;\mathit{\mu}), \forall w,v\in X, \forall \mathit{\mu}\in \mathcal{D}, and furthermore (ii) \ell =f. We shall make one last assumption, crucial to the Offline-Online procedures: the parametric bilinear form a is ‘affine’ in the parameter μ, that is, for some finite {Q}_{a}, a(\cdot ,\cdot ;\mathit{\mu}) can be expressed as
for given smooth μ-dependent functions {\Theta}_{a}^{q}, 1\le q\le {Q}_{a}, and continuous μ-independent bilinear forms {a}^{q}, 1\le q\le {Q}_{a} (in the compliant case the {a}^{q} are additionally symmetric). Under this assumption, {\mathcal{M}}^{\mathcal{N}} defined by (1) lies on a smooth p-dimensional manifold in {X}^{\mathcal{N}}. In actual practice, f may also depend affinely on the parameter: in this case, f(v;\mathit{\mu}) may be expressed as a sum of {Q}^{f} products of μ-dependent functions and μ-independent X-bounded linear forms. As we shall see in the following, the assumption of affine parameter dependence is broadly relevant to many instances of both property and geometry parametric variation. Nevertheless, this assumption may be relaxed [26], as detailed in Section 7.
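In discrete form, the affine assumption means the assembled matrix splits into μ-independent blocks weighted by scalar functions of μ. The following is a minimal sketch with hypothetical data: two tiny matrices stand in for the forms a^q assembled once Offline, and the Online assembly is a Q_a-term weighted sum whose cost is independent of the truth dimension.

```python
import numpy as np

# Hypothetical affine decomposition with Q_a = 2: a "diffusion" block scaled
# by Theta^1(mu) = mu_1 and a "reaction" block scaled by Theta^2(mu) = 1/mu_2.
# The matrices A_q stand in for the mu-independent forms a^q, assembled once
# Offline on the truth FE space (here 2x2 for illustration only).
A_q = [np.array([[2.0, -1.0],
                 [-1.0, 2.0]]),     # a^1: mu-independent stiffness-like block
       np.eye(2)]                   # a^2: mu-independent mass-like block
theta_q = [lambda mu: mu[0],        # Theta_a^1(mu)
           lambda mu: 1.0 / mu[1]]  # Theta_a^2(mu)

def assemble_a(mu):
    """Online assembly of a(.,.;mu) = sum_q Theta_a^q(mu) a^q(.,.);
    the cost depends on Q_a and the matrix size, not on the parameter."""
    return sum(th(mu) * Aq for th, Aq in zip(theta_q, A_q))

A_mu = assemble_a([3.0, 2.0])
```

The same pattern applies verbatim to an affinely parametrized right-hand side f(v;μ).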
3.2 Parabolic coercive parametric PDEs
We also consider the following parabolic model problem: Given \mathit{\mu}\in \mathcal{D}\subset {\mathbb{R}}^{p}, evaluate the output of interest
where u(\mathit{\mu})\in {C}^{0}(I;{L}^{2}(\Omega ))\cap {L}^{2}(I;X) is such that
subject to the initial condition u(0;\mathit{\mu})={u}_{0}\in {L}^{2}(\Omega ); g(t)\in {L}^{2}(I) is a control function. In addition to the previous assumptions (4)-(6), we shall assume that a(\cdot ,\cdot ;\mathit{\mu}), which represents convection and diffusion, is time-invariant; moreover, m(\cdot ,\cdot ;\mathit{\mu}), which represents ‘mass’ or inertia, is assumed to be time-invariant, symmetric, continuous and coercive over {L}^{2}(\Omega ), with coercivity constant
Finally, we assume that also m(\cdot ,\cdot ;\mathit{\mu}) is ‘affine in parameter’, that is, it can be expressed as
for given smooth parameter-dependent functions {\Theta}_{m}^{{q}^{\prime}}, 1\le {q}^{\prime}\le {Q}_{m}, and continuous parameter-independent bilinear forms {m}^{{q}^{\prime}}, 1\le {q}^{\prime}\le {Q}_{m}, for a suitable integer {Q}_{m}.
3.3 Parametrized formulation
We now describe a general class, though not the most general one, of elliptic and parabolic problems which honors the hypotheses previously introduced; for simplicity we consider a scalar field (\nu =1) in two space dimensions (d=2). We shall first define an ‘original’ problem (subscript o), posed over the parameter-dependent domain {\Omega}_{o}={\Omega}_{o}(\mathit{\mu}); we denote by {X}_{o}(\mathit{\mu}) a suitable Hilbert space defined on {\Omega}_{o}(\mathit{\mu}). In the elliptic case, the original problem reads as follows: Given \mathit{\mu}\in \mathcal{D}, evaluate
where {u}_{o}(\mathit{\mu})\in {X}_{o}(\mathit{\mu}) satisfies
In the same way, for the parabolic case we have: Given \mathit{\mu}\in \mathcal{D}, evaluate
where {u}_{o}(\mathit{\mu})\in {C}^{0}(I;{L}^{2}({\Omega}_{o}(\mathit{\mu})))\cap {L}^{2}(I;{X}_{o}(\mathit{\mu})) satisfies
The RB framework requires a reference (μ-independent) domain Ω in order to compare, and combine, FE solutions that would otherwise be computed on different domains and grids. For this reason, we need to map {\Omega}_{o}(\mathit{\mu}) to a reference domain \Omega ={\Omega}_{o}({\mathit{\mu}}_{\mathrm{ref}}), {\mathit{\mu}}_{\mathrm{ref}}\in \mathcal{D}, in order to obtain the ‘transformed’ problem (2)-(3) or (7)-(8), which is the point of departure of the RB approach, for the elliptic and parabolic case, respectively. The reference domain Ω is thus related to the original domain {\Omega}_{o}(\mathit{\mu}) through a parametric mapping T(\cdot ;\mathit{\mu}), such that {\Omega}_{o}(\mathit{\mu})=T(\Omega ;\mathit{\mu}). It remains to place some restrictions on both the geometry (that is, on {\Omega}_{o}(\mathit{\mu})) and the operators (that is, {a}_{o}, {m}_{o}, {f}_{o}, {l}_{o}) such that (upon mapping) the transformed problem satisfies the hypotheses introduced above, in particular the affinity assumptions (6), (10). To this aim, a domain decomposition is useful [1].
We first consider the class of admissible geometries. In order to build a parametric mapping related to geometrical properties, we introduce a conforming domain decomposition of {\Omega}_{o}(\mathit{\mu}),
consisting of mutually nonoverlapping open subdomains {\Omega}_{o}^{l}(\mathit{\mu}), such that {\Omega}_{o}^{l}(\mathit{\mu})\cap {\Omega}_{o}^{{l}^{\prime}}(\mathit{\mu})=\varnothing, 1\le l<{l}^{\prime}\le {L}_{\mathrm{dom}}. When the input parameters are related to geometrical properties (for example, lengths, thicknesses, diameters or angles), the definition of the parametric mappings can be done in a quite intuitive fashion.^{2} In the following we will identify {\Omega}^{l}={\Omega}_{o}^{l}({\mathit{\mu}}_{\mathrm{ref}}), 1\le l\le {L}_{\mathrm{dom}}, and denote (11) as the ‘RB triangulation’; it will play an important role in the generation of the affine representation (6), (10). Hence, original and reference subdomains must be linked via mappings T(\cdot ;\mathit{\mu}):{\Omega}^{l}\to {\Omega}_{o}^{l}(\mathit{\mu}), 1\le l\le {L}_{\mathrm{dom}}, such that
these maps must be individually bijective, collectively continuous, and such that {T}^{l}(\mathbf{x};\mathit{\mu})={T}^{{l}^{\prime}}(\mathbf{x};\mathit{\mu}), \forall \mathbf{x}\in {\overline{\Omega}}^{l}\cap {\overline{\Omega}}^{{l}^{\prime}}, for 1\le l<{l}^{\prime}\le {L}_{\mathrm{dom}}.
Here we consider the affine case, where the transformation is given, for any \mathit{\mu}\in \mathcal{D} and \mathbf{x}\in {\Omega}^{l}, by
for given translation vectors {\mathbf{C}}^{l}:\mathcal{D}\to {\mathbb{R}}^{d} and linear transformation matrices {\mathbf{G}}^{l}:\mathcal{D}\to {\mathbb{R}}^{d\times d}. The linear transformation matrices can effect rotation, scaling and/or shear and have to be invertible. The associated Jacobians can be defined as {J}^{l}(\mathit{\mu})=det({\mathbf{G}}^{l}(\mathit{\mu})), 1\le l\le {L}_{\mathrm{dom}}.
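As a concrete (purely illustrative) instance, the sketch below applies one such affine map on a single subdomain, with a hypothetical stretching parameter μ along the first coordinate; G^l, C^l and the point are made-up stand-ins, not data from the paper.

```python
import numpy as np

def affine_map(x, G, C):
    """Apply T^l(x; mu) = G^l(mu) x + C^l(mu) on one subdomain."""
    return G @ x + C

mu = 2.0                          # hypothetical geometric parameter
G_l = np.array([[mu, 0.0],
                [0.0, 1.0]])      # scaling along the first coordinate
C_l = np.array([0.5, 0.0])        # translation vector C^l(mu)
x_ref = np.array([1.0, 1.0])      # point in the reference subdomain Omega^l
x_orig = affine_map(x_ref, G_l, C_l)
J_l = np.linalg.det(G_l)          # Jacobian J^l(mu) = det(G^l(mu))
```

The invertibility requirement on G^l translates into J^l(μ) ≠ 0, and J^l is exactly the factor that will rescale the integrals when the forms are traced back to the reference domain.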
We next introduce the class of admissible operators. We may consider the associated bilinear forms
where {\mathbf{K}}_{o,l}:\mathcal{D}\to {\mathbb{R}}^{3\times 3}, 1\le l\le {L}_{\mathrm{dom}}, are prescribed coefficients.^{3} In the parabolic case, we also may consider
where {\mathbf{M}}_{o,l}:\mathcal{D}\to \mathbb{R}, 1\le l\le {L}_{\mathrm{dom}}, are prescribed coefficients. Similarly, we require that {f}_{o}(\cdot ) and {l}_{o}(\cdot ) be written as
where {F}_{o,l}:\mathcal{D}\to \mathbb{R} and {L}_{o,l}:\mathcal{D}\to \mathbb{R}, for 1\le l\le {L}_{\mathrm{dom}}, are prescribed coefficients. By identifying u(\mathit{\mu})={u}_{o}(\mathit{\mu})\circ T(\cdot ;\mathit{\mu}) in the elliptic case (resp. u(t;\mathit{\mu})={u}_{o}(t;\mathit{\mu})\circ T(\cdot ;\mathit{\mu}), \forall t>0, in the parabolic case), and tracing (14) back onto the reference domain Ω by the mapping T(\cdot ;\mathit{\mu}), it follows that the transformed bilinear form a(\cdot ,\cdot ;\mathit{\mu}) can be expressed as
where {\mathbf{K}}_{l}:\mathcal{D}\to {\mathbb{R}}^{3\times 3}, 1\le l\le {L}_{\mathrm{dom}}, is a parametrized tensor given by
and {\mathbf{G}}^{l}:\mathcal{D}\to {\mathbb{R}}^{3\times 3} is given by
In the same way, the transformed bilinear form m(\cdot ,\cdot ;\mathit{\mu}) can be expressed as
where {\mathbf{M}}_{l}:\mathcal{D}\to \mathbb{R}, 1\le l\le {L}_{\mathrm{dom}}, {\mathbf{M}}_{l}(\mathit{\mu})={J}^{l}(\mathit{\mu}){\mathbf{M}}_{o,l}(\mathit{\mu}). The transformed linear forms can be expressed similarly as
where {F}_{l}:\mathcal{D}\to \mathbb{R} and {L}_{l}:\mathcal{D}\to \mathbb{R} are given by {F}_{l}(\mathit{\mu})={J}^{l}(\mathit{\mu}){F}_{o,l}(\mathit{\mu}), {L}_{l}(\mathit{\mu})={J}^{l}(\mathit{\mu}){L}_{o,l}(\mathit{\mu}), for 1\le l\le {L}_{\mathrm{dom}}. Hence, the original problem has been reformulated on a reference configuration, resulting in a parametrized problem where the effect of geometry variations is traced back onto its parametrized transformation tensors. The affine formulation (6) (resp. (6) and (10)) can then be derived by simply expanding the expression (16) (and (17)) in terms of the subdomains {\Omega}^{l} and the different entries of {K}_{ij}^{l}. This results, for example, in
The affine representation is now clear: for each term in (18) the (parameter-independent) integral represents {a}^{q}(w,v), while the (parameter-dependent) prefactor represents {\Theta}^{q}(\mathit{\mu}); the bilinear form m admits a similar treatment. The process by which we map the original problem to the transformed problem can be largely automated [1]. There are many ways in which we can relax the given assumptions and thus treat an even broader class of problems; for example, we may consider ‘elliptical’ or ‘curvy’ triangular subdomains [1]; we may consider non-time-invariant bilinear forms a and m; we may consider coefficient functions K, M which are polynomial in the spatial coordinate (or, more generally, approximated by the Empirical Interpolation Method [26]). Some generalizations will be addressed in Section 7 and can be pursued by modification of the method presented in Section 4: in general, increased complexity in geometry and operator will result in more terms in the affine expansions (larger Q), with a corresponding increase in the reduced basis (Online) computational costs.
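The bookkeeping behind this expansion can be illustrated as follows: each structurally nonzero entry K^l_ij of each subdomain tensor contributes one prefactor Θ^q(μ) and one μ-independent form a^q. The tensors below are hypothetical stand-ins (the real ones come from (15)), and the structural-nonzero probe is a simplification for illustration.

```python
import numpy as np

def list_affine_terms(K_of, L_dom, dim, mu_probe):
    """Return the (l, i, j) triples of structurally nonzero tensor entries,
    probing at a sample parameter mu_probe; each triple indexes one affine
    term: prefactor Theta^q(mu) = K^l_ij(mu), form a^q(w,v) = the integral of
    dw/dx_i * dv/dx_j over the reference subdomain Omega^l."""
    return [(l, i, j)
            for l in range(L_dom)
            for i in range(dim)
            for j in range(dim)
            if K_of(mu_probe, l)[i, j] != 0.0]

# Made-up tensors: subdomain 0 has K = mu * I, subdomain 1 has K = diag(1, 1/mu).
K_of = lambda mu, l: mu * np.eye(2) if l == 0 else np.diag([1.0, 1.0 / mu])
terms = list_affine_terms(K_of, L_dom=2, dim=2, mu_probe=1.5)
Q_a = len(terms)   # number of affine terms generated by the expansion
```

This makes concrete the remark above: richer geometry or coefficients populate more tensor entries and subdomains, hence a larger Q and a proportionally more expensive Online stage.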
4 The reduced basis method
We discuss in this section all the details related to the construction of the reduced basis approximation in both the elliptic and the parabolic case, for rapid and reliable prediction of engineering outputs associated with parametrized PDEs.
4.1 Elliptic case
We assume that we are given a FE approximation space {X}^{\mathcal{N}} of (typically very large) dimension \mathcal{N}. Hence, the FE discretization of problem (2)-(3) [29, 30] reads as follows: given \mathit{\mu}\in \mathcal{D}, evaluate
where {u}^{\mathcal{N}}(\mathit{\mu})\in {X}^{\mathcal{N}} satisfies
We then introduce, given a positive integer {N}_{\mathrm{max}}, an associated sequence of (what shall ultimately be reduced basis) approximation spaces: for N=1,\dots ,{N}_{\mathrm{max}}, {X}_{N}^{\mathcal{N}} is an N-dimensional subspace of {X}^{\mathcal{N}}; we further suppose that these spaces are nested (or hierarchical), that is, {X}_{1}^{\mathcal{N}}\subset {X}_{2}^{\mathcal{N}}\subset \cdots \subset {X}_{{N}_{\mathrm{max}}}^{\mathcal{N}}\subset {X}^{\mathcal{N}}; this condition is fundamental in ensuring the (memory) efficiency of the resulting RB approximation. We recall from Section 2 that there are several classical RB proposals, Taylor, Lagrange, and Hermite spaces, as well as many different approaches, such as POD spaces. Although we focus on Lagrange RB spaces, much of what is presented in this paper, in particular concerning the discrete formulation, Offline-Online procedures and a posteriori error estimation, is relevant to any of these RB spaces/approximations, even when they are not of immediate application in industrial problems (where we want to preserve the Offline-Online procedure and hierarchical spaces).
In order to define a (hierarchical) sequence of Lagrange spaces {X}_{N}^{\mathcal{N}}, 1\le N\le {N}_{\mathrm{max}}, we first introduce a ‘master set’ of properly selected parameter points {\mathit{\mu}}^{n}\in \mathcal{D}, 1\le n\le {N}_{\mathrm{max}}. We then define, for given N\in \{1,\dots ,{N}_{\mathrm{max}}\}, the Lagrange parameter samples
and associated Lagrange RB spaces
the {u}^{\mathcal{N}}({\mathit{\mu}}^{n}), 1\le n\le {N}_{\mathrm{max}}, are often referred to as ‘(retained) snapshots’ of the parametric manifold {\mathcal{M}}^{\mathcal{N}} and are obtained by solving the FE problem (19) for {\mathit{\mu}}^{n}, 1\le n\le {N}_{\mathrm{max}}. It is clear that, if indeed the manifold is low-dimensional and smooth, then we would expect to approximate well any member of the manifold, that is, any solution {u}^{\mathcal{N}}(\mathit{\mu}) for some μ in \mathcal{D}, in terms of relatively few retained snapshots. However, we must ensure that we can: choose a good combination of the available retained snapshots; represent the retained snapshots in a stable RB basis and efficiently obtain the associated RB basis coefficients; and finally choose the retained snapshots (that is, the sample {S}_{{N}_{\mathrm{max}}}) in an optimal way. The sampling strategy used to build the set {S}_{N} will be discussed in Section 5.
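The nestedness of the Lagrange spaces has a simple concrete reading: if the snapshots are stored columnwise in the order in which their parameters were selected, then X_N is spanned by the first N columns. A minimal sketch, with random vectors standing in for actual FE snapshots:

```python
import numpy as np

# Hypothetical snapshot matrix: column n holds the truth snapshot u(mu^n) at
# the n-th selected parameter (random stand-ins here, not actual FE solves).
rng = np.random.default_rng(1)
N_fe, N_max = 50, 5                # truth dimension, max RB dimension
snapshots = rng.standard_normal((N_fe, N_max))

def rb_space(N):
    """Basis (as columns) of the Lagrange space X_N, N <= N_max; since it is
    a prefix of the same matrix, X_1 subset X_2 subset ... by construction."""
    return snapshots[:, :N]
```

This prefix structure is what makes the hierarchy memory-efficient: enriching the space from N to N+1 stores one new column and never recomputes the old ones.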
4.1.1 Galerkin projection
For our particular class of equations, Galerkin projection is arguably the best approach. Given \mathit{\mu}\in \mathcal{D}, evaluate (recalling the compliance assumption)
where {u}_{N}^{\mathcal{N}}(\mathit{\mu})\in {X}_{N}^{\mathcal{N}}\subset {X}^{\mathcal{N}} (or more precisely, {u}_{{X}_{N}^{\mathcal{N}}}^{\mathcal{N}}(\mathit{\mu})\in {X}_{N}^{\mathcal{N}}) satisfies
We immediately obtain the classical optimality result in the energy norm:^{4}
in the energy norm, the Galerkin procedure automatically selects the best combination of snapshots; moreover, we have that
that is, the output converges as the ‘square’ of the energy error. Although this latter result depends critically on the compliance assumption, extension via adjoint approximations to the noncompliant case is possible; we discuss this further in Section 7.
We now consider the discrete equations associated with the Galerkin approximation (23). First of all, we apply the Gram-Schmidt process with respect to the {(\cdot ,\cdot )}_{X} inner product to the snapshots {u}^{\mathcal{N}}({\mathit{\mu}}^{n}), 1\le n\le {N}_{\mathrm{max}}, to obtain mutually {(\cdot ,\cdot )}_{X}-orthonormal basis functions {\zeta}_{n}^{\mathcal{N}}, 1\le n\le {N}_{\mathrm{max}}. Then, the RB solution can be expressed as:
by taking v={\zeta}_{n}^{\mathcal{N}}, 1\le n\le N, in (23) and using (26), we obtain the RB 'stiffness' equations
for the RB coefficients {u}_{Nm}^{\mathcal{N}}(\mathit{\mu}), 1\le m,n\le N; we can subsequently evaluate the RB output as
4.1.2 Offline-Online procedure
The system (27) is nominally of small size: a set of N linear algebraic equations in N unknowns. However, the formation of the stiffness matrix, and indeed the load vector, involves entities {\zeta}_{n}^{\mathcal{N}}, 1\le n\le N, associated with our \mathcal{N}-dimensional FE approximation space. Fortunately, we can appeal to the affine parameter dependence to construct very efficient Offline-Online procedures. In particular, system (27) can be expressed, thanks to (6), as
for 1\le n\le N. The equivalent matrix form is
where {({\mathbf{u}}_{N}(\mathit{\mu}))}_{m}={u}_{Nm}^{\mathcal{N}}(\mathit{\mu}) and
for 1\le m,n\le {N}_{\mathrm{max}}. Since each basis function {\zeta}_{n}^{\mathcal{N}} belongs to the FE space {X}^{\mathcal{N}}, it can be written as
that is, as a linear combination of the FE basis functions {\{{\varphi}_{i}\}}_{i=1}^{\mathcal{N}}; therefore, the RB 'stiffness' matrix can be computed directly from the corresponding FE 'stiffness' matrix. Then, by denoting
we have that
where
are the structures arising from the FE discretization. In this way, the computation entails an expensive μ-independent Offline stage, performed only once, and an inexpensive Online stage for each chosen parameter value \mathit{\mu}\in \mathcal{D}. During the former, the FE structures {\{{\mathcal{A}}_{\mathcal{N}}^{q}\}}_{q=1}^{{Q}_{a}} and {\mathcal{F}}_{\mathcal{N}}, as well as the snapshots {\{{u}^{\mathcal{N}}({\mathit{\mu}}^{n})\}}_{n=1}^{{N}_{\mathrm{max}}} and the corresponding orthonormal basis {\{{\zeta}_{n}^{\mathcal{N}}\}}_{n=1}^{{N}_{\mathrm{max}}}, are computed and stored. In the latter, for any given μ, all the coefficients {\Theta}_{q}^{a}(\mathit{\mu}) are evaluated, and the N\times N linear system (29) is assembled and solved to obtain the RB approximation {u}_{N}^{\mathcal{N}}(\mathit{\mu}). Then, the RB output approximation is obtained through the simple scalar product (37). Although dense (rather than sparse, as in the FE case), the system matrix is very small, with a size independent of the FE space dimension \mathcal{N}.
The Online operation count is O(Q{N}^{2}) to assemble and O({N}^{3}) to invert the matrix in (29), and finally O(N) to effect the inner product (37). The Online storage is, thanks to the hierarchy assumption, only O(Q{N}_{\mathrm{max}}^{2})+O({N}_{\mathrm{max}}): for any given N, we may extract the necessary N\times N RB matrices (respectively, N-vectors) as principal submatrices (respectively, principal subvectors) of the corresponding {N}_{\mathrm{max}}\times {N}_{\mathrm{max}} matrices (respectively, {N}_{\mathrm{max}}-vectors). The Online (marginal) cost (operation count and storage) to evaluate \mathit{\mu}\to {s}_{N}^{\mathcal{N}}(\mathit{\mu}) is thus independent of \mathcal{N} (see Figure 2).
4.2 Parabolic case
We next introduce the finite difference in time and finite element (FE) in space discretization [29, 30] of the parabolic problem (8). We first divide the time interval I into K subintervals of equal length \Delta t={t}_{f}/K, define {t}^{k}=k\Delta t, 0\le k\le K, and define the FE approximation space {X}^{\mathcal{N}}. Hence, given \mathit{\mu}\in \mathcal{D}, we look for {u}^{\mathcal{N}k}(\mathit{\mu})\in {X}^{\mathcal{N}}, 0\le k\le K, such that
subject to the initial condition ({u}^{\mathcal{N}0},v)=({u}_{0},v), \forall v\in {X}^{\mathcal{N}}. We then evaluate the output (recalling the compliance assumption): for 0\le k\le K
We shall sometimes denote {u}^{\mathcal{N}k}(\mathit{\mu}) as {u}^{\mathcal{N}}({t}^{k};\mathit{\mu}) and {s}^{\mathcal{N}k}(\mathit{\mu}) as {s}^{\mathcal{N}}({t}^{k};\mathit{\mu}) to more clearly identify the discrete time levels. Under the coercivity assumption (9) on the bilinear form a(\cdot ,\cdot ;\mathit{\mu}) and the smoothness assumption on the {\Theta}_{a,m}^{q}(\mathit{\mu}) coefficients,
the analogous entity of (32) in the parabolic case, lies on a smooth (p+1)-dimensional manifold in {X}^{\mathcal{N}}.
Equation (30), the Backward Euler-Galerkin discretization of (8), shall be our point of departure: we presume that Δt is sufficiently small and \mathcal{N} is sufficiently large that {u}^{\mathcal{N}}({t}^{k};\mathit{\mu}) and {s}^{\mathcal{N}}({t}^{k};\mathit{\mu}) are effectively indistinguishable from u({t}^{k};\mathit{\mu}) and s({t}^{k};\mathit{\mu}), respectively. The development readily extends to Crank-Nicolson or higher-order discretizations; for purposes of exposition, we consider the simple Backward Euler approach.
The RB approximation in this case [24, 31] is based on RB spaces {X}_{N}^{\mathcal{N}}, 1\le N\le {N}_{\mathrm{max}}, generated by a sampling procedure which combines spatial snapshots in time and parameter, {u}^{\mathcal{N}k}(\mathit{\mu}), in an optimal fashion (see Section 5). Given \mathit{\mu}\in \mathcal{D}, we now look for {u}_{N}^{k}(\mathit{\mu})\in {X}_{N}^{\mathcal{N}}, 0\le k\le K, such that
subject to ({u}_{N}^{0}(\mathit{\mu}),v)=({u}^{\mathcal{N}0},v), \forall v\in {X}_{N}^{\mathcal{N}}. We then evaluate the associated output: for 0\le k\le K
We shall sometimes denote {u}_{N}^{k}(\mathit{\mu}) as {u}_{N}({t}^{k};\mathit{\mu}) and {s}_{N}^{k}(\mathit{\mu}) as {s}_{N}({t}^{k};\mathit{\mu}) to more clearly identify the discrete time levels. (Note that all the RB quantities should bear a superscript \mathcal{N}, that is, {X}_{N}^{\mathcal{N}}, {u}_{N}^{\mathcal{N}k}(\mathit{\mu}), {s}_{N}^{\mathcal{N}k}(\mathit{\mu}), since the RB approximation is defined in terms of the truth discretization; however, for clarity of exposition, we shall typically suppress this superscript.)
We now develop the algebraic equations associated with (33)-(34). First of all, the RB approximation {u}_{N}^{k}(\mathit{\mu})\in {X}_{N}^{\mathcal{N}} shall be expressed as
given a set of mutually {(\cdot ,\cdot )}_{X} orthogonal basis functions {\zeta}_{n}^{\mathcal{N}}\in {X}^{\mathcal{N}}, 1\le n\le {N}_{\mathrm{max}}, and corresponding (hierarchical) RB spaces
By taking v={\zeta}_{n}^{\mathcal{N}}, 1\le n\le N, in (33) and using (35), we obtain:
for the RB coefficients {u}_{Nm}^{\mathcal{N}}(\mathit{\mu}), 1\le m,n\le N; we can subsequently evaluate the RB output as
The equivalent matrix form is
where {({\mathbf{u}}_{N}^{k}(\mathit{\mu}))}_{m}={u}_{Nm}^{k}(\mathit{\mu}) and
other terms are the same as in the elliptic case (see Sections 4.1.1-4.1.2). Moreover, the RB mass terms can likewise be computed from the FE mass terms as
where {\{{\varphi}_{i}\}}_{i=1}^{\mathcal{N}} is the basis of the FE space {X}^{\mathcal{N}}.
The Offline-Online procedure is now straightforward; in particular, the unsteady case is very similar to the steady case discussed before. There are a few new twists: as regards storage, we must now append to the elliptic Offline dataset an affine development for the mass matrix {\mathbf{M}}_{N}^{q}, 1\le q\le {Q}_{m}, associated with the unsteady term; as regards computational complexity, we must multiply the elliptic operation counts by K to arrive at O(K{N}^{3}) (in fact, O(K{N}^{2}) for a linear time-invariant system) for the Online operation count, where K is the number of time steps (recall that in actual practice the 'truth' is discrete in time). Thus, the Online evaluation of {s}_{N}(\mathit{\mu}) remains independent of \mathcal{N} even in the unsteady case.
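The Online time-marching of the reduced parabolic system can be sketched as follows. This is a minimal illustrative sketch (names are ours): the control is taken constant and equal to one, and since the system is linear time-invariant, the left-hand side matrix is formed and factored once and reused at every step, yielding the O(K{N}^{2}) count mentioned above:

```python
import numpy as np

def rb_time_march(theta_a, theta_m, A_q_rb, M_q_rb, F_rb, u0_rb, dt, K):
    """March the reduced Backward Euler system for a fixed parameter value:

        (1/dt) M_N (u^k - u^{k-1}) + A_N u^k = F_N,   k = 1, ..., K

    (control g(t^k) = 1 assumed for simplicity). Returns the output history.
    """
    A_N = sum(t * Aq for t, Aq in zip(theta_a, A_q_rb))   # O(Q_a N^2) assembly
    M_N = sum(t * Mq for t, Mq in zip(theta_m, M_q_rb))   # O(Q_m N^2) assembly
    lhs_inv = np.linalg.inv(M_N / dt + A_N)               # factor once: O(N^3)
    u, outputs = u0_rb.copy(), []
    for _ in range(K):                                    # O(N^2) per step
        u = lhs_inv @ (M_N @ u / dt + F_rb)
        outputs.append(F_rb @ u)                          # compliant output s_N^k
    return outputs
```

For a scalar reduced system with A_N = 2, M_N = 1, F_N = 1, the iterates converge to the steady state u = F/A = 0.5, as expected.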
5 Sampling strategies
We now review two sampling strategies used for the construction of RB spaces: a greedy procedure for the elliptic case and a combined POD-greedy procedure for the parabolic case. Let us denote by Ξ a finite sample of points in \mathcal{D}, which shall serve as a surrogate for \mathcal{D} in the calculation of errors (and error bounds) over the parameter domain.
5.1 Elliptic case
We denote the particular samples which shall serve to select the RB space, or 'train' the RB approximation, by {\Xi}_{\mathrm{train}}. The cardinality of {\Xi}_{\mathrm{train}} will be denoted |{\Xi}_{\mathrm{train}}|={n}_{\mathrm{train}}. We note that although the 'test' samples Ξ serve primarily to understand and assess the quality of the RB approximation and a posteriori error estimators, the 'train' samples {\Xi}_{\mathrm{train}} serve to generate the RB approximation. The choice of {n}_{\mathrm{train}} and {\Xi}_{\mathrm{train}} thus has important Offline and Online computational implications. Moreover, let us denote by {\epsilon}_{\mathrm{tol}}^{\ast} a chosen tolerance for the stopping criterion of the greedy algorithm.
The greedy sampling strategy can be implemented as follows:
As we shall describe in detail in Section 6, {\Delta}_{N}(\mathit{\mu}) is a sharp, (asymptotically) inexpensive a posteriori error bound for \parallel {u}^{\mathcal{N}}(\mathit{\mu})-{u}_{{X}_{N}^{\mathcal{N}}}^{\mathcal{N}}(\mathit{\mu}){\parallel}_{X}.
Roughly, at iteration N the greedy algorithm appends to the retained snapshots that particular candidate snapshot, over all candidate snapshots {u}^{\mathcal{N}}(\mathit{\mu}), \mathit{\mu}\in {\Xi}_{\mathrm{train}}, which is predicted^{5} by the a posteriori error bound to be least well approximated by the RB prediction associated to {X}_{N-1}^{\mathcal{N}}. We refer to [32] for a general analysis of the greedy algorithm and related convergence rates.
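The greedy loop just described can be sketched as follows. This is a minimal sketch under simplifying assumptions: truth_solve and error_bound are hypothetical callables standing for the expensive FE solver and the inexpensive a posteriori bound {\Delta}_{N}(\mathit{\mu}) of Section 6 (here evaluated against a raw, non-orthonormalized basis for brevity):

```python
import numpy as np

def greedy(train_set, truth_solve, error_bound, eps_tol, N_max):
    """Greedy RB sampling: repeatedly retain the candidate snapshot that the
    a posteriori bound predicts to be least well approximated.

    truth_solve(mu)        -> FE truth snapshot for parameter mu (expensive)
    error_bound(mu, basis) -> inexpensive surrogate for the RB error at mu
    """
    mu = train_set[0]                        # arbitrary starting parameter
    S, basis = [], []
    while len(basis) < N_max:
        S.append(mu)
        basis.append(truth_solve(mu))        # retained snapshot (orthonormalize in practice)
        bounds = [error_bound(m, basis) for m in train_set]
        worst = int(np.argmax(bounds))       # least-well-approximated candidate
        if bounds[worst] <= eps_tol:
            break                            # prescribed tolerance reached
        mu = train_set[worst]
    return S, basis
```

Note that only the (cheap) error bound is evaluated over the whole training sample; the truth solver is invoked once per retained snapshot, which is what makes large {n}_{\mathrm{train}} affordable.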
5.2 Parabolic case
The temporal evolution case is quite different: the greedy approach [31] can encounter difficulties best treated by incorporating elements of the POD selection process [22]. Our sampling method thus combines the POD in {t}^{k}, to capture the causality associated with the evolution equation, with the greedy procedure in μ [1, 18, 31], to treat efficiently the higher dimensions and more extensive ranges of parameter variation.
To begin, we summarize the basic POD optimality property: given J elements {w}_{j}\in {X}^{\mathcal{N}}, 1\le j\le J, POD(\{{w}_{1},\dots ,{w}_{J}\},M) returns M (<J) {(\cdot ,\cdot )}_{X}-orthonormal functions \{{\chi}_{m},1\le m\le M\} such that the space {\mathcal{P}}_{M}=\mathrm{span}\{{\chi}_{m},1\le m\le M\} is optimal, that is,
where {Y}_{M} denotes an Mdimensional linear space.
To initiate the POD-greedy sampling procedure we must specify {\Xi}_{\mathrm{train}}, an initial sample {S}^{\ast}=\{{\mathit{\mu}}_{0}^{\ast}\}, and a tolerance {\epsilon}_{\mathrm{tol}}^{\ast}. The algorithm depends on two suitable integers {M}_{1} and {M}_{2} (the criterion behind their setting is addressed later) and reads as follows:
As we shall describe in detail in Section 6, {\Delta}_{N}({t}^{k};\mathit{\mu}) provides a sharp, inexpensive a posteriori error bound for \parallel {u}^{\mathcal{N}}({t}^{k};\mathit{\mu})-{u}_{N}^{\mathcal{N}}({t}^{k};\mathit{\mu}){\parallel}_{X}. In practice, we exit the POD-greedy sampling procedure at N={N}_{\mathrm{max}}\le {N}_{\mathrm{max},0} for which a prescribed error tolerance is satisfied: to wit, we define
and terminate when {\epsilon}_{N,\mathrm{max}}^{\ast}\le {\epsilon}_{\mathrm{tol}}^{\ast}. Note that, by virtue of the final re-definition, the POD-greedy algorithm generates hierarchical spaces {X}_{N}, 1\le N\le {N}_{\mathrm{max}}, which is computationally very advantageous.
We choose {M}_{1} to satisfy an internal POD error criterion based on the usual sum of eigenvalues and {\epsilon}_{\mathrm{tol}}^{\ast}; we choose {M}_{2}\le {M}_{1} to minimize duplication in the RB space. It is important to note that the POD-greedy method readily accommodates a repeated {\mathit{\mu}}^{\ast} in successive greedy cycles: new information will always be available and old information rejected; in contrast, a pure greedy approach in both t and μ [31], though often generating good spaces, can 'stall'. Furthermore, since the POD is conducted in only one (time) dimension, with the greedy addressing the remaining (parameter) dimensions, the procedure remains computationally feasible even for large parameter domains and very extensive parameter train samples (and in particular in higher parameter dimensions).
Concerning the computational aspects, the crucial point is that the operation count for the POD-greedy algorithm is additive and not multiplicative in {n}_{\mathrm{train}} and \mathcal{N}; in contrast, in a pure POD approach, we would need to evaluate the FE 'truth' solution at all {n}_{\mathrm{train}} candidate parameter values. As a result, in the POD-greedy approach we can take {n}_{\mathrm{train}} relatively large: we can thus anticipate RB spaces and approximations that provide rapid convergence uniformly over the parameter domain.
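A minimal sketch of the POD (via the 'method of snapshots') and of one variant of the POD-greedy loop follows; the names are illustrative, the error bound is again abstracted as a hypothetical callable, and the requested POD sizes are assumed not to exceed the numerical rank of the snapshot sets:

```python
import numpy as np

def pod(W, M, X=None):
    """POD by the method of snapshots: returns M X-orthonormal modes spanning
    the optimal M-dimensional subspace of the columns of W (assumes M <= rank W)."""
    if X is None:
        X = np.eye(W.shape[0])
    C = W.T @ X @ W                           # snapshot correlation matrix
    lam, V = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1][:M]           # M largest eigenvalues
    return W @ V[:, idx] / np.sqrt(lam[idx])  # X-orthonormal modes chi_1..chi_M

def pod_greedy(train_set, truth_trajectory, error_bound, M1, M2, eps_tol, N_max):
    """One variant of the POD-greedy loop: POD in time, greedy in parameter.

    truth_trajectory(mu) -> matrix with columns u^{N k}(mu), k = 0..K (expensive)
    error_bound(mu, Z)   -> inexpensive surrogate, e.g. max_k Delta_N(t^k; mu)
    """
    mu = train_set[0]
    Z = np.zeros((truth_trajectory(mu).shape[0], 0))
    while Z.shape[1] < N_max:
        W = truth_trajectory(mu)
        new = pod(W, M1)[:, :M2]                       # compress trajectory, keep M2 modes
        Z = pod(np.hstack([Z, new]), Z.shape[1] + M2)  # re-POD: hierarchical basis
        bounds = [error_bound(m, Z) for m in train_set]
        worst = int(np.argmax(bounds))
        if bounds[worst] <= eps_tol:
            break
        mu = train_set[worst]
    return Z
```

The POD is applied only along the (one) time dimension of each selected trajectory, while the greedy search sweeps the parameter sample, which is precisely what keeps the cost additive in {n}_{\mathrm{train}} and \mathcal{N}.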
6 A posteriori error estimation
Effective a posteriori error bounds for field variables and outputs of interest are crucial for both the efficiency and the reliability of RB approximations. As regards efficiency, a posteriori error estimation permits us to (inexpensively) control the error, as well as to minimize the computational effort by controlling the dimension of the RB space. Moreover, in the greedy algorithm the use of error bounds (as surrogates for the actual error) allows significantly larger training samples {\Xi}_{\mathrm{train}}\subset \mathcal{D} and a better exploration of the parameter space at greatly reduced Offline computational cost. Concerning reliability, a posteriori error bounds allow a confident exploitation of the rapid predictive power of the RB approximation. By means of an efficient a posteriori error bound, we can quantify the error for each new parameter value μ in the Online stage and thus make sure that feasibility (and safety/failure) conditions are satisfied.
The motivations for error estimation in turn place requirements on the error bounds. First, the error bounds must be rigorous, valid for all N and for all parameter values in the parameter domain \mathcal{D}: nonrigorous error 'indicators' may suffice for adaptivity during basis assembly, but not for reliability. Second, the bounds must be reasonably sharp: an overly conservative error bound can yield inefficient approximations (N too large) or even dangerously suboptimal engineering results (unnecessary safety margins). And third, the bounds must be very efficient: the Online operation count and storage to compute the RB error bounds, the marginal average cost, must be independent of \mathcal{N} (and commensurate with the cost associated with the RB output prediction).
6.1 Elliptic case
Let us now consider a posteriori error bounds for the field variable {u}_{N}^{\mathcal{N}}(\mathit{\mu}) and the output {s}_{N}^{\mathcal{N}}(\mathit{\mu}) in the elliptic case (22)-(23). We introduce two basic ingredients of our error bounds: the error residual relationship and coercivity lower bounds.
6.1.1 Basic ingredients
The central equation in a posteriori theory is the error residual relationship. In particular, it follows from the problem statements for {u}^{\mathcal{N}}(\mathit{\mu}), (19), and {u}_{N}^{\mathcal{N}}(\mathit{\mu}), (23), that the error e(\mathit{\mu}):={u}^{\mathcal{N}}(\mathit{\mu})-{u}_{N}^{\mathcal{N}}(\mathit{\mu})\in {X}^{\mathcal{N}} satisfies
Here r(v;\mathit{\mu})\in {({X}^{\mathcal{N}})}^{\prime} (the dual space to {X}^{\mathcal{N}}) is the residual,
Indeed, (39) directly follows from the definition (40), f(v;\mathit{\mu})=a({u}^{\mathcal{N}}(\mathit{\mu}),v;\mathit{\mu}), \forall v\in {X}^{\mathcal{N}}, bilinearity of a, and the definition of e(\mathit{\mu}). It shall prove convenient to introduce the Riesz representation of r(v;\mathit{\mu}): \stackrel{\u02c6}{e}(\mathit{\mu})\in {X}^{\mathcal{N}} satisfies
This allows us to write the error residual equation (39) as
and it follows that the dual norm of the residual can be evaluated through the Riesz representation:
this shall prove to be important for the Offline-Online stratagem developed in Section 6.1.3 below.
As a second ingredient, we need a positive, parametric lower bound function {\alpha}_{\mathrm{LB}}^{\mathcal{N}}(\mathit{\mu}) for {\alpha}^{\mathcal{N}}(\mathit{\mu}), the FE coercivity constant^{6} defined as
hence, we introduce
where the Online computational time to evaluate \mathit{\mu}\to {\alpha}_{\mathrm{LB}}^{\mathcal{N}}(\mathit{\mu}) must be independent of \mathcal{N} in order to fulfill the efficiency requirements on the error bounds articulated above. An efficient algorithm for the computation of {\alpha}_{\mathrm{LB}}^{\mathcal{N}}(\mathit{\mu}) is given by the so-called Successive Constraint Method (SCM), widely analyzed in [1, 33, 34]. Moreover, the SCM algorithm, which is based on the successive solution of suitable linear optimization problems, has been developed for the special requirements of the RB method; it thus features an efficient Offline-Online strategy, making the Online computational complexity independent of \mathcal{N}, a fundamental requisite.
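We note that in the special, 'parametrically coercive' case (all coefficients {\Theta}_{a}^{q}(\mathit{\mu})>0 and all forms {a}^{q}(\cdot ,\cdot ) positive semidefinite), a lower bound simpler than SCM is available in the literature, the so-called 'min-Θ' approach; a minimal sketch under that assumption:

```python
def min_theta_lower_bound(theta, theta_bar, alpha_bar):
    """'min-Theta' coercivity lower bound for parametrically coercive forms.

    Valid when every Theta^q(mu) > 0 and every a^q(., .) is positive
    semidefinite; then

        alpha(mu) >= min_q [Theta^q(mu) / Theta^q(mu_bar)] * alpha(mu_bar),

    where alpha(mu_bar) is the (exactly computed) coercivity constant at a
    single reference parameter mu_bar. Online cost O(Q), independent of the
    FE dimension.
    """
    return alpha_bar * min(tq / tbq for tq, tbq in zip(theta, theta_bar))
```

For example, with A(μ) = θ_1 I + θ_2 diag(1, 2), reference θ̄ = (1, 1) and α(μ̄) = 2, the bound at θ = (2, 3) is 4, below the true coercivity constant 5.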
6.1.2 Error bounds
We define error estimators for the solution in the energy norm and for the output as
and
respectively. We next introduce the effectivities associated with these error estimators as
and
respectively. Clearly, the effectivities are a measure of the quality of the proposed estimator: for rigor, we shall insist upon effectivities ≥1; for sharpness, we desire effectivities as close to unity as possible. We can prove^{7}[1] that for any N=1,\dots ,{N}_{\mathrm{max}}, the effectivities satisfy
the continuity constant being defined in (4). It is important to observe that the effectivity upper bounds, (48) and (49), are independent of N, and hence stable with respect to RB refinement.
6.1.3 Offline-Online computation of \parallel \stackrel{\u02c6}{e}(\mathit{\mu}){\parallel}_{X}
The error bounds of the previous section are of no utility without an accompanying Offline-Online computational approach.
The computationally crucial component of all the error bounds of the previous section is \parallel \stackrel{\u02c6}{e}(\mathit{\mu}){\parallel}_{X}, the dual norm of the residual. To develop an Offline-Online procedure we first expand the residual (40) according to (26) and (6):
If we insert (50) in (41) and apply linear superposition, we obtain
or
where {(\mathcal{C},v)}_{X}=f(v), \forall v\in {X}^{\mathcal{N}}, that is, \mathcal{C} is the Riesz representation of f, and {({\mathcal{L}}_{n}^{q},v)}_{X}={a}^{q}({\zeta}_{n}^{\mathcal{N}},v), \forall v\in {X}^{\mathcal{N}}, 1\le n\le N, 1\le q\le Q, that is, {\mathcal{L}}_{n}^{q} is the Riesz representation of {A}_{n}^{q}\in {({X}^{\mathcal{N}})}^{\prime} defined as {A}_{n}^{q}(v)={a}^{q}({\zeta}_{n}^{\mathcal{N}},v), \forall v\in {X}^{\mathcal{N}}. We denote the \mathcal{C}, {\mathcal{L}}_{n}^{q}, 1\le n\le N, 1\le q\le Q, as FE ‘pseudo’solutions, that is, solutions of ‘associated’ FE Poisson problems. We thus obtain
from which we can directly calculate the requisite dual norm of the residual through (43).
The Offline-Online decomposition is now clear. In the Offline stage we form the μ-independent quantities. In particular, we compute the FE 'pseudo'-solutions \mathcal{C}, {\mathcal{L}}_{n}^{q}, 1\le n\le {N}_{\mathrm{max}}, 1\le q\le Q, and store {(\mathcal{C},\mathcal{C})}_{X}, {(\mathcal{C},{\mathcal{L}}_{n}^{q})}_{X}, {({\mathcal{L}}_{n}^{q},{\mathcal{L}}_{{n}^{\prime}}^{{q}^{\prime}})}_{X}, 1\le n,{n}^{\prime}\le {N}_{\mathrm{max}}, 1\le q,{q}^{\prime}\le Q. The Offline operation count depends on {N}_{\mathrm{max}}, Q, and \mathcal{N}.
In the Online stage, given any 'new' value of μ, together with {\Theta}^{q}(\mathit{\mu}), 1\le q\le Q, and {u}_{Nn}^{\mathcal{N}}(\mathit{\mu}), 1\le n\le N, we simply retrieve the stored quantities {(\mathcal{C},\mathcal{C})}_{X}, {(\mathcal{C},{\mathcal{L}}_{n}^{q})}_{X}, {({\mathcal{L}}_{n}^{q},{\mathcal{L}}_{{n}^{\prime}}^{{q}^{\prime}})}_{X}, 1\le n,{n}^{\prime}\le N, 1\le q,{q}^{\prime}\le Q, and then evaluate the sum (51). The Online operation count, and hence also the marginal cost, is O({Q}^{2}{N}^{2}), and independent of \mathcal{N}.^{8}
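The Online evaluation of the sum (51) can be sketched as follows; the array layout is our choice, and the stored quantities correspond to the inner products of the Riesz 'pseudo'-solutions computed Offline:

```python
import numpy as np

def residual_dual_norm_sq(theta, u_N, CC, CL, LL):
    """Online evaluation of ||e_hat(mu)||_X^2, the squared dual norm of the
    residual, from stored inner products; O(Q^2 N^2) operations.

    With e_hat = C - sum_{q,n} Theta^q(mu) u_Nn(mu) L_n^q, we expand

        ||e_hat||_X^2 = (C, C)_X - 2 sum t_{qn} (C, L_n^q)_X
                        + sum t_{qn} t_{q'n'} (L_n^q, L_{n'}^{q'})_X.

    CC : scalar (C, C)_X
    CL : (Q, N) array, CL[q, n] = (C, L_n^q)_X
    LL : (Q, N, Q, N) array, LL[q, n, q', n'] = (L_n^q, L_{n'}^{q'})_X
    """
    t = np.asarray(theta)[:, None] * np.asarray(u_N)[None, :]  # t[q, n]
    return CC - 2.0 * np.sum(t * CL) + np.einsum('qn,qnrm,rm->', t, LL, t)
```

The test below checks the expansion against a direct computation of \parallel \stackrel{\u02c6}{e}(\mathit{\mu}){\parallel}_{X}^{2} with explicit vectors standing for \mathcal{C} and the {\mathcal{L}}_{n}^{q}.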
6.2 Parabolic case
In this section we deal with a posteriori error estimation in the reduced basis context for affinely parametrized parabolic coercive PDEs. As in the elliptic case, to construct the a posteriori error bounds we need two ingredients. The first ingredient is the dual norm of the residual
where {r}_{N}(v;{t}^{k};\mathit{\mu}) is the residual associated with the RB approximation (33)
The second ingredient is a lower bound for the coercivity constant {\alpha}^{\mathcal{N}}(\mathit{\mu}), 0<{\alpha}_{\mathrm{LB}}^{\mathcal{N}}(\mathit{\mu})\le {\alpha}^{\mathcal{N}}(\mathit{\mu}), \forall \mathit{\mu}\in \mathcal{D}.
We can now define our error bounds in terms of these two ingredients; in fact, it can readily be proven [22, 31] that for all \mathit{\mu}\in \mathcal{D} and all N
where {\Delta}_{N}^{k}(\mathit{\mu})\equiv {\Delta}_{N}({t}^{k};\mathit{\mu}) and {\Delta}_{N}^{sk}(\mathit{\mu})\equiv {\Delta}_{N}^{s}({t}^{k};\mathit{\mu}) are given by
(We assume for simplicity that {u}^{\mathcal{N}0}\in {X}_{N}; otherwise there will be an additional contribution to {\Delta}_{N}^{k}(\mathit{\mu}).)
Even if based on the same components as in the elliptic case, the Construction-Evaluation procedure for the error bound is now a bit more involved. The necessary computations for the Offline and Online stages, by construction rather similar to the elliptic case, are discussed in detail, for example, in [24]. We consider here only the decomposition for the dual norm of the residual [31]. We first invoke duality, our RB expansion, the affine parametric dependence of a and m, and linear superposition to express
for 1\le k\le K, where {\varphi}_{Nn}^{k}(\mathit{\mu}):={u}_{Nn}^{k}(\mathit{\mu})-{u}_{Nn}^{k-1}(\mathit{\mu}) and
{\mathcal{Q}}_{N}^{ff}={({z}^{f},{z}^{f})}_{X},
{\mathcal{Q}}_{Nnq}^{fa}=2{({z}_{nq}^{a},{z}^{f})}_{X}, 1\le q\le {Q}_{a}, 1\le n\le N,
{\mathcal{Q}}_{Nnq}^{fm}=2{({z}_{nq}^{m},{z}^{f})}_{X}, 1\le q\le {Q}_{m}, 1\le n\le N,
{\mathcal{Q}}_{Nn{n}^{\prime}q{q}^{\prime}}^{aa}={({z}_{nq}^{a},{z}_{{n}^{\prime}{q}^{\prime}}^{a})}_{X}, 1\le q,{q}^{\prime}\le {Q}_{a}, 1\le n,{n}^{\prime}\le N,
{\mathcal{Q}}_{Nn{n}^{\prime}q{q}^{\prime}}^{am}=2{({z}_{nq}^{a},{z}_{{n}^{\prime}{q}^{\prime}}^{m})}_{X}, 1\le q\le {Q}_{a}, 1\le {q}^{\prime}\le {Q}_{m}, 1\le n,{n}^{\prime}\le N,
{\mathcal{Q}}_{Nn{n}^{\prime}q{q}^{\prime}}^{mm}={({z}_{nq}^{m},{z}_{{n}^{\prime}{q}^{\prime}}^{m})}_{X}, 1\le q,{q}^{\prime}\le {Q}_{m}, 1\le n,{n}^{\prime}\le N.
Here the {z}^{f}, {z}_{nq}^{a}, {z}_{n{q}^{\prime}}^{m} are solutions to time-independent and μ-independent 'Poisson' problems: {({z}^{f},v)}_{X}=f(v), \forall v\in {X}^{\mathcal{N}}; {({z}_{nq}^{a},v)}_{X}={a}^{q}({\xi}_{n},v), \forall v\in {X}^{\mathcal{N}}, 1\le n\le N, 1\le q\le {Q}_{a}; and {({z}_{n{q}^{\prime}}^{m},v)}_{X}={m}^{{q}^{\prime}}({\xi}_{n},v), \forall v\in {X}^{\mathcal{N}}, 1\le n\le N, 1\le {q}^{\prime}\le {Q}_{m}.
The Construction-Evaluation decomposition is now clear. In the μ-independent Construction stage we find {z}^{f}, {z}_{nq}^{a}, {z}_{nq}^{m}, and the inner products {\mathcal{Q}}_{{N}_{\mathrm{max}}}^{ff}, {\mathcal{Q}}_{{N}_{\mathrm{max}}}^{fa}, {\mathcal{Q}}_{{N}_{\mathrm{max}}}^{fm}, {\mathcal{Q}}_{{N}_{\mathrm{max}}}^{aa}, {\mathcal{Q}}_{{N}_{\mathrm{max}}}^{mm}, and {\mathcal{Q}}_{{N}_{\mathrm{max}}}^{am}, at (considerable) computational cost depending on {Q}_{a}, {Q}_{m}, {N}_{\mathrm{max}}, and \mathcal{N}. In the μ-dependent Evaluation stage, performed many times, we simply perform the sum (58) from the stored inner products in O({(1+{Q}_{m}N+{Q}_{a}N)}^{2}) operations per time step, and hence O({(1+{Q}_{m}N+{Q}_{a}N)}^{2}K) operations in total. The crucial point, again, is that the cost and storage in the Evaluation phase, the marginal cost for each new value of μ, is independent of \mathcal{N}: thus we can evaluate not only our output prediction but also our rigorous output error bound very rapidly in the parametrically interesting contexts of real-time or many-query investigation.
7 Extensions to more general problems
We now briefly discuss some extensions of the reduced basis methodology presented in Section 4 to address more general classes of problems, including industrial problems of a certain degree of complexity.
7.1 Noncompliant problems
For the sake of simplicity, we addressed in Section 4 the RB approximation of affinely parametrized coercive problems in the compliant case. We now consider the elliptic case and the more general noncompliant problem: given \mathit{\mu}\in \mathcal{D}, find
where u(\mathit{\mu})\in X satisfies
We assume that a is coercive and continuous (and affine, (6)) but not necessarily symmetric. We further assume that both ℓ and f are bounded functionals but we no longer require \ell =f.^{9} Following the methodology (and the notation) addressed in Section 4, we can readily develop an a posteriori error bound for {s}_{N}(\mathit{\mu}): by standard arguments [1, 2]
where {\Delta}_{N}(\mathit{\mu}) is given by (46). We denote the method just illustrated as 'primal-only'. Although for many outputs primal-only is perhaps the best approach (each additional output, and associated error bound, is a simple 'add-on'), this approach has two deficiencies:

(i)
we lose the 'quadratic convergence' effect (25) for outputs (unless \ell =f and a is symmetric);

(ii)
the effectivities {\Delta}_{N}^{s}(\mathit{\mu})/|s(\mathit{\mu})-{s}_{N}(\mathit{\mu})| may be unbounded: if \ell =f then we know, from (25), that s(\mathit{\mu})-{s}_{N}(\mathit{\mu})\sim \parallel \stackrel{\u02c6}{e}(\mathit{\mu}){\parallel}_{X}^{2} and hence {\Delta}_{N}^{s}(\mathit{\mu})/|s(\mathit{\mu})-{s}_{N}(\mathit{\mu})|\sim 1/\parallel \stackrel{\u02c6}{e}(\mathit{\mu}){\parallel}_{X}\to \infty as N\to \infty, that is, the effectivity of the output error bound (47) tends to infinity as (N\to \infty and) {u}_{{N}^{\mathit{pr}}}^{\mathcal{N}}(\mathit{\mu})\to {u}^{\mathcal{N}}(\mathit{\mu}). We may expect similar behavior for any ℓ 'close' to f: the failing is that (47) does not reflect the contribution of the test space to the convergence of the output.
The introduction of an RB primal-dual approximation will take care of this issue and ensure a stable limit as N\to \infty. We thus introduce the dual problem associated with ℓ, which reads as follows: find \psi (\mathit{\mu})\in X such that
ψ is denoted the ‘adjoint’ or ‘dual’ field. Let us define the RB spaces for the primal and the dual problem, respectively:
for 1\le {N}_{\mathit{pr}}\le {N}_{\mathit{pr},\mathrm{max}} and 1\le {N}_{\mathit{du}}\le {N}_{\mathit{du},\mathrm{max}}. For our purposes a single FE space suffices for both the primal and the dual problem, even if in actual practice the FE primal and dual spaces may be different. The resulting RB approximations {u}_{{N}_{\mathit{pr}}}^{\mathcal{N}}\in {X}_{{N}_{\mathit{pr}}}^{\mathcal{N},\mathit{pr}} and {\Psi}_{{N}_{\mathit{du}}}\in {X}_{{N}_{\mathit{du}}}^{\mathit{du}} solve
then, the RB output can be evaluated as [35]
where
are the primal and the dual residual. In particular, in the noncompliant case, the output error bound takes the form
We thus recover the 'quadratic' output effect; note that the Offline-Online procedure is very similar to the 'primal-only' case, but now everything must be done for both the primal and the dual problem; moreover, we need to evaluate both a primal and a dual residual for the a posteriori error bounds, at a reasonable computational cost and by reusing the same computational framework built for the 'primal-only' approach. Error bounds related to the gradient of computed quantities, such as velocity and pressure in potential flow problems, have been addressed in [36]. For parabolic problems, the treatment of noncompliant outputs follows the same strategy; we only remark that the dual problem in this case evolves backward in time [31].
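The primal-dual output evaluation can be sketched as follows. For clarity, this illustrative sketch (names are ours) works with assembled FE quantities rather than with the Offline-Online decomposition, using the convention a(w,v)=v^T A w and representing f and ℓ as vectors:

```python
import numpy as np

def primal_dual_output(A, f, l, Z_pr, Z_du):
    """Noncompliant RB output with dual correction, s_N = l(u_N) - r_pr(psi_N).

    A : assembled FE matrix A(mu) (possibly nonsymmetric); f, l : FE vectors.
    Z_pr, Z_du : matrices whose columns span the primal and dual RB spaces.
    """
    # primal RB problem: a(u_N, v) = f(v) for all v in the primal RB space
    u_N = Z_pr @ np.linalg.solve(Z_pr.T @ A @ Z_pr, Z_pr.T @ f)
    # dual RB problem: a(v, psi_N) = -l(v) for all v in the dual RB space
    psi_N = Z_du @ np.linalg.solve(Z_du.T @ A.T @ Z_du, -Z_du.T @ l)
    r_pr = f - A @ u_N                    # primal residual, as a vector in X'
    return l @ u_N - r_pr @ psi_N         # output with dual correction
```

Note that when the primal RB space contains the truth solution the primal residual vanishes, so the corrected output reduces to the exact one; in general the correction term restores the 'quadratic' error behavior.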
7.2 Nonaffine and noncoercive problems
In this section we introduce wider classes of problems that can be treated with the reduced basis method, nonaffine problems and noncoercive problems, in order to provide a general framework for the methodology. The reader interested in numerical applications may proceed directly to Section 8 without affecting the understanding of the subsequent sections.
7.2.1 Nonaffine problems
The assumption of affine parametric dependence, expressed by conditions (6) and (10), is of fundamental importance in order to exploit the Offline-Online stratagem and thus minimize the marginal cost associated with each input-output evaluation. However, nonaffine problems, that is, problems in which conditions (6) and (10) no longer hold, can also be treated efficiently in the reduced basis framework. In this case, we rely on the Empirical Interpolation Method (EIM) [26, 37, 38], an interpolation method for parametric functions based on adaptively chosen interpolation points and global shape functions.
In practice, if the problem is not affinely parametrized (for example, when the geometric transformation (12) has a more general expression than in (13), or the physical coefficients appearing in the tensor {\mathbf{K}}_{o,l} are nonaffine functions of x and μ), the parametrized tensors in (16) and (17) depend on both the parameter μ and the spatial coordinate x. In this case, the operators cannot be expressed as in (18), and ultimately as in (6) and (10). Hence, we need an additional preprocessing step, before the FE assembling stage, in order to recover the affinity assumption. According to EIM, each component {K}_{ij}^{l}(\mathbf{x},\mathit{\mu}) is approximated by an affine expression given by
the same approximation is set up for the components of the {M}_{ij}^{l}(\mathbf{x},\mathit{\mu}) tensor in the parabolic case:
All the coefficients {\beta}_{k}^{ijl}’s, {\gamma}_{k}^{ijl}’s, {\eta}_{k}^{ijl}’s and {\varphi}_{k}^{ijl}’s are efficiently computable scalar functions, and the error terms are guaranteed to stay below a prescribed tolerance,
In this way, we can identify the μ-dependent coefficients in the expansions (62), (63) with the coefficients {\Theta}_{a}^{q}(\mathit{\mu}) (resp. {\Theta}_{m}^{q}(\mathit{\mu})) in (6) and (10), that is, {\Theta}_{a}^{q}(\mathit{\mu})={\beta}_{k}^{ijl}(\mathit{\mu}), {\Theta}_{m}^{q}(\mathit{\mu})={\gamma}_{k}^{ijl}(\mathit{\mu}), where q is a condensed index for (i,j,k,l), while the μ-independent functions are treated as prefactors in the integrals which give the μ-independent bilinear forms {a}^{q}(w,v) (resp. {m}^{q}(w,v)).
We refer the reader to [26] and [39] for details on EIM procedures for nonaffine problems. The nonaffine treatment is particularly important, since many problems involving more complex geometrical parametrizations and/or more complex physical features (that is, nonhomogeneous or nonisotropic material properties) exhibit nonaffine parametric dependence.
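For illustration, the greedy selection of interpolation points at the heart of EIM can be sketched on a matrix of function snapshots. This is a simplified variant working directly on grid values (names are ours); the full method additionally provides the μ-dependent coefficient functions and a certified interpolation error:

```python
import numpy as np

def eim(G, M):
    """Empirical interpolation on snapshot columns (a minimal sketch).

    G : (n_x, n_mu) matrix, columns are samples g(. ; mu_j) on a spatial grid.
    Returns M interpolation point indices and a basis Q such that each column
    is approximated by Q @ coeffs, with coeffs matching the values at the points.
    """
    # first basis function: snapshot with largest sup-norm, normalized at its peak
    j = int(np.argmax(np.max(np.abs(G), axis=0)))
    i = int(np.argmax(np.abs(G[:, j])))
    pts, Q = [i], (G[:, j] / G[i, j])[:, None]
    for _ in range(1, M):
        # interpolate every snapshot at the current points, pick the worst residual
        coeffs = np.linalg.solve(Q[pts, :], G[pts, :])
        R = G - Q @ coeffs
        j = int(np.argmax(np.max(np.abs(R), axis=0)))
        i = int(np.argmax(np.abs(R[:, j])))        # new 'magic point'
        Q = np.hstack([Q, (R[:, j] / R[i, j])[:, None]])
        pts.append(i)
    return pts, Q
```

By construction, the matrix Q restricted to the selected points is unit lower triangular, so the interpolation coefficients are always well defined and the interpolant matches the snapshots exactly at the magic points.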
7.2.2 Noncoercive problems
The reduced basis framework can also be applied effectively to problems involving operators which do not satisfy the (quite strict) coercivity assumption [18]; this is the case, for example, of the (Navier-)Stokes problem, where stability is in fact fulfilled in the more general sense of the inf-sup condition [29]. For the sake of simplicity, we restrict our considerations to the elliptic (scalar) case (2)-(3). We assume that the (parametrized) bilinear form a(\cdot ,\cdot ;\mathit{\mu}):{X}^{1}\times {X}^{2}\to \mathbb{R} is continuous and satisfies the more general inf-sup condition:
In this case the finite element (and thus the subsequent reduced basis) approximation is based on a more general Petrov-Galerkin approach. Given two FE spaces {X}^{1,\mathcal{N}}\subset {X}^{1} and {X}^{2,\mathcal{N}}\subset {X}^{2}, the FE approximation {u}^{\mathcal{N}}(\mathit{\mu})\in {X}^{1,\mathcal{N}} satisfies
and the output can be evaluated as^{10}
In order to have a stable FE approximation, we require that there exists {\beta}_{0}>0 such that
This condition can be reformulated in terms of the so-called inner supremizer operator {T}^{\mathit{\mu}}:{X}^{1,\mathcal{N}}\to {X}^{2,\mathcal{N}}, defined by
by the Cauchy-Schwarz inequality and taking v={T}^{\mathit{\mu}}w, we have that for any w\in {X}^{1,\mathcal{N}}
The reduced basis approximation inherits the same Petrov-Galerkin structure; in order to guarantee its stability, we need to introduce two different spaces (note that the second one is μ-dependent):
for 1\le N\le {N}_{\mathrm{max}}; then {u}_{N}^{\mathcal{N}}(\mathit{\mu})\in {X}_{N}^{1} satisfies
and
If we define
we obtain
which is the analogue of (24) for non-coercive problems. In this case we can show that {\beta}_{N}(\mathit{\mu})\ge {\beta}_{\mathcal{N}}(\mathit{\mu}) for all \mathit{\mu}\in \mathcal{D}; this property, which yields the stability of the RB approximation, is not automatically satisfied by a (simple) Galerkin formulation; hence, we need to enforce it through the Petrov-Galerkin framework. Observe that approximation is provided by {X}_{N}^{1} and stability (through {\beta}_{N}) by {X}_{N}^{2,\mathit{\mu}}.
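In algebraic terms, the supremizer and the resulting Petrov-Galerkin reduced system can be sketched as follows; this is a schematic NumPy illustration under the assumption that `A_mu` is the FE matrix of a(·,·;μ) and `M2` the inner-product matrix of {X}^{2,\mathcal{N}} (all names are illustrative, not from the paper).

```python
import numpy as np

def supremizer(A_mu, M2):
    """Inner supremizer in matrix form: (T w, v)_X2 = a(w, v; mu) for all v,
    i.e. M2 T = A_mu, so T = M2^{-1} A_mu (one linear solve per column)."""
    return np.linalg.solve(M2, A_mu)

def petrov_galerkin_rb(A_mu, f, Z1, M2):
    """Stable RB approximation for a non-coercive problem (sketch).

    Z1 : (Nfe, N) primal RB basis; the test space is the mu-dependent
    supremizer image Z2 = T^mu Z1, which yields beta_N >= beta_Nfe."""
    Z2 = supremizer(A_mu, M2) @ Z1      # mu-dependent test basis X_N^{2,mu}
    A_N = Z2.T @ A_mu @ Z1              # reduced N x N stiffness matrix
    f_N = Z2.T @ f
    uN = np.linalg.solve(A_N, f_N)
    return Z1 @ uN                      # RB solution in FE coordinates
```

Taking the test basis as the supremizer image of the trial basis is precisely what enforces the stability property {\beta}_{N}(\mathit{\mu})\ge {\beta}_{\mathcal{N}}(\mathit{\mu}) discussed above.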
The Offline-Online computational strategy, as well as the a posteriori error estimation, is based on the same arguments described in Section 6 for the coercive case; we remark that the inner supremizer operator, too, can be written in affine form under the affinity assumption (6) on a(\cdot ,\cdot ;\mathit{\mu}). In particular, from (66), we can easily prove that
where {\beta}_{\mathcal{N}}^{LB}(\mathit{\mu}) is a lower bound of the inf-sup constant (65) and can be computed by means of the same SCM procedure used for the lower bounds of coercivity constants [34, 40].
An interesting class of non-coercive problems is given by Stokes problems, where approximation stability is guaranteed by the fulfillment of an equivalent inf-sup stability condition on the pressure term, with properly enriched RB approximation spaces [41, 42]. Error bounds can be developed in the general non-coercive framework [40] or within a penalty setting [43].
8 Working examples
Reduced basis methods have already been applied, and may be applied, to many problems of industrial interest: material sciences and linear elasticity [17, 44–46], heat and mass transfer [47–50], acoustics [51], potential flows [36], microfluid dynamics [40], electromagnetism [52]; for examples of implementation of some worked problems in the mentioned fields, see [53, 54] for a versatile setting.
In many of these problems there are physical or engineering parameters characterizing the problem, but also geometrical parameters within a Cartesian geometrical setting; this configuration is quite typical of industrial devices, plants and related constructions and products. More complex geometrical parametrizations will be briefly considered in Section 9, involving, for example, biomedical devices and/or aerodynamic shapes.
We discuss in this section^{11} three working examples of industrial interest, dealing with different heat or mass transfer problems. The first example deals with forced steady heat conduction/convection; the second application deals with a transient heat treatment, while the third one is an example of a (simple) coupled problem, dealing with the transient evolution of the concentration field near the surface of a body immersed into a fluid flowing across a channel. All numerical details concerning the construction of RB spaces and computational costs are reported in Table 1.
8.1 A ‘Couette-Graetz’ conduction-convection problem
This problem deals with forced steady heat convection combined with heat conduction in a straight duct, whose walls can be kept at a fixed temperature, insulated, or characterized by heat exchange. The flow has an imposed temperature at the inlet and a known convection field (a Couette flow, that is, a given linear velocity profile [55]). From the engineering point of view, this example describes a class of heat transfer problems in fluidic devices with a versatile configuration. In particular, the Péclet number, measuring the strength of the axial convective transport (and thus modeling the physics of the problem), and the length of the non-insulated portion of the duct are only two of the possible parameters to be varied in order to evaluate average temperatures. Also discontinuities in Neumann boundary conditions (different heat fluxes) and thermal boundary layers are interesting phenomena to be studied.
We consider the physical domain {\Omega}_{o}(\mathit{\mu}) shown in Figure 3; all lengths are nondimensionalized with respect to a unit length \tilde{h} (the dimensional channel width); moreover, let us denote by \tilde{k} the dimensional (thermal) conductivity coefficient for the air flowing in the duct, by \tilde{\rho} its density and by {\tilde{c}}_{p} the specific heat capacity at constant pressure. We introduce the (thermal) diffusion coefficient \tilde{D}=\tilde{k}/\tilde{\rho}{\tilde{c}}_{p}, as well as the Péclet number, given by the ratio \mathrm{Pe}=\tilde{U}\tilde{h}/\tilde{D}, where \tilde{U} is the reference dimensional velocity for the convective field. We consider here P=2 parameters: {\mu}_{1} is the length of the non-insulated bottom portion of the duct (unit heat flux), while {\mu}_{2} represents the Péclet number; the parameter domain is given by \mathcal{D}=[1,10]\times [0.1,100].
The solution u(\mathit{\mu}), defined as the nondimensional temperature u(\mathit{\mu})=(\tau -{\tau}_{in})/{\tau}_{in} (where τ is the dimensional temperature and {\tau}_{in} is the dimensional temperature of the air at the inflow and in the first portion of the duct), satisfies the following steady advection-diffusion equation:
with summation (i,j=1,2) over repeated indices; hence, we impose the temperature at the top walls and in the ‘inflow’ zone of the duct (Γ_{6}), while we consider an insulated wall (zero heat flux on Γ_{1} and Γ_{3}) or heat exchange at a fixed rate (that is, unity on Γ_{2}) on the other boundaries. We note that the forced convection field is given by the linear velocity profile {x}_{2}\tilde{U} (Couette-type flow). The output of interest is the average temperature of the fluid on the non-insulated portion of the bottom wall of the duct, given by
This problem is then mapped onto the fixed reference domain Ω and discretized by piecewise linear finite elements; the dimension of the corresponding space is \mathcal{N}=5\text{,}433. Since we are in a non-compliant case, a further dual problem has to be solved in order to obtain better output evaluations and related error bounds; see Section 7.1. In particular, we show in Figure 4 the lower bound of the coercivity constant of the bilinear form associated with our problem.
We plot in Figure 5 the convergence of the greedy algorithm for the primal and the dual problem, respectively; with a fixed tolerance {\epsilon}_{\mathrm{tol}}^{\ast}={10}^{-2}, {N}_{\mathit{pr},\mathrm{max}}=21 and {N}_{\mathit{du},\mathrm{max}}=30 basis functions have been selected, respectively. In Figure 6 the selected parameter values {S}_{{N}_{\mathit{pr}}} for the primal and {S}_{{N}_{\mathit{du}}} for the dual problem, respectively, are shown; in each case {\Xi}_{\mathrm{train}} is a uniform random sample of size {n}_{\mathrm{train}}=1\text{,}000. Moreover, in Figure 7 some representative solutions (computed for N={N}_{\mathrm{max}}) for selected values of the parameters are reported.
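The greedy sampling whose convergence is shown in Figure 5 can be sketched in a few lines of NumPy. Purely for illustration, the error estimator {\Delta}_{N}(\mathit{\mu}) is played here by the true projection error on a toy affinely parametrized system; the function names and the toy problem are assumptions of this sketch.

```python
import numpy as np

def greedy_rb(solve_fe, error_est, train, tol, n_max):
    """Weak greedy sampling (sketch): at each step enrich the RB space with
    the snapshot at the parameter maximizing the (estimated) error.
    solve_fe(mu) -> FE solution vector; error_est(mu, Z) -> error indicator
    for the current orthonormal basis Z."""
    Z = np.zeros((len(solve_fe(train[0])), 0))
    selected = []
    for _ in range(n_max):
        errs = [error_est(mu, Z) for mu in train]
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break
        u = solve_fe(train[k])
        u = u - Z @ (Z.T @ u)                 # Gram-Schmidt orthogonalization
        Z = np.hstack([Z, (u / np.linalg.norm(u))[:, None]])
        selected.append(train[k])
    return Z, selected
```

In the actual method the true error is of course unavailable Online, and the inexpensive a posteriori bound {\Delta}_{N}(\mathit{\mu}) takes its place; the Gram-Schmidt step is the orthonormalization mentioned in Section 8.4.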
The thermal boundary layer looks very different in the four cases. In particular, higher variations of temperature, as well as large gradients along the lower wall, are remarkable for higher Péclet numbers, when forced convection dominates steady conduction; moreover, the standard behavior of the boundary layer width, usually given by O(1/\mathrm{Pe}), is captured correctly. In Figure 8 the RB evaluation (for N={N}_{\mathrm{max}}) of the output of interest is reported as a function of the parameters, as well as the related error bound. As we can see, for low values of {\mu}_{2} (Péclet number) the dependence of the output on {\mu}_{1} (geometrical aspect) is rather modest; for high values of {\mu}_{2}, instead, the output shows larger variations with respect to {\mu}_{1}. In the same way, for longer/shorter channels the dependence on the Péclet number is higher/lower.
8.2 A transient thermal treatment problem
This problem considers a transient thermal treatment on a sectional slice of a railroad rail. Heat treatment is a method used to alter the physical, and sometimes chemical, properties of a material, which involves the use of heating or chilling, normally to extreme temperatures, to achieve a desired result such as hardening or softening of a material. Heat treatment techniques include annealing, case hardening, precipitation strengthening, tempering, and quenching. Although the most common application is metallurgical, heat treatments are also used in the manufacturing of many other materials.
We consider here P=2 parameters: {\mu}_{2} is a geometrical parameter representing the thickness of the web connecting the top and the bottom of the railroad rail slice (see Figure 9), while {\mu}_{1} denotes the nondimensional Biot number, given by \mathrm{Bi}\equiv {\tilde{h}}_{c}\tilde{d}/\tilde{k}. We assume that the railroad rail slice has thermal conductivity \tilde{k}, and we characterize the heat transfer between the railroad section and the fluid surrounding the slice by a heat transfer coefficient {\tilde{h}}_{c}; moreover, \tilde{d} denotes the height of the slice of the railroad rail. The parameter domain is given by \mathcal{D}=[0.01,10]\times [0.02,0.2].
The (nondimensional) temperature distribution is denoted by u(\mathit{\mu}) (the dependence on time is omitted for the sake of simplicity) and is defined in terms of the dimensional temperature as u(\mathit{\mu})=(\tau -{\tau}_{\mathrm{init}})/({\tau}_{\mathrm{env}}-{\tau}_{\mathrm{init}}), where τ is the dimensional temperature, {\tau}_{\mathrm{init}} the initial dimensional temperature (at t=0) and {\tau}_{\mathrm{env}} the dimensional temperature of the fluid surrounding the railroad slice (at every time), which is also the (asymptotic) temperature at the end of the treatment.
The governing equation for u(\mathit{\mu},t) is the following time-dependent linear PDE: for t\in [0,T]
The inhomogeneous Robin conditions correspond to the heat exchange between the railroad rail slice section and the fluid used for the thermal treatment. Here the control input g(t) is a function of time t; the problem allows any square-integrable function for g(t). In practice, the PDE is replaced by a discrete-time (backward Euler [30]) approximation with time steps of size \Delta t=0.005. Note that the final time is T=0.75 and that the number of time steps is {n}_{t}=150; the spatial discretization is made by piecewise linear finite elements, whose corresponding space dimension is \mathcal{N}=16\text{,}737. Our output of interest is the average temperature over the whole railroad rail slice, given by
where h(t) is a function of time t; the problem allows any function (including a Dirac delta) for h(t).
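A schematic implementation of the discrete-time approximation and of an output of the form (67) may look as follows; this is a NumPy sketch with illustrative names, and in the actual RB procedure the matrices would be the reduced N × N counterparts, so each step costs O(N^2).

```python
import numpy as np

def backward_euler(M, A, f, g, dt, nt, u0):
    """Backward Euler for the semi-discrete problem M du/dt + A u = g(t) f.
    The left-hand side matrix is time-independent, so in practice it is
    factorized once and reused at every step."""
    lhs = M / dt + A
    u = u0.astype(float)
    traj = [u.copy()]
    for k in range(1, nt + 1):
        rhs = M @ u / dt + g(k * dt) * f
        u = np.linalg.solve(lhs, rhs)
        traj.append(u.copy())
    return np.asarray(traj)

def output(traj, ell, h, dt):
    """Output s = dt * sum_k h(t_k) ell(u^k): h(t) = 1 gives a distributed
    (in time) output, while a discrete delta in h singles out one step."""
    return dt * sum(h(k * dt) * ell(traj[k]) for k in range(1, len(traj)))
```

For a quick sanity check, on the scalar test equation u' + u = 0 with u(0) = 1 this scheme reproduces the exponential decay up to an O(\Delta t) error.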
In Figure 10 we plot the lower bound of the coercivity constant of the bilinear form associated with the problem. As in the previous case, a further dual problem has to be solved in order to obtain better output evaluations and related error bounds. We show in Figure 11 the convergence of the greedy algorithm for the primal and the dual problem, respectively; with a fixed tolerance {\epsilon}_{\mathrm{tol}}^{\ast}={10}^{-2}, {N}_{\mathit{pr},\mathrm{max}}=22 and {N}_{\mathit{du},\mathrm{max}}=6 basis functions have been selected, respectively.
In Figures 12 and 13 some representative solutions for selected values of the parameters are reported, for both t=\Delta t and t=T. In particular, two different heat treatments have been investigated: a heating and a cooling process. In the first case, we have imposed a thermal flux g(t)=10t, while in the second case g(t)=-10t. More pronounced variations of temperature over the whole body can be observed for larger values of {\mu}_{1} (Biot number); moreover, the behavior of the temperature changes strongly between narrower and wider configurations.
Concerning the output (67), two cases have been taken into account: a distributed (in time) output, corresponding to h(t)=1, given by the integral of the temperature in time and space, and a concentrated (in time) output, corresponding to h(t)=\delta (t), given by the spatial integral of the temperature at each time step. In Figures 14 and 15 the RB evaluations (for N={N}_{\mathrm{max}}) of these two outputs of interest are reported, as well as the related error bounds. Higher values of the output are obtained for larger values of the two parameters; moreover, keeping the geometry fixed, variations in the output values with respect to the Biot number are of about one order of magnitude.
8.3 A transient (coupled) diffusion-transport problem around a cylinder
The problem represents the transient evolution of a concentration field near the surface of a body (a two-dimensional cylinder section) immersed in a fluid flowing through a channel. The mass (for example, of oxygen or a drug) can be released or absorbed through the body surface within the surrounding fluid. This is a well-known mass transfer problem in the design and sizing of substance diffusers used in many industrial, civil and, more recently, biomedical applications (drug and/or oxygen release, stent design); in the same way, it can be seen as a heat transfer problem through a heat exchanger [56].
The problem is described by the coupling of an unsteady mass (or heat) transfer phenomenon (or substance release) by diffusion (or conduction) into a body and by transport (or convection) phenomena inside the field where the fluid is flowing; the transport field is given, for example, by a potential solution (see, for example, [55]).
We consider the physical domain {\Omega}_{o}(\mathit{\mu}) shown in Figure 16, nondimensionalized with respect to \tilde{R}, the radius of the cylinder immersed in the fluid. Moreover, we denote by \tilde{D} the dimensional mass diffusion coefficient and by \tilde{U} a reference dimensional velocity for the transport field, and we introduce the Péclet number as \mathrm{Pe}=\tilde{U}\tilde{R}/\tilde{D}, while time is nondimensionalized by the quantity {\tilde{R}}^{2}/\tilde{D}.
In this problem the boundary segments Γ_{1}, Γ_{7} are curved (all other boundary segments are straight lines) and they represent the semicircular section of the cylinder immersed in the flow (thanks to symmetry the problem can be simplified by considering just ‘half’ configuration). The segments Γ_{1}, Γ_{7} are given by the parametrization
where for Γ_{1}, , for Γ_{7}, .
We consider here only one parameter {\mu}_{1}, the Péclet number, which is given by the ratio between the transport and diffusion terms; the parameter domain is given by \mathcal{D}=[0.1,100]. The solution is characterized by the (nondimensional) concentration u(\mathit{\mu},t)=(c-{c}_{\mathrm{init}})/{c}_{\mathrm{inlet}}, where c is the dimensional concentration, {c}_{\mathrm{init}} the initial dimensional concentration (at t=0), and {c}_{\mathrm{inlet}} the dimensional concentration imposed at the inflow (at every time step). The governing equation for u(\mathit{\mu},t) is the following time-dependent linear PDE: for t\in [0,T]
the control input g(t) is a (square-integrable) function of time t. The potential velocity field (ideal inviscid fluid) is given in polar coordinates by ({v}_{r},{v}_{\theta}), where [55]
where r=\sqrt{{x}_{1}^{2}+{x}_{2}^{2}}, {r}_{0}=\tilde{R}=1 and \theta =\arcsin ({x}_{2}/\sqrt{{x}_{1}^{2}+{x}_{2}^{2}}).
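The transport field can be evaluated from the classical potential-flow solution past a cylinder of radius {r}_{0} (see, for example, [55]); the NumPy sketch below returns it in Cartesian components, assuming the standard textbook formulas for ({v}_{r},{v}_{\theta}) (an illustrative implementation, not code from the paper).

```python
import numpy as np

def potential_flow_cylinder(x1, x2, U=1.0, r0=1.0):
    """Classical potential flow past a cylinder of radius r0 (textbook
    solution, assumed here for the elided formulas):
      v_r     =  U (1 - r0^2 / r^2) cos(theta)
      v_theta = -U (1 + r0^2 / r^2) sin(theta)
    Returned in Cartesian components, for use as the transport field."""
    r = np.hypot(x1, x2)
    theta = np.arctan2(x2, x1)
    vr = U * (1.0 - (r0 / r) ** 2) * np.cos(theta)
    vt = -U * (1.0 + (r0 / r) ** 2) * np.sin(theta)
    # rotate (v_r, v_theta) back to the Cartesian axes
    v1 = vr * np.cos(theta) - vt * np.sin(theta)
    v2 = vr * np.sin(theta) + vt * np.cos(theta)
    return v1, v2
```

On the cylinder surface (r = r0) the normal component vanishes, consistent with the zero normal velocity of an ideal inviscid fluid, while far upstream/downstream the field tends to the uniform velocity U.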
In practice, the PDE is replaced by a discrete-time (backward Euler) approximation with time steps of size \Delta t=0.01; note that the final time is T=1 and that the number of time steps is {n}_{t}=100. The spatial discretization is made by piecewise linear finite elements, whose corresponding space dimension is \mathcal{N}=13\text{,}976.
Our output of interest is the average concentration on the cylinder surface, given by
where h(t) may be a function of time t. As in the two previous cases, we deal with a non-compliant problem, for which the dual problem has to be introduced and solved. In Figure 17 we plot the lower bound of the coercivity constant of the bilinear form associated with the problem. We show in Figure 18 the convergence of the greedy algorithm for the primal and the dual problem, respectively; with a fixed tolerance {\epsilon}_{\mathrm{tol}}^{\ast}={10}^{-2}, {N}_{\mathit{pr},\mathrm{max}}=17 and {N}_{\mathit{du},\mathrm{max}}=17 basis functions have been selected, respectively.
In Figures 19 and 20 some representative solutions at time t=T, for selected values of the parameter, show the physical convective phenomena at different Péclet numbers, underlining the different nature of the problem: from a diffusion-dominated (lower Péclet number) to a transport-dominated (higher Péclet number) problem. Two different cases have been analyzed, concerning the mass transfer through the cylindrical body: in the first case, we have imposed a mass flux g(t)=10t (substance release by the cylinder), while in the second case g(t)=-10t (substance absorption through the cylinder). In both cases, higher values of concentration and higher gradients are obtained for larger Péclet numbers: absorption or release are more effective when transport dominates over diffusion.
In Figure 21 the behavior of the (RB evaluation of the) output (67) is shown, as well as the related error bounds (magnified by a factor 10), in the case of substance release (Figure 19); we have considered a concentrated (in time) output (corresponding to h(t)=\delta (t)), given by the (spatial) average of the concentration on the cylinder at each time step. Consistently with the behavior of the solutions, we obtain higher values of the output as {\mu}_{1} increases.
8.4 Computational aspects
We conclude this section by discussing some computational aspects related to the three numerical examples presented above, and showing how reduced basis techniques allow a substantial reduction of computational work. We recall that, in order to obtain a rapid and reliable procedure, we are interested in (i) the minimization of the (marginal) cost associated with each input-output evaluation, as well as (ii) the possibility to provide a certification of each reduced approximation, both with respect to the corresponding finite element approximation.
All the details are reported in Table 1. Compared to the corresponding FE approximation, RB Online evaluations of field variables and outputs enable a computational speedup, defined as \mathcal{S}={t}_{\mathit{FE}}/{t}_{\mathit{RB}}^{\mathit{online}}, of about two orders of magnitude. In particular, the average time over 2,500 Online output evaluations is 0.107 s for the first Couette-Graetz problem, 0.198 s for the second heat treatment problem, and 0.158 s for the third diffusion-transport problem. Note that the times related to the RB Online evaluation also take into account the a posteriori error estimation for the solution and the output. This great computational advantage is basically due to the reduction in linear system dimensions, and ultimately to the huge dimensional reduction, N vs. \mathcal{N}, between the RB spaces and the corresponding FE spaces. For the three cases considered, this ratio goes from 260 (first case) to 820 (third case). Thanks to the Gram-Schmidt orthonormalization, the condition number of the RB matrices is limited to \mathcal{O}({10}^{2}), while without this procedure it would reach \mathcal{O}({10}^{14}).
Finally, we also take into account the time spent for the Offline construction and storage; this allows us to determine the break-even point, given by {\mathcal{Q}}_{\mathit{BE}}={t}_{\mathit{RB}}^{\mathit{offline}}/{t}_{\mathit{FE}}. In particular, we obtain a break-even point of \mathcal{O}({10}^{2}) in all three cases, which can be considered acceptable whenever one is interested either in the real-time context or in the many-query limit. The performances described in Table 1 remain valid even if we consider a higher number of parameters (for example, with P between 10 and 25, see [57]).
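The two figures of merit used above can be computed directly from measured timings; the numbers in the example below are hypothetical placeholders, not the values of Table 1.

```python
def rb_performance(t_fe, t_rb_online, t_rb_offline):
    """Speedup S = t_FE / t_RB^online and break-even point
    Q_BE = t_RB^offline / t_FE: the number of queries beyond which
    the Offline investment pays off."""
    return t_fe / t_rb_online, t_rb_offline / t_fe

# hypothetical timings: t_FE = 20 s per FE solve, 0.1 s per Online
# evaluation, 4000 s for the whole Offline stage
S, Q_be = rb_performance(20.0, 0.1, 4000.0)
# speedup of two orders of magnitude, break-even after a few hundred queries
```

Beyond {\mathcal{Q}}_{\mathit{BE}} queries, the cumulative cost of the RB strategy (Offline plus Online) drops below that of repeated FE solves, which is the many-query regime targeted by the methodology.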
9 Perspectives and ongoing research
We end this review paper dedicated to applications of the reduced basis method in an industrial framework by putting the current methodology development in perspective.
9.1 Extension to complex problems
Growing research areas are devoted to the following kind of problems.
(i) Nonlinear problems: the reduced basis framework and related model-reduction approaches are well developed for linear parametrized partial differential equations. They can be effectively applied also to nonlinear problems [37, 58, 59], even if this in turn introduces both numerical and theoretical complications, and many open research issues still have to be faced. Classical problems arising in the applied sciences are, for example, the Navier-Stokes/Boussinesq and Burgers’ equations in fluid mechanics [16–18, 47, 48, 60, 61] and nonlinear elasticity in solid mechanics.
First of all, the computational complexity increases at both the Offline and the Online stage: we need to solve nonlinear problems of large dimension O(\mathcal{N}) during the RB space generation, as well as nonlinear problems of reduced dimension O(N) for each Online evaluation; in both cases, classical iterative procedures, such as fixed-point or Newton-type algorithms, can be used. A posteriori error bounds introduced for linear problems can be effectively extended to steady nonlinear problems (see, for example, [62] for the steady incompressible Navier-Stokes equations). However, the most important challenge concerns the reliability and/or the certification of the methodology for unsteady, parabolic, problems [23, 63]: in these cases exponential instability seriously compromises a priori and a posteriori error estimates, yielding bounds which are limited to modest (final) times and modest Reynolds numbers. More precisely, stability considerations limit the product of the final time and the Reynolds number [64].
(ii) Problems dealing with (homogeneous or even heterogeneous) couplings in a multiphysics setting and based on domain decomposition techniques: a domain decomposition approach [29, 65] combined with the reduced basis method has been successfully applied in [27, 28, 66], and further extensions are foreseen [67]. A coupled multiphysics setting has been proposed for simple fluid-structure interaction problems [68, 69].
(iii) Optimal control [70–73], shape optimization, inverse and design problems [74, 75], as many-query applications, have been and are subject to extensive research, which is of interest also in an industrial context. One of the main goals of this field is the study of efficient techniques to deal with geometrical parameters, in order to keep the number of parameters reasonable while guaranteeing enough versatility in the parametrization to treat and represent complex shapes. Recent works [76–79] deal with free-form deformation techniques combined with empirical interpolation in biomedical and aerodynamic problems.
(iv) Another growing field is related to the development and application of the reduced basis methodology to the quantification of uncertainty [24, 80, 81].
9.2 Efficiency improvement in RB methodology
Efforts are also aimed at improving the computational performance in three-dimensional settings, at a more efficient implementation of the Offline ‘construction stage’ (for example, on high-performance parallel supercomputers), and at increasingly attractive real-time applications such as the ones currently available on smartphones [82].
Improvements in the efficiency of parameter space exploration are also crucial; see, for example, modified greedy algorithms and combined adaptive techniques [83], such as the ‘hp’ RB method [84, 85]. At the same time, (i) improvements in the a posteriori error bounds for non-affine problems [38]; (ii) reduction of the complexity of the parametrized operators and more efficient estimation of lower bounds of stability factors (that is, coercivity or inf-sup constants) for complex non-affine problems [86]; and (iii) more specialized RB spaces [87] are under investigation.
Footnotes
^{1}This assumption will greatly simplify the presentation while still exercising most of the important RB concepts; furthermore, many important engineering problems are in fact ‘compliant’.
^{2}These regions can represent different material properties, but they can also be used for algorithmic purposes to ensure wellbehaved mappings.
^{3}Here, for 1\le l\le {L}_{\mathrm{dom}}, {\mathcal{K}}_{o,l}:\mathcal{D}\to {\mathbf{R}}^{3\times 3} is a given SPD matrix (which in turn ensures coercivity of the bilinear form): the upper 2\times 2 principal submatrix of {\mathcal{K}}_{o,l} is the usual tensor conductivity/diffusivity; the (3,3) element of {\mathcal{K}}_{o,l} represents the identity operator (‘mass matrix’) and is equal to {\mathcal{M}}_{o,l}; and the (3,1), (3,2) (and (1,3), (2,3)) elements of {\mathcal{K}}_{o,l}  which we can choose here as zero thanks to the current restriction to symmetric operators  permit first derivative terms to take into consideration transport/convective terms.
^{4}Under the coercivity and symmetry assumptions, the bilinear form a(\cdot ,\cdot ;\mathit{\mu}) defines an (energy) scalar product given by {((w,v))}_{\mathit{\mu}}:=a(w,v;\mathit{\mu}) \forall w,v\in X; the induced energy norm is given by {\parallel v\parallel}_{\mathit{\mu}}:={((v,v))}_{\mathit{\mu}}^{1/2}.
^{5}Clearly the accuracy and cost of the a posteriori error estimator {\Delta}_{N}(\mathit{\mu}) are crucial to the success of the greedy algorithm.
^{6}As we assumed that the bilinear form is coercive and the FE approximation spaces are conforming, it follows that {\alpha}^{\mathcal{N}}(\mathit{\mu})\ge \alpha (\mathit{\mu})\ge {\alpha}_{0}>0, \forall \mathit{\mu}\in \mathcal{D}.
^{7}Similar results can be obtained for the a posteriori error bounds in the X norm.
^{8}It thus follows that the a posteriori error estimation contribution to the cost of the greedy algorithm of Section 5 is O(Q{N}_{\mathrm{max}}{\mathcal{N}}^{\bullet})+O({Q}^{2}{N}_{\mathrm{max}}^{2}\mathcal{N})+O({n}_{\mathrm{train}}{Q}^{2}{N}_{\mathrm{max}}^{3}): we may thus choose \mathcal{N} and {n}_{\mathrm{train}} independently (and large).
^{9}Typical output functionals correspond to the ‘integral’ of the field u(\mathit{\mu}) over an area or a line (in particular, a boundary segment) in \overline{\Omega}. However, by appropriate lifting techniques, ‘integrals’ of the flux over boundary segments can also be considered.
^{10}We pursue here just a primal approximation; however, we can readily extend the approach to a primal-dual formulation, as described for coercive problems in Section 7.1.
^{11}Throughout this section, {\Omega}_{o}(\mathit{\mu}) denotes the original (physical) domain, whose generic point is indicated as \mathbf{x}=({x}_{1},{x}_{2}); for the sake of simplicity, we formulate all the problems in the original domain, but remove all the subscripts _{o}. Moreover, a tilde \tilde{(\cdot)} denotes dimensional quantities, while the absence of a tilde signals a nondimensional quantity.
References
Rozza G, Huynh P, Patera A: Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations. Arch. Comput. Methods Eng. 2008, 15: 229–275. 10.1007/s1183100890199
Patera, A., Rozza, G.: Reduced Basis Approximation and a posteriori Error Estimation for Parametrized Partial Differential Equations. Version 1.0, MIT. http://augustine.mit.edu (2006)
Prud’homme C, Rovas D, Veroy K, Maday Y, Patera A, Turinici G: Reliable realtime solution of parametrized partial differential equations: reducedbasis output bounds methods. J. Fluids Eng. 2002, 124: 70–80. 10.1115/1.1448332
Porsching TA: Estimation of the error in the reduced basis method solution of nonlinear equations. Math. Comput. 1985,45(172):487–496. 10.1090/S00255718198508049370
Ito K, Ravindran S: A reducedorder method for simulation and control of fluid flow. J. Comput. Phys. 1998,143(2):403–425. 10.1006/jcph.1998.5943
Almroth BO, Stern P, Brogan FA: Automatic choice of global shape functions in structural analysis. AIAA J. 1978, 16: 525–528. 10.2514/3.7539
Noor A: Recent advances in reduction methods for nonlinear problems. Comput. Struct. 1981, 13: 31–44. 10.1016/00457949(81)901061
Noor A: On making large nonlinear problems small. Comput. Methods Appl. Mech. Eng. 1982, 34: 955–985. 10.1016/00457825(82)900962
Fink JP, Rheinboldt WC: On the error behavior of the reduced basis technique for nonlinear finite element approximations. Z. Angew. Math. Mech. 1983, 63: 21–28. 10.1002/zamm.19830630105
Porsching TA, Lee MYL: The reducedbasis method for initial value problems. SIAM J. Numer. Anal. 1987, 24: 1277–1287. 10.1137/0724083
Barrett A, Reddien G: On the reduced basis method. Z. Angew. Math. Mech. 1995,75(7):543–549. 10.1002/zamm.19950750709
Rheinboldt WC: On the theory and error estimation of the reduced basis method for multiparameter problems. Nonlinear Anal. 1993,21(11):849–858. 10.1016/0362546X(93)900503
Gunzburger, M.D.: Finite Element Methods for Viscous Incompressible Flows. Academic Press, (1989)
Ito K, Ravindran S: A reduced basis method for control problems governed by PDEs. In Control and Estimation of Distributed Parameter System Edited by: Desch W., Kappel F., Kunisch K.. 1998, 153–168.
Ito K, Ravindran S: Reduced basis method for optimal control of unsteady viscous flows. International Journal of Computational Fluid Dynamics 2001,15(2):97–113. 10.1080/10618560108970021
Peterson J: The reduced basis method for incompressible viscous flow calculations. SIAM J. Sci. Stat. Comput. 1989,10(4):777–786. 10.1137/0910047
Nguyen, N.C., Veroy, K., Patera, A.T.: Certified realtime solution of parametrized partial differential equations. In: Yip, S. (ed.) Handbook of Materials Modeling, pp. 1523–1558. Springer (2005)
Veroy, K., Prud’homme, C., Rovas, D.V., Patera, A.: A posteriori error bounds for reduced basis approximation of parametrized noncoercive and nonlinear elliptic partial differential equations. In: Proceedings of the 16th AIAA Computational Fluid Dynamics Conference (2003). Paper 2003–3847
Maday Y, Patera A, Turinici G: A priori convergence theory for reducedbasis approximations of singleparameter elliptic partial differential equations. J. Sci. Comput. 2002,17(1–4):437–446.
Buffa, A., Maday, Y., Patera, A., Prud’homme, C., Turinici, G.: A priori convergence of the greedy algorithm for the parametrized reduced basis. M2AN Math. Model. Numer. Anal. (2009), submitted
Holmes P, Lumley J, Berkooz G: Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge University Press, UK; 1996.
Haasdonk B, Ohlberger M: Reduced basis method for finite volume approximations of parametrized linear evolution equations. M2AN Math. Model. Numer. Anal. 2008,42(2):277–302. 10.1051/m2an:2008001
Nguyen N, Rozza G, Patera A: Reduced basis approximation and a posteriori error estimation for the timedependent viscous Burgers’ equation. Calcolo 2009,46(3):157–185. 10.1007/s100920090005x
Nguyen N, Rozza G, Huynh P, Patera A: Reduced basis approximation and a posteriori error estimation for parametrized parabolic PDEs; application to real-time Bayesian parameter estimation. In Large-Scale Inverse Problems and Quantification of Uncertainty. Edited by: Biegler L., Biros G., Ghattas O., Heinkenschloss M., Keyes D., Mallick B., Marzouk Y., Tenorio L., van Bloemen Waanders B., Willcox K. John Wiley & Sons, Ltd, UK; 2010:151–178. Chap. 8
Balmes E: Parametric families of reduced finite element models: theory and applications. Mech. Syst. Signal Process. 1996,10(4):381–394. 10.1006/mssp.1996.0027
Barrault M, Maday Y, Nguyen N, Patera A: An ‘empirical interpolation’ method: application to efficient reduced-basis discretization of partial differential equations. C.R. Math. Acad. Sci. Paris, Series I 2004,339(9):667–672. 10.1016/j.crma.2004.08.006
Løvgren AE, Maday Y, Rønquist EM: A reduced basis element method for the steady Stokes problem. M2AN Math. Model. Numer. Anal. 2006,40(3):529–552. 10.1051/m2an:2006021
Løvgren AE, Maday Y, Rønquist EM: The Reduced Basis Element Method for Fluid Flows. In Analysis and Simulation of Fluid Dynamics. Advances in Mathematical Fluid Dynamics. Birkhäuser, Boston; 2007:129–154.
Quarteroni, A., Valli, A.: Numerical Approximation of Partial Differential Equations. Springer-Verlag (1994)
Quarteroni, A.: Numerical Models for Differential Problems. Series MS&A, vol. 2. Springer (2009)
Grepl M, Patera AT: A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. M2AN Math. Model. Numer. Anal. 2005, 39: 157–181. 10.1051/m2an:2005006
Binev, P., Cohen, A., Dahmen, W., DeVore, R., Petrova, G., Wojtaszczyk, P.: Convergence rates for greedy algorithms in reduced basis methods. (2010), in preparation
Huynh P, Rozza G, Sen S, Patera A: A successive constraint linear optimization method for lower bounds of parametric coercivity and inf-sup stability constants. C. R. Acad. Sci. Paris, Series I 2007, 345: 473–478.
Huynh P, Knezevic D, Chen Y, Hesthaven J, Patera A: A natural-norm successive constraint method for inf-sup lower bounds. Comput. Methods Appl. Mech. Eng. 2010,199(29–32):1963–1975. 10.1016/j.cma.2010.02.011
Pierce N, Giles M: Adjoint recovery of superconvergent functionals from PDE approximations. SIAM Rev. 2000,42(2):247–264. 10.1137/S0036144598349423
Rozza G: Reduced basis approximation and error bounds for potential flows in parametrized geometries. Commun. Comput. Phys. 2011, 9: 1–48.
Grepl M, Maday Y, Nguyen N, Patera A: Efficient reduced-basis treatment of nonaffine and nonlinear partial differential equations. ESAIM Math. Modelling Numer. Anal. 2007,41(3):575–605. 10.1051/m2an:2007031
Eftang J, Grepl M, Patera A: A posteriori error bounds for the empirical interpolation method. C.R. Math. Acad. Sci. Paris, Series I 2010,348(9–10):575–579. 10.1016/j.crma.2010.03.004