Stability-preserving model order reduction for linear stochastic Galerkin systems
Journal of Mathematics in Industry, volume 9, Article number: 10 (2019)
Abstract
Mathematical modeling often yields linear dynamical systems in science and engineering. We change physical parameters of the system into random variables to perform an uncertainty quantification. The stochastic Galerkin method yields a larger linear dynamical system, whose solution represents an approximation of random processes. A model order reduction (MOR) of the Galerkin system is advantageous due to the high dimensionality. However, asymptotic stability may be lost in some MOR techniques. In Galerkin-type MOR methods, the stability can be guaranteed by a transformation to a dissipative form. Either the original dynamical system or the stochastic Galerkin system can be transformed. We investigate the two variants of this stability-preserving approach. Both techniques are feasible, while featuring different properties in numerical methods. Results of numerical computations are demonstrated for two test examples modeling a mechanical application and an electric circuit, respectively.
Introduction
Numerical simulation of mathematical models is a central task in scientific computing. We consider linear dynamical systems, which play an important role in mechanics and electrical engineering, for example. Furthermore, uncertainty quantification becomes more and more relevant in many fields of application, see [37], for instance. A common approach is to replace uncertain parameters by random variables, see [36, 39]. Statistics of the stochastic model can be computed by sampling methods or quadrature rules. Alternatively, the stochastic Galerkin method changes the random-dependent linear dynamical system into a larger deterministic linear dynamical system.
The dimension of the stochastic Galerkin system becomes huge in the case of large numbers of random variables. Methods of model order reduction (MOR) are able to decrease the complexity. Transient solutions of a reduced system allow for an efficient numerical simulation. Several MOR methods are available for general linear dynamical systems, see [1, 3, 4, 32]. MOR of linear stochastic Galerkin systems was also examined in several previous works [9, 18, 25, 26, 31, 40]. MOR of nonlinear stochastic Galerkin systems was considered in [28].
However, even though the linear stochastic Galerkin system is asymptotically stable, the reduced Galerkin system often loses this stability in some MOR techniques. We investigate stability-preserving strategies in the case of Galerkin-type projection-based MOR like the Arnoldi method or proper orthogonal decomposition, for example. Galerkin-type MOR can be applied to any linear dynamical system (not only stochastic Galerkin systems). A dissipativity property guarantees the preservation of stability. If a general linear dynamical system does not satisfy the dissipativity property, then it can be transformed into a dissipative structure, see [7, 23, 27]. The crucial step in identifying a transformation is the solution of a Lyapunov equation. Direct methods, see [11], or approximate methods, see [21, 22, 34, 38], yield the numerical solutions of Lyapunov equations.
We examine the stability-preserving approach in the case of linear stochastic Galerkin systems consisting of ordinary differential equations. Several variants are feasible. The high-dimensional Galerkin-projected system is transformed or, vice versa, the original systems are transformed followed by a Galerkin projection. We analyze the two strategies and another variant.
In addition, network approaches produce models consisting of differential-algebraic equations in industrial applications. Thus we extend the stability-preserving techniques to this class of problems. In this case, the Lyapunov equations have no solution. Therefore, we use a regularization technique, which was also employed in [19].
We apply the analyzed techniques to mathematical models of two test examples: a mass-spring-damper system and an electric circuit of a band-pass filter.
Stability preservation in reduction
We review a concept for stability preservation in Galerkin-type projection-based MOR for general linear dynamical systems.
Linear dynamical systems
We consider linear dynamical systems in the form
\( E \dot{x}(t) = A x(t) + B u(t) , \qquad y(t) = C x(t) , \)  (1)
with constant matrices \(A,E \in \mathbb {R}^{n \times n}\), \(B \in \mathbb {R}^{n \times n_{\mathrm{in}}}\), and \(C \in \mathbb {R}^{n_{\mathrm{out}} \times n}\). The state variables or inner variables are \(x: [0,t_{\mathrm{end}}] \rightarrow \mathbb {R}^{n}\). Inputs \(u: [0,t_{\mathrm{end}}] \rightarrow \mathbb {R}^{n_{\mathrm{in}}}\) are supplied to the system. The outputs \(y: [0,t_{\mathrm{end}}] \rightarrow \mathbb {R}^{n_{\mathrm{out}}}\) are defined as quantities of interest (QoI). Initial value problems are given by \(x(0) = x_{0}\).
If the mass matrix E is nonsingular, then the system (1) consists of ordinary differential equations (ODEs). If the mass matrix E is singular, then the system (1) represents differentialalgebraic equations (DAEs). Furthermore, we assume that the system satisfies the following stability condition.
We specify some common notions.
Definition 1
Given a matrix pencil \((E,A)\) with \(E,A \in \mathbb {R}^{n \times n}\), the set of eigenvalues is \(\varLambda = \{ \lambda \in \mathbb {C}: \det ( \lambda E - A) = 0 \}\) and the spectral abscissa reads as \(\alpha (E,A) = \max \{ \operatorname{Re}(\lambda ) : \lambda \in \varLambda \}\). The spectral abscissa of a single matrix A is the spectral abscissa of the matrix pencil \((I,A)\) with the identity matrix \(I \in \mathbb {R}^{n \times n}\).
Definition 2
A matrix pencil \((E,A)\) with \(E,A \in \mathbb {R}^{n \times n}\) is called regular, if there is (at least one) \(\lambda \in \mathbb {C}\) such that \(\det ( \lambda E - A) \neq 0\).
Definition 3
A linear dynamical system (1) is called asymptotically stable, if the involved matrix pencil \((E,A)\) has a spectral abscissa satisfying \(\alpha (E,A) < 0\).
The asymptotic stability implies that the generalized eigenvalue problem has only a finite number of eigenvalues, which all exhibit a negative real part. We assume that the system (1) satisfies this stability condition. In the case of ODEs, the matrix pencil is always regular, even if the system is unstable. In the case of DAEs, the asymptotic stability implies that the matrix pencil \((E,A)\) is regular.
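The stability condition of Definition 3 can be checked numerically with a generalized eigenvalue solver. The following sketch (an illustration, not part of the paper; the matrices are hypothetical toy examples) computes the spectral abscissa of a pencil \((E,A)\) with SciPy, discarding the infinite eigenvalues that arise for a singular mass matrix in the DAE case:

```python
import numpy as np
from scipy.linalg import eig

def spectral_abscissa(E, A):
    # Generalized eigenvalues lambda with det(lambda * E - A) = 0;
    # scipy solves A v = lambda E v, infinite eigenvalues appear for singular E
    lam = eig(A, E, right=False)
    lam = lam[np.isfinite(lam)]
    return lam.real.max()

# ODE example: pencil eigenvalues -1 and -3, hence asymptotically stable
E = np.diag([1.0, 2.0]); A = np.diag([-1.0, -6.0])
print(spectral_abscissa(E, A))          # approximately -1.0

# DAE example: singular mass matrix, single finite eigenvalue -2
E_dae = np.diag([1.0, 0.0]); A_dae = np.diag([-2.0, 1.0])
print(spectral_abscissa(E_dae, A_dae))  # approximately -2.0
```

A negative return value corresponds to asymptotic stability of the pencil in the sense above.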
Transfer functions and Hardy norms
The input–output behavior of a linear dynamical system (1) can be described in the frequency domain, see [1] for ODEs and [6] for DAEs.
Definition 4
The transfer function of a linear dynamical system (1) is the mapping \(H : \mathbb {C}\setminus \varLambda \rightarrow \mathbb {C}^{n_{ \mathrm{out}} \times n_{\mathrm{in}}}\) with
\( H(s) = C ( s E - A )^{-1} B , \)  (2)
where Λ represents the set of eigenvalues from Definition 1.
The transfer function is always a rational function, whose poles are the eigenvalues. The regularity of the matrix pencil guarantees a finite set of poles. The magnitude of a transfer function can be measured by Hardy norms, see [1].
Definition 5
The \(\mathcal {H}_{2}\)-norm of a transfer function reads as
\( \| H \|_{\mathcal {H}_{2}} = \sqrt{ \frac{1}{2 \pi } \int_{-\infty }^{+\infty } \| H(\mathrm{i} \omega ) \|_{\mathrm{F}}^{2} \,\mathrm{d}\omega } , \)  (3)
including the Frobenius matrix norm \(\| \cdot \|_{\mathrm{F}}\), the angular frequency ω, and \(\mathrm{i} = \sqrt{-1}\).
Since we assume asymptotically stable linear dynamical systems (1), the transfer function is defined on the complete imaginary axis. In the case of ODEs, the \(\mathcal {H}_{2}\)-norm is always finite. In the case of DAEs, the existence of the \(\mathcal {H}_{2}\)-norm is not guaranteed. Nevertheless, the \(\mathcal {H}_{2}\)-norm is quite often finite for DAEs of index 1 or 2. The norm (3) may also exist for unstable systems, if there is no pole on the imaginary axis.
Projectionbased model order reduction
Projection matrices \(V,W \in \mathbb {R}^{n \times r}\) of full rank are specified with \(r \ll n\). Concerning the full-order model (FOM) in (1), the reduced-order model (ROM) reads as
\( \bar{E} \dot{\bar{x}}(t) = \bar{A} \bar{x}(t) + \bar{B} u(t) , \qquad \bar{y}(t) = \bar{C} \bar{x}(t) \)  (4)
with state variables or inner variables \(\bar{x} : [0,t_{\mathrm{end}}] \rightarrow \mathbb {R}^{r}\). Initial values \(\bar{x}(0) = \bar{x}_{0}\) are supposed. We obtain the matrices via
\( \bar{A} = W^{\top } A V , \quad \bar{E} = W^{\top } E V , \quad \bar{B} = W^{\top } B , \quad \bar{C} = C V , \)  (5)
which is also called a Petrov–Galerkin-type MOR. A Galerkin-type projection-based MOR is characterized by \(W=V\), where just one projection matrix has to be determined. Important examples are the one-sided Arnoldi method and the proper orthogonal decomposition (POD), see [1].
Each linear dynamical system is described by a transfer function (2) in the frequency domain. The difference between the transfer functions H of the FOM and H̄ of the ROM quantifies the error of the MOR. Hardy norms like the \(\mathcal {H}_{2}\)-norm from Definition 5, for example, can be applied to their transfer functions. However, a small error in the frequency domain implies a small error in the time domain only if both systems are asymptotically stable. It holds that
\( \| y - \bar{y} \|_{\mathcal{L}^{\infty }[0,\infty )} \le \| H - \bar{H} \|_{\mathcal {H}_{2}} \, \| u \|_{\mathcal{L}^{2}[0,\infty )} \)  (6)
with the maximum vector norm \(\| \cdot \|_{\infty }\) and the \(\mathcal{L}^{2}[0,\infty )\)-norm in time provided that all initial values are zero, see [2]. The \(\mathcal {H}_{2}\)-norm of Definition 5 is a strong measure. Hence there are neither a priori error bounds nor cheap a posteriori error bounds available in MOR techniques. Just an approximation of the \(\mathcal {H}_{2}\)-norm in (6) can be computed a posteriori, where the computational effort is dominated by evaluations of the transfer function in the FOM. The balanced truncation method yields an a priori error bound in the \(\mathcal {H}_{\infty }\)-norm, see [1, p. 212].
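For illustration, the \(\mathcal {H}_{2}\)-norm (3) can be approximated by numerical quadrature along the imaginary axis. The sketch below is an assumption-laden illustration, not a method from the paper: it substitutes \(\omega = \tan \theta \) to map the infinite integration range onto a bounded interval and applies a midpoint rule. The scalar test system is hypothetical, with known norm \(1/\sqrt{2}\).

```python
import numpy as np

def h2_norm(E, A, B, C, nodes=20_000):
    # Midpoint rule after the substitution omega = tan(theta), which maps
    # the infinite range of (3) onto the bounded interval (-pi/2, pi/2)
    theta = (np.arange(nodes) + 0.5) * np.pi / nodes - np.pi / 2
    dtheta = np.pi / nodes
    total = 0.0
    for t in theta:
        w = np.tan(t)
        H = C @ np.linalg.solve(1j * w * E - A, B)   # transfer function (2)
        total += np.linalg.norm(H, 'fro') ** 2 / np.cos(t) ** 2 * dtheta
    return np.sqrt(total / (2.0 * np.pi))

# Hypothetical scalar test system x' = -x + u, y = x with H(s) = 1/(s + 1);
# its H2 norm equals 1/sqrt(2) ~ 0.7071
E = np.array([[1.0]]); A = np.array([[-1.0]])
B = np.array([[1.0]]); C = np.array([[1.0]])
print(h2_norm(E, A, B, C))
```

For strictly proper, asymptotically stable ODE systems the transformed integrand stays bounded, so the midpoint rule converges; the approach is only meant as a rough a posteriori check.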
Dissipative systems
In balanced truncation, see [1], the ROM (4) is always asymptotically stable provided that the system (1) is asymptotically stable. Yet the asymptotic stability may be lost in the ROM (4) within other MOR methods like Krylov subspace techniques, see [10], and POD, for example.
Using Galerkintype MOR, the stability is guaranteed for some classes of linear dynamical systems. In the case of ODEs, we define the following type of system.
Definition 6
A linear dynamical system (1) is called dissipative, if
1. E is symmetric as well as positive definite, and
2. \(A+A^{\top }\) is negative definite.
The above condition represents a dissipativity of the matrix A as shown in [20]. Other definitions of dissipative systems are used in the literature. We prove a property, which was shown for an explicit system of ODEs in [23].
Theorem 1
If the linear dynamical system (1) is dissipative with respect to Definition 6, then it is also asymptotically stable as in Definition 3.
Proof
Let \(E=LL^{\top }\) be the Cholesky decomposition of the mass matrix. The system (1) is equivalent to the explicit system of ODEs
\( \dot{z}(t) = \tilde{A} z(t) + L^{-1} B u(t) \quad \text{with } \tilde{A} = L^{-1} A L^{-\top } \)  (7)
with \(z = L^{\top }x\) and \(y = C L^{-\top } z\). The symmetric part of the involved system matrix Ã reads as
\( S = \tilde{A} + \tilde{A}^{\top } = L^{-1} \bigl( A + A^{\top } \bigr) L^{-\top } . \)
It holds that \(v^{*} S v = (L^{-\top }v)^{*} (A+A^{\top }) (L^{-\top }v)\) for any \(v \in \mathbb {C}^{n} \setminus \{ 0 \}\). Thus S is negative definite. Let \(\lambda \in \mathbb {C}\) be an eigenvalue of Ã and v be an associated eigenvector satisfying \(v^{*} v = 1\). It follows that
\( 2 \operatorname{Re}(\lambda ) = \lambda + \bar{\lambda } = v^{*} \tilde{A} v + \bigl( v^{*} \tilde{A} v \bigr)^{*} = v^{*} \bigl( \tilde{A} + \tilde{A}^{\top } \bigr) v = v^{*} S v < 0 . \)
Thus the systems (7) and (1) are asymptotically stable. □
In contrast, the asymptotic stability does not imply the dissipativity of Definition 6, even if the mass matrix E is symmetric and positive definite. An example will be presented in Sect. 6.1. Now we consider Galerkintype MOR.
Theorem 2
If the linear dynamical system (1) is dissipative, then a Galerkintype MOR yields a dissipative reduced system (4). Hence the reduced system is asymptotically stable.
The proof can be found in [27], for example.
Transformations
The asymptotic stability of a linear dynamical system is invariant with respect to basis transformations. In contrast, if a system of ODEs is not dissipative, then it can be converted to an equivalent dissipative form by a basis transformation in the state space, see [23]. Alternatively, a basis transformation is feasible in the image space only, see [7]. We require a symmetric positive definite solution \(M \in \mathbb {R}^{n \times n}\) of the Lyapunov inequality
\( A^{\top } M E + E^{\top } M A < 0 , \)  (8)
which means that the matrix on the left-hand side of (8) is negative definite. If the mass matrix E is nonsingular and the matrix pencil \((E,A)\) satisfies the Definition 3 of asymptotic stability, then an infinite set of solutions M exists.
We change the system of ODEs (1) into the equivalent system
\( E^{\top } M E \, \dot{x}(t) = E^{\top } M A \, x(t) + E^{\top } M B \, u(t) , \qquad y(t) = C x(t) . \)  (9)
The transformed system exhibits the desired dissipativity property.
Theorem 3
If the asymptotically stable linear dynamical system (1) has a nonsingular mass matrix E, then the transformed system (9) with M satisfying (8) is dissipative in view of Definition 6.
We use a Galerkin-type MOR with a projection matrix V for the transformed system (9). This approach can be written as a Petrov–Galerkin-type MOR applied to the original system (1) with matrices (5) and the projection matrix
\( W = M E V . \)  (10)
Thus we do not need to calculate the transformed system (9) explicitly. Instead, we compute the projection matrix (10). However, the original Galerkin-type MOR of the system (1) is not equivalent to the MOR of the system (9).
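A minimal numerical sketch of this stability-preserving reduction, under illustrative assumptions (random toy matrices, \(F = I_{n}\), an arbitrary orthonormal V): the Lyapunov equation for M is rewritten as a standard Lyapunov equation and solved with SciPy, and the ROM built with \(W = M E V\) is checked for asymptotic stability.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eig

rng = np.random.default_rng(0)
n, r = 12, 4

# Hypothetical asymptotically stable full-order system (nonsymmetric E,
# hence not dissipative in the sense of Definition 6)
A = -5.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))
E = np.eye(n) + 0.01 * rng.standard_normal((n, n))

# Solve A^T M E + E^T M A = -I_n by rewriting it as the standard Lyapunov
# equation a M + M a^T = q with a = E^{-T} A^T and q = -E^{-T} E^{-1}
Einv = np.linalg.inv(E)
M = solve_continuous_lyapunov(Einv.T @ A.T, -Einv.T @ Einv)

# Galerkin reduction of the transformed dissipative system equals a
# Petrov-Galerkin reduction of the original system with W = M E V
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
W = M @ E @ V
Abar, Ebar = W.T @ A @ V, W.T @ E @ V

print(max(eig(Abar, Ebar, right=False).real))   # negative: stable ROM
```

Here the reduced symmetric part equals \(V^{\top }(A^{\top } M E + E^{\top } M A)V = -V^{\top }V\), which is negative definite for any full-rank V, so the ROM is stable regardless of how V was chosen.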
Numerical solution of Lyapunov inequality
We solve the Lyapunov inequality (8) using a Lyapunov equation
\( A^{\top } M E + E^{\top } M A + F = 0 , \)  (11)
including a predetermined symmetric positive definite matrix \(F \in \mathbb {R}^{n \times n}\). This matrix represents a degree of freedom, because any choice yields a symmetric positive definite solution of the Lyapunov inequality. A simple admissible choice is the identity matrix \(I_{n} \in \mathbb {R}^{n \times n}\). Moreover, we do not need to solve the Lyapunov equation (11) with a high accuracy, because a rough approximation M̃ often still satisfies the Lyapunov inequality (8). In [29], it was shown that any approximation M̃ of the exact solution M of the Lyapunov equation (11) for \(F=I_{n}\) with the property
in some subordinate matrix norm also solves the Lyapunov inequality (8). The condition (12) is just sufficient and not necessary. However, approximations M̃ satisfying (12) may have a relative error of up to 100%, see [29], which motivates that rough estimates can solve the problem.
There are direct methods to compute a solution M of (11) or a symmetric decomposition \(M=L L^{\top }\), see [11, 21]. Their computational effort is typically \(\mathcal{O}(n^{3})\). In the high-dimensional case, we have to use approximate methods to decrease the computational work. The following techniques are available:
(i) projection methods (Krylov subspace techniques, POD, etc.), see [13, 38],
(ii) alternating direction implicit (ADI) iteration, see [15, 22],
(iii) frequency domain integral method, see [29],
and others. In the cases (i) and (ii), the methods yield an approximation \(\widetilde{M} = Z Z^{\top }\) with a low-rank factor \(Z \in \mathbb {R}^{n \times k}\) (\(k \ll n\)). Thus the transformation is given by a singular matrix M̃. It follows that the mass matrix Ē of the reduced system (4) may become singular or ill-conditioned, as shown in [27]. In contrast, the method (iii) from [29] computes the projection matrix (10), where the underlying approximation M̃ is always nonsingular. However, the matrix M̃ itself is never computed, only matrix–matrix products with this approximation. In the frequency domain integral approach, the projection matrix V has to be determined from the original linear dynamical system (1).
Stochastic Galerkin systems
We illustrate the concept of the stochastic Galerkin method and define our problem under investigation.
Random linear dynamical systems
We include parameters in the linear dynamical systems and obtain
\( E(\mu ) \, \dot{x}(t,\mu ) = A(\mu ) \, x(t,\mu ) + B(\mu ) \, u(t) , \qquad y(t,\mu ) = C(\mu ) \, x(t,\mu ) . \)  (13)
The matrices \(A,E \in \mathbb {R}^{n \times n}\), \(B \in \mathbb {R}^{n \times n_{ \mathrm{in}}}\), and \(C \in \mathbb {R}^{n_{\mathrm{out}} \times n}\) depend on parameters \(\mu \in \mathcal{M} \subseteq \mathbb {R}^{q}\). We assume that the dimension n is independent of the number of parameters q. The values μ may represent physical parameters or artificial parameters.
Thus the state variables or inner variables \(x: [0,t_{\mathrm{end}}] \times \mathcal{M} \rightarrow \mathbb {R}^{n}\) depend on time as well as the parameters. The inputs \(u: [0,t_{\mathrm{end}}] \rightarrow \mathbb {R}^{n _{\mathrm{in}}}\) are independent of the parameters, whereas the outputs \(y: [0,t_{\mathrm{end}}] \times \mathcal{M} \rightarrow \mathbb {R}^{n_{ \mathrm{out}}}\) vary with respect to the parameters. We consider a single output (\(n_{\mathrm{out}} = 1\)) without loss of generality.
We assume that the system (13) is either an ODE for all \(\mu \in \mathcal{M}\) or a DAE for all \(\mu \in \mathcal{M}\). Let the system be asymptotically stable for each parameter with respect to Definition 3. Furthermore, initial value problems
\( x(0,\mu ) = x_{0}(\mu ) \)  (14)
are considered including a function \(x_{0} : \mathcal{M} \rightarrow \mathbb {R}^{n}\). The initial values may be independent of the parameters. The initial values have to be consistent in the case of DAEs.
We suppose that the parameters in the linear dynamical system (13) are affected by uncertainties. A common approach is to substitute the parameters by independent random variables \(\mu : \varOmega \rightarrow \mathcal{M}\) on a probability space \((\varOmega ,\mathcal{F},P)\) with event space Ω, sigma-algebra \(\mathcal{F}\), and probability measure P, see [36, 39]. We apply traditional probability distributions like uniform, Gaussian, beta, etc. Hence a joint probability density function \(\rho : \mathcal{M} \rightarrow \mathbb {R}\) is available. This approach yields a stochastic model.
A measurable function \(f: \mathcal{M} \rightarrow \mathbb {R}\) depending on the random variables exhibits the expected value
\( \mathbb{E}[f] = \int_{\mathcal{M}} f(\mu ) \, \rho (\mu ) \,\mathrm{d}\mu \)  (15)
provided that the integral is finite. The expected value (15) implies the inner product
\( \langle f , g \rangle = \mathbb{E}[ f g ] \)  (16)
for two functions in the Hilbert space
\( \mathcal{L}^{2}(\mathcal{M},\rho ) = \bigl\{ f : \mathcal{M} \rightarrow \mathbb {R}\ \text{measurable} : \mathbb{E}\bigl[f^{2}\bigr] < \infty \bigr\} . \)  (17)
The associated norm is \(\| f \|_{\mathcal{L}^{2}(\mathcal{M},\rho )} = \sqrt{ \langle f , f \rangle }\) as usual.
Polynomial chaos expansions
In most cases, a complete orthogonal basis \((\varPhi _{i})_{i \in \mathbb {N}}\) of polynomials \(\varPhi _{i} : \mathcal{M} \rightarrow \mathbb {R}\) exists. These multivariate polynomials are the products
\( \varPhi _{i}(\mu ) = \varPsi _{i_{1}}^{(1)}(\mu _{1}) \, \varPsi _{i_{2}}^{(2)}(\mu _{2}) \cdots \varPsi _{i_{q}}^{(q)}(\mu _{q}) , \)  (18)
where \((\varPsi _{\ell }^{(j)})_{\ell \in \mathbb {N}_{0}}\) is the family of univariate orthogonal polynomials with respect to the jth random variable. The degree of \(\varPsi _{\ell }^{(j)}\) is exactly \(\ell \ge 0\). Let this basis also be normalized. Each traditional probability distribution implies its own family of orthogonal basis polynomials, see [39]. The basis is complete in the case of uniform, beta, and Gaussian distribution, for example. Yet the polynomials do not span the complete Hilbert space (17) in the case of a lognormal distribution, see [8].
We assume that the QoI of the system (13) is in the space (17) for each time point. If the parameter domain \(\mathcal{M}\) is compact, then the continuity of the QoI is sufficient for belonging to (17) pointwise in time. If the parameter domain \(\mathcal{M}\) is unbounded, then integrability conditions have to be satisfied with respect to the probability distribution. Consequently, the QoI can be expanded into the series
\( y(t,\mu ) = \sum_{i=1}^{\infty } w_{i}(t) \, \varPhi _{i}(\mu ) \)  (19)
with coefficient functions \(w_{i} : [0,t_{\mathrm{end}}] \rightarrow \mathbb {R}\), which is called a (generalized) polynomial chaos expansion (PCE). The series (19) converges in the norm of (17) pointwise in time.
Likewise, the state variables exhibit the PCE
\( x(t,\mu ) = \sum_{i=1}^{\infty } v_{i}(t) \, \varPhi _{i}(\mu ) \)  (20)
with coefficient functions \(v_{i} : [0,t_{\mathrm{end}}] \rightarrow \mathbb {R}^{n}\), provided that each state variable is in the Hilbert space (17).
We assume that the first basis polynomial is the unique constant polynomial \(\varPhi _{1} \equiv 1\). The orthonormality \(\langle \varPhi _{i} , \varPhi _{j} \rangle = \delta _{ij}\) of the basis functions implies the formulas
\( \mathbb{E} \bigl[ y(t,\cdot ) \bigr] = w_{1}(t) \quad \text{and} \quad \operatorname{Var} \bigl[ y(t,\cdot ) \bigr] = \sum_{i=2}^{\infty } w_{i}(t)^{2} \)
for the expected value and the variance of the QoI in each time point.
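These formulas can be verified numerically. The sketch below (an illustration with assumed choices: a single uniformly distributed parameter on \([-1,1]\), normalized Legendre polynomials, and the toy QoI \(f(\mu ) = \mu ^{2}\)) computes the PCE coefficients by Gauss–Legendre quadrature and recovers \(\mathbb{E}[f] = 1/3\) and \(\operatorname{Var}[f] = 4/45\):

```python
import numpy as np
from numpy.polynomial import legendre

# Orthonormal basis w.r.t. the uniform density rho = 1/2 on [-1, 1]:
# phi(i, .) is the normalized Legendre polynomial of degree i
# (so i = 0 corresponds to Phi_1 = 1 in the paper's numbering)
def phi(i, mu):
    coeff = np.zeros(i + 1); coeff[i] = 1.0
    return legendre.legval(mu, coeff) * np.sqrt(2 * i + 1)

nodes, weights = legendre.leggauss(20)
weights = weights / 2.0                  # Gauss-Legendre rule for E[.]

f = nodes**2                             # toy QoI f(mu) = mu^2 at the nodes
w = np.array([np.sum(weights * f * phi(i, nodes)) for i in range(4)])

mean = w[0]                              # E[f]   = 1/3
var = np.sum(w[1:]**2)                   # Var[f] = 4/45
print(mean, var)
```

Only the coefficient of the constant polynomial contributes to the mean, while all higher coefficients contribute to the variance, exactly as in the formulas above.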
Stochastic Galerkin method
We truncate the series (19), (20) to a finite sum. Typically, all basis polynomials up to some total degree d are included. There is a bijective mapping between the integers i and the multi-indices \(i_{1},\ldots ,i_{q}\) with the number q of random parameters. In view of (18), we obtain the finite index set \(\{ i \in \mathbb {N}: i_{1} + i_{2} + \cdots + i_{q-1} + i_{q} \le d \}\). The cardinality of this index set is
\( m = \binom{q+d}{d} = \frac{(q+d)!}{q! \, d!} . \)
If the number of random variables is large, then the number of basis polynomials becomes huge even for moderate degrees, say \(d = 3\). Figure 1 illustrates the growth of the number of basis polynomials.
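The cardinality formula can be evaluated directly; for instance, \(q = 10\) random variables and total degree \(d = 3\) already yield \(m = 286\) basis polynomials:

```python
from math import comb

def pce_basis_count(q, d):
    # number of multivariate basis polynomials of total degree <= d
    # in q random variables: m = (q + d)! / (q! d!)
    return comb(q + d, d)

print(pce_basis_count(3, 3))    # 20
print(pce_basis_count(10, 3))   # 286
```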
Without loss of generality, the truncated series read as
\( x^{(m)}(t,\mu ) = \sum_{i=1}^{m} \hat{v}_{i}(t) \, \varPhi _{i}(\mu ) , \qquad y^{(m)}(t,\mu ) = \sum_{i=1}^{m} \hat{w}_{i}(t) \, \varPhi _{i}(\mu ) . \)  (21)
Inserting the approximations (21) into the linear dynamical system (13) yields a residual. The Galerkin approach requires that the residual is orthogonal to the space spanned by the basis polynomials \(\varPhi _{1},\ldots ,\varPhi _{m}\) with respect to the inner product (16). Basic calculations produce a larger coupled linear dynamical system
\( \hat{E} \dot{\hat{v}}(t) = \hat{A} \hat{v}(t) + \hat{B} u(t) , \qquad \hat{w}(t) = \hat{C} \hat{v}(t) \)  (22)
for \(\hat{v} = (\hat{v}_{1}^{\top }, \ldots , \hat{v}_{m}^{\top })^{ \top }\) and \(\hat{w} = (\hat{w}_{1},\ldots ,\hat{w}_{m})^{\top }\). Hence we obtain a linear dynamical system of dimension \(\hat{n} = mn\) with m outputs. The matrices exhibit the sizes \(\hat{A},\hat{E} \in \mathbb {R}^{\hat{n} \times \hat{n}}\), \(\hat{B} \in \mathbb {R}^{\hat{n} \times n_{ \mathrm{in}}}\), and \(\hat{C} \in \mathbb {R}^{m \times \hat{n}}\). To define the matrices, we introduce the auxiliary arrays
\( \varPhi (\mu ) = \bigl( \varPhi _{1}(\mu ), \ldots , \varPhi _{m}(\mu ) \bigr)^{\top } \in \mathbb {R}^{m} . \)  (23)
It follows that
\( \hat{A} = \mathbb{E} \bigl[ ( \varPhi \varPhi ^{\top } ) \otimes A \bigr] , \quad \hat{E} = \mathbb{E} \bigl[ ( \varPhi \varPhi ^{\top } ) \otimes E \bigr] , \quad \hat{B} = \mathbb{E} [ \varPhi \otimes B ] , \quad \hat{C} = \mathbb{E} \bigl[ ( \varPhi \varPhi ^{\top } ) \otimes C \bigr] \)  (24)
using Kronecker products. Therein, the probabilistic integration (15) operates separately in each component of the matrices. Often it holds that \(\mathrm{rank}(\hat{C}) = m\). More details on the stochastic Galerkin method for linear dynamical systems are given in [24, 31].
The mass matrix Ê may be singular even though all matrices E are nonsingular due to properties of the spectrum, see [35]. However, this loss of invertibility hardly occurs in practice. Yet the mass matrix Ê may become ill-conditioned. We assume that Ê is always nonsingular in the case of ODEs (13). Likewise, the stochastic Galerkin system (22) may be unstable even though the systems (13) are asymptotically stable for all parameters. Academic examples are given in [30]. Again this loss of stability is hardly observed in practice. Furthermore, the passivity of stochastic Galerkin systems was investigated for models of electric circuits in [16].
Initial conditions \(\hat{v}(0)=\hat{v}_{0}\) are derived from the initial values (14) of the original dynamical system (13) by a truncated PCE of their own. If the initial values (14) are identical to zero, then the choice \(\hat{v}(0)=0\) is obvious. The approximation of the QoI reads as
\( y^{(m)}(t,\mu ) = \sum_{i=1}^{m} \hat{w}_{i}(t) \, \varPhi _{i}(\mu ) , \)  (25)
where the outputs \(\hat{w}_{1},\ldots ,\hat{w}_{m}\) of the stochastic Galerkin system (22) yield the coefficients.
If the entries of the matrices are polynomials depending on μ in the system (13), then the matrices (24) of the stochastic Galerkin system can be calculated analytically. Consequently, we obtain the matrices exactly (except for roundoff errors) independent of the number q of random variables. In contrast, stochastic collocation techniques, which are nonintrusive methods, cf. [36, 39], induce a quadrature error or sampling error. This error typically grows for fixed numbers of collocation points and increasing dimensions q. Although the stochastic Galerkin approach represents an intrusive method, the effort of coding the algorithms is not extensive, because just constant matrices have to be specified for linear timeinvariant systems.
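As a small illustration of this analytic computation (under assumed choices: a single uniformly distributed parameter and the hypothetical affine dependence \(A(\mu ) = A_{0} + \mu A_{1}\)), the blocks \(\hat{A}_{ij} = \mathbb{E}[\varPhi _{i} \varPhi _{j} A]\) reduce to \(\delta _{ij} A_{0} + \mathbb{E}[\varPhi _{i} \varPhi _{j} \mu ] A_{1}\), and the factor matrix \(G_{ij} = \mathbb{E}[\varPhi _{i} \varPhi _{j} \mu ]\) is tridiagonal for Legendre polynomials:

```python
import numpy as np
from numpy.polynomial import legendre

m, n = 4, 2
rng = np.random.default_rng(2)
A0 = -2.0 * np.eye(n)                    # hypothetical affine dependence
A1 = 0.1 * rng.standard_normal((n, n))   # A(mu) = A0 + mu * A1

def phi(i, mu):                          # normalized Legendre, i = 0 <-> Phi_1
    coeff = np.zeros(i + 1); coeff[i] = 1.0
    return legendre.legval(mu, coeff) * np.sqrt(2 * i + 1)

nodes, weights = legendre.leggauss(10)
weights = weights / 2.0                  # E[.] for the uniform density on [-1, 1]

# G_ij = E[Phi_i Phi_j mu], tridiagonal thanks to the three-term recurrence
G = np.array([[np.sum(weights * nodes * phi(i, nodes) * phi(j, nodes))
               for j in range(m)] for i in range(m)])

# Block structure A_hat_ij = delta_ij A0 + G_ij A1 via Kronecker products
Ahat = np.kron(np.eye(m), A0) + np.kron(G, A1)
print(Ahat.shape)
```

The quadrature here is exact up to rounding because the integrands are polynomials of low degree; no sampling error of a collocation type arises.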
We prove a property, which will be used later.
Lemma 1
If the matrices \(E(\mu )\) are symmetric and positive definite for almost all \(\mu \in \mathcal{M}\), then the stochastic Galerkin projection Ê is also symmetric and positive definite.
Proof
The Galerkin-projected matrix consists of the blocks \(\hat{E}_{ij} = \mathbb{E}[\varPhi _{i} \varPhi _{j} E]\) for \(i,j=1,\ldots ,m\), see (24). Hence the symmetry is obvious. Let \(z = (z_{1}^{\top },\ldots ,z_{m}^{\top })^{\top }\in \mathbb {R}^{mn}\). We obtain
\( z^{\top } \hat{E} z = \mathbb{E} \biggl[ \biggl( \sum_{i=1}^{m} \varPhi _{i} z_{i} \biggr)^{\!\top } E \biggl( \sum_{j=1}^{m} \varPhi _{j} z_{j} \biggr) \biggr] \ge 0 , \)
because the integrand is almost everywhere nonnegative in the probabilistic integration (15). The basis functions \((\varPhi _{i})_{i \in \mathbb {N}}\) are linearly independent. Thus \(z \neq 0\) implies that the above finite sum is nonzero on a subset \(\mathcal{U} \subset \mathcal{M}\) satisfying \(P(\{ \omega : \mu ( \omega ) \in \mathcal{U}\}) > 0\) for the probability measure P. It follows that \(z^{\top }\hat{E} z > 0\) for \(z \neq 0\). □
Likewise, this relation applies to the case of negative definite matrices.
Stability preservation
We investigate three strategies to preserve the asymptotic stability in an MOR of a stochastic Galerkin system. Figure 2 illustrates different possibilities.
Transformation of stochastic Galerkin system
This approach follows the steps (c), (b), (f) in Fig. 2. If the linear dynamical systems (13) are asymptotically stable for all parameters, then the stochastic Galerkin system (22) is usually asymptotically stable. However, if the system (22) is not dissipative, then stability may be lost in some MOR methods in step (e). In this case, we can transform the system to a dissipative form as demonstrated in Sect. 2.5 and Sect. 2.6. Consequently, the reduced system is stable due to Theorem 2. Note that we do not have to calculate the transformed matrices of the high-dimensional system explicitly. Just an appropriate projection matrix has to be determined via (10).
In this approach, the critical part is the solution of the high-dimensional Lyapunov equation (11). A direct method of linear algebra would require \(\mathcal{O}(m^{3}n^{3})\) operations. Thus we are restricted to approximate methods or iteration schemes. The possibilities are listed in Sect. 2.6.
Transformation of parameterdependent system
Now the succession (a), (d), (f) is performed with respect to Fig. 2. Consequently, we transform the linear dynamical systems (13) first.
Transformation
The stochastic Galerkin system is not always asymptotically stable, even if all parameterdependent systems (13) are asymptotically stable. However, we obtain a positive result in the case of dissipativity.
Theorem 4
If the linear dynamical systems (13) are dissipative for almost all \(\mu \in \mathcal{M}\), then the stochastic Galerkin system (22) is also dissipative. Consequently, a Galerkintype reduction of (22) yields an asymptotically stable system.
Proof
Lemma 1 implies that the mass matrix Ê is symmetric and positive definite. The matrix \(\hat{A} + \hat{A}^{ \top }\) is the Galerkin projection of \(A(\cdot ) + A(\cdot )^{\top }\) due to
\( \hat{A}_{ij} + \bigl( \hat{A}^{\top } \bigr)_{ij} = \mathbb{E} [ \varPhi _{i} \varPhi _{j} A ] + \mathbb{E} [ \varPhi _{j} \varPhi _{i} A ]^{\top } = \mathbb{E} \bigl[ \varPhi _{i} \varPhi _{j} \bigl( A + A^{\top } \bigr) \bigr] \quad \text{for } i,j = 1,\ldots ,m , \)
see (24). Since \(A(\mu )+A(\mu )^{\top }\) is negative definite for almost all \(\mu \in \mathcal{M}\), Lemma 1 shows that \(\hat{A} + \hat{A}^{\top }\) is negative definite. Hence both conditions of Definition 6 are satisfied. Theorem 2 guarantees the stability of a reduced system. □
If the original systems (13) are not dissipative for almost all \(\mu \in \mathcal{M}\), then we transform the systems appropriately. The following transformation has to be applied for almost all μ simultaneously (even if some system is already dissipative) to preserve continuity and smoothness of the functions.
We obtain the parameter-dependent Lyapunov equation
\( A(\mu )^{\top } M(\mu ) E(\mu ) + E(\mu )^{\top } M(\mu ) A(\mu ) + F = 0 \)  (26)
for \(\mu \in \mathcal{M}\). Although the matrix F may depend on the parameters, we choose a constant matrix, because no parameteraware strategy with benefits is known yet. The Lyapunov equations (26) yield a unique family \(M(\mu )\) of symmetric positive definite matrices.
Furthermore, the stochastic Galerkin system (22) obtained by step (c) is not equivalent to a stochastic Galerkin system obtained by steps (a), (d). On the one hand, the stochastic Galerkin method is invariant with respect to transformations by constant nonsingular matrices. On the other hand, we use the parameterdependent matrix \(E( \mu )^{\top }M(\mu )\) in the transformation (a) to the dissipative form (9).
Polynomial system matrices
Now we assume that the matrices \(A(\mu )\), \(B(\mu )\), \(E(\mu )\) of the system (13) involve only polynomials in the variable μ, which is often given in practice. Thus the matrices Â, B̂, Ê of the stochastic Galerkin system (22) can be calculated analytically in the case of traditional probability distributions. We do not consider the output matrix \(C(\mu )\), because it is not transformed. However, the matrixvalued function \(M(\mu )\) satisfying (26) consists of rational functions in the variable μ. In the case of low dimensions n, the function \(M(\mu )\) can be calculated explicitly by a computer algebra software. Alternatively, the solution \(M(\mu )\) can be evaluated for a finite set of parameters μ.
We require the Galerkin projection of the transformed matrices
\( A'(\mu ) = E(\mu )^{\top } M(\mu ) A(\mu ) , \quad B'(\mu ) = E(\mu )^{\top } M(\mu ) B(\mu ) , \quad E'(\mu ) = E(\mu )^{\top } M(\mu ) E(\mu ) \)
in the dissipative system (9). The entries of these transformed matrices are rational functions of μ. A quadrature rule yields numerical approximations Ã, B̃, Ẽ of the matrices \(\hat{A}'\), \(\hat{B}'\), \(\hat{E}'\). Let \(\{ \mu _{1},\ldots ,\mu _{k} \} \subset \mathcal{M}\) be the nodes and \(\{ \gamma _{1},\ldots , \gamma _{k} \} \subset \mathbb {R}\) be the weights. The approximation of the blocks in \(\hat{A}'\) reads as
\( \tilde{A}_{ij} = \sum_{\ell =1}^{k} \gamma _{\ell } \, \varPhi _{i}(\mu _{\ell }) \varPhi _{j}(\mu _{\ell }) \, A'(\mu _{\ell }) \)  (27)
for \(i,j=1,\ldots ,m\). Using the auxiliary array (23), the approximation becomes
\( \tilde{A} = \sum_{\ell =1}^{k} \gamma _{\ell } \bigl( \varPhi (\mu _{\ell }) \varPhi (\mu _{\ell })^{\top } \bigr) \otimes A'(\mu _{\ell }) . \)  (28)
Likewise, the quadrature yields B̃ and Ẽ. The computational effort is dominated by determining k numerical solutions of the Lyapunov equations (26).
Lemma 2
If all weights are positive, then a quadrature of kind (27) yields approximations Ã, Ẽ, where Ẽ is symmetric positive semidefinite and \(\tilde{A} + \tilde{A}^{\top }\) is negative semidefinite.
Proof
The symmetry of Ẽ is obvious. Let \(z \in \mathbb {R}^{mn} \setminus \{ 0 \}\) with \(z = (z_{1}^{\top },\ldots ,z_{m}^{\top })^{\top }\) and
\( z(\mu ) = \sum_{i=1}^{m} \varPhi _{i}(\mu ) \, z_{i} . \)
We obtain using the notation (28)
\( z^{\top } \tilde{E} z = \sum_{\ell =1}^{k} \gamma _{\ell } \, z(\mu _{\ell })^{\top } E'(\mu _{\ell }) \, z(\mu _{\ell }) . \)
Since \(E'(\mu _{\ell })\) is positive definite and the weights \(\gamma _{\ell }\) are positive for each ℓ, all terms of the sum are nonnegative. Hence Ẽ is positive semidefinite. The negative semidefiniteness of \(\tilde{A} + \tilde{A}^{\top }\) is concluded by the same treatment. □
In addition, it is very likely that one or more terms are positive in the above sum. Thus we assume that these matrices are definite. It follows that the approximate Galerkin system has the dissipativity condition of Definition 6.
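A sketch of this quadrature-based assembly under illustrative assumptions (single uniform parameter, \(E(\mu ) = I\), a hypothetical stable family \(A(\mu )\), and \(F = I\)): at each Gauss node the Lyapunov equation is solved, the Kronecker terms of the sum are accumulated, and the definiteness stated in Lemma 2 is verified numerically.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.linalg import solve_continuous_lyapunov

m, n = 3, 3
rng = np.random.default_rng(3)
N = rng.standard_normal((n, n))

def A_mu(mu):        # hypothetical family, stable for all mu in [-1, 1]
    return -(2.0 + mu) * np.eye(n) + 0.1 * N

def phi_vec(mu):     # first m normalized Legendre polynomials at mu
    return np.array([legendre.legval(mu, np.eye(m)[i]) * np.sqrt(2 * i + 1)
                     for i in range(m)])

nodes, weights = legendre.leggauss(8)
weights = weights / 2.0                     # uniform density on [-1, 1]

Atil = np.zeros((m * n, m * n))
for mu, gamma in zip(nodes, weights):
    A = A_mu(mu)
    M = solve_continuous_lyapunov(A.T, -np.eye(n))   # (26) with E = I, F = I
    p = phi_vec(mu)
    Atil += gamma * np.kron(np.outer(p, p), M @ A)   # term of the quadrature sum

# Lemma 2: the symmetric part of the assembled matrix is negative (semi)definite
print(np.max(np.linalg.eigvalsh(Atil + Atil.T)))
```

Since all Gauss weights are positive, every term of the sum contributes a negative semidefinite symmetric part, in line with Lemma 2; the dominating cost is the k Lyapunov solves.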
Concerning the positivity of the weights, we address three classes of multivariate quadrature schemes or sampling methods, which can all be found in [36]:
(i) Tensor-product formulas: Univariate quadrature rules often exhibit exclusively positive weights like Gaussian quadrature, for example. The tensor-product formulas inherit the positivity, because their weights are the products of the weights in the univariate case.
(ii) Sparse grid quadrature: Often both positive and negative weights occur. Sparse grids with purely positive weights typically require more nodes for the same accuracy.
(iii) (Quasi) Monte-Carlo methods: In these sampling techniques, the weights are \(\gamma _{\ell }=\frac{1}{k}\) for all \(\ell =1,\ldots ,k\) and thus positive.
Properties
The strategy of this section looks appealing in the case of low dimensions n, because the Lyapunov equations can be solved cheaply. However, the number of random parameters is typically large in this case, because otherwise the stochastic Galerkin system is not high-dimensional and an MOR is unnecessary. A large number of random variables implies a quadrature in a high-dimensional space. Using a computationally feasible number of nodes, the quadrature error may still be so large that the exact solution of the approximate Galerkin system yields poor approximations (25) of the random-dependent QoI. Therefore, this approach is problematic in that regime.
Furthermore, there is a major loss of sparsity in this technique. Let the entries of the matrices in the system (13) be polynomials of low degree in μ. Even if these matrices are dense, the stochastic Galerkin system (22) exhibits sparse matrices due to the orthogonality of the basis polynomials. However, the dense matrices of the transformed systems (9) are rational functions of μ. Inner products (16) of their entries and orthogonal polynomials are nonzero, and thus the matrices of (22) become dense in the Galerkin projection. In contrast to the approach of Sect. 4.1, we have to calculate the matrices of the alternative Galerkin system explicitly to determine a projection matrix V for the MOR.
Transformation using reference parameter
Again the steps (c), (b), (f) of Fig. 2 are considered, where the transformation is done differently from Sect. 4.1. We derive an additional technique to decrease the computational effort and to avoid quadrature errors. A single reference parameter \(\mu ^{*} \in \mathcal{M}\) is selected, for example the mean value \(\mu ^{*} = \mathbb{E}[\mu ]\). We solve the Lyapunov equation (26) directly only for \(\mu ^{*}\) using some matrix F. The solution \(M^{*} = M(\mu ^{*}) \in \mathbb {R}^{n \times n}\) is symmetric and positive definite. We define the larger transformation matrix
using the identity matrix \(I_{m} \in \mathbb {R}^{m \times m}\) and the Kronecker product. Obviously, the matrix (29) is again symmetric and positive definite. We employ this matrix to transform the stochastic Galerkin system (22) into the form (9). Since the matrix (29) is block-diagonal, matrix-matrix multiplications with M̂ are cheap. We do not need to compute the transformed system matrices. Instead, we directly compute the projection matrix (10), where a projection matrix V is determined from the original Galerkin system.
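The construction of the block-diagonal matrix (29) and the cheap blockwise multiplication can be sketched as follows. This is a minimal illustration with hypothetical sizes and a random stable matrix standing in for the system at \(\mu^{*}\); for simplicity we assume an explicit system (mass matrix equal to the identity), so that the Lyapunov equation (26) reduces to the standard form \(A^{\top} M + M A = -F\).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, m = 4, 3  # hypothetical small dimensions for illustration

# A Hurwitz matrix standing in for the system matrix at mu* (assumption)
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
F = np.eye(n)  # free positive definite matrix, F = I as in the examples

# Solve A^T M* + M* A = -F; the solution M* is symmetric positive definite
M_star = solve_continuous_lyapunov(A.T, -F)
M_star = 0.5 * (M_star + M_star.T)  # enforce exact symmetry against round-off

# Block-diagonal transformation matrix M_hat = I_m (x) M*, cf. (29)
M_hat = np.kron(np.eye(m), M_star)

# M_hat inherits symmetry and positive definiteness from M*
assert np.allclose(M_hat, M_hat.T)
assert np.min(np.linalg.eigvalsh(M_hat)) > 0

# Multiplication with M_hat acts blockwise: m small products suffice
X = rng.standard_normal((m * n, 2))
blockwise = np.vstack([M_star @ X[i * n:(i + 1) * n] for i in range(m)])
assert np.allclose(M_hat @ X, blockwise)
```

The blockwise identity is what makes the transformation cheap in practice: only the small factor \(M^{*}\) is ever stored and applied.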
Any random variable can be decomposed into the form \(\mu (\omega ) = \mu ^{*} + \Delta \mu (\omega )\). We consider the random variables
with the real parameter \(\theta \in [0,1]\).
Theorem 5
Let the random variables in a linear system of ODEs (13) be of the form (30). There is a positive constant \(\theta _{0} \in (0,1]\) such that the transformation of a stochastic Galerkin system (22) using the matrix (29) yields a dissipative system of type (9) for all \(\theta \in [0,\theta _{0}]\).
Proof
The transformed mass matrix \(\hat{E}^{\top }\hat{M} \hat{E}\) is symmetric and positive definite for all θ, since only the nonsingularity of the original mass matrix Ê is required. In the limit \(\theta = 0\), we obtain
due to the orthogonality of the basis polynomials. Hence the matrix becomes blockdiagonal with identical blocks. The factor \(M^{*}\) satisfies the Lyapunov equation (26) for the chosen positive definite matrix F. The symmetric part of the matrix (31) is negative definite, because it holds that
The two conditions of Definition 6 are satisfied and thus the system is dissipative in the limit. Since the limit (31) is reached continuously with respect to the parameter θ, a sufficiently small perturbation of \(\theta =0\) still yields matrices with the required properties. □
Theorem 5 shows that the stochastic Galerkin system becomes dissipative for all sufficiently small \(\theta > 0\). However, this implication does not guarantee that the system is dissipative for \(\theta =1\), which corresponds to our desired choice of random variables in (30). Nevertheless, the computational effort is so low that it is worth trying this approach. Even if the transformed stochastic Galerkin system is not dissipative, a loss of stability may occur less often in an MOR.
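The two conditions of Definition 6 can be checked numerically for a given transformed system by inspecting eigenvalues of symmetric parts. The following sketch is illustrative; the function name and the strict-inequality tolerance are assumptions, not from the paper.

```python
import numpy as np

def dissipative(E, A, M):
    """Numerically verify the two dissipativity conditions (Definition 6):
    (1) E^T M E is symmetric positive definite,
    (2) the symmetric part of E^T M A is negative definite.
    Sketch only: tolerances are left at exact zero for clarity."""
    G = E.T @ M @ E
    sym_G = 0.5 * (G + G.T)
    if not np.allclose(G, G.T) or np.min(np.linalg.eigvalsh(sym_G)) <= 0.0:
        return False
    S = E.T @ M @ A
    sym_S = 0.5 * (S + S.T)
    return np.max(np.linalg.eigvalsh(sym_S)) < 0.0

n = 3
E, M = np.eye(n), np.eye(n)
assert dissipative(E, np.diag([-1.0, -2.0, -3.0]), M)     # stable diagonal A
assert not dissipative(E, np.diag([-1.0, -2.0, 0.5]), M)  # indefinite sym. part
```

Sweeping such a check over θ is exactly how the loss of negative definiteness is located in the numerical experiments below (Fig. 6).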
Nonlinear dynamical systems
We outline a stabilization concept for nonlinear dynamical systems. Let an autonomous system of ODEs be given in the form
with a parameter-dependent mass matrix \(E \in \mathbb {R}^{n \times n}\) and a nonlinear smooth right-hand side \(f : \mathbb {R}^{n} \times \mathcal{M} \rightarrow \mathbb {R}^{n}\). The QoI y is defined by the linear or nonlinear function \(g : \mathbb {R}^{n} \times \mathcal{M} \rightarrow \mathbb {R}^{n_{\mathrm{out}}}\). Let \(n_{\mathrm{out}} = 1\). We assume that a family of asymptotically stable stationary solutions \(x^{*} : \mathcal{M} \rightarrow \mathbb {R}^{n}\) exists, i.e., \(f(x^{*}(\mu ),\mu ) = 0\) for all \(\mu \in \mathcal{M} \). The definition of stable stationary solutions can be found in [33, p. 22]. As in [30], the system (32) is replaced by the equivalent system
which exhibits the constant asymptotically stable stationary solution \(\tilde{x}^{*} = 0\). Let \(\tilde{J}(\mu ) \in \mathbb {R}^{n \times n}\) be the Jacobian matrix of f̃ evaluated at \(\tilde{x}^{*}=0\) and \(\mu \in \mathcal{M}\). The stochastic Galerkin method yields a larger nonlinear system of ODEs
including the mass matrix Ê from (24). The definition of the nonlinear function \(\hat{f} : \mathbb {R}^{mn} \rightarrow \mathbb {R}^{mn}\) is given in [28, 30]. This function inherits the smoothness of f. Now \(\hat{v}^{*}=0\) is a stationary solution of (34). Although this stationary solution may be unstable, such a loss of stability hardly occurs in practice. Thus we assume that the stationary solution is asymptotically stable again. Let \(\hat{J} \in \mathbb {R}^{mn \times mn}\) be the Jacobian matrix of f̂ evaluated at \(\hat{v}^{*}=0\). It follows that the spectral abscissa of the matrix pencil \((\hat{E},\hat{J})\) is negative, see Definition 1.
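The shift from (32) to (33) simply moves the parameter-dependent stationary solution to the origin. A toy illustration with a hypothetical scalar right-hand side f (not from the paper):

```python
import numpy as np

# Hypothetical scalar family: f(x, mu) = -mu * (x - sin(mu)),
# so x*(mu) = sin(mu) is an asymptotically stable stationary solution.
def f(x, mu):
    return -mu * (x - np.sin(mu))

def x_star(mu):
    return np.sin(mu)

# Shifted right-hand side as in (33): constant stationary solution 0
def f_tilde(x_tilde, mu):
    return f(x_tilde + x_star(mu), mu)

mu = 0.7
assert abs(f(x_star(mu), mu)) < 1e-14       # f(x*(mu), mu) = 0
assert abs(f_tilde(0.0, mu)) < 1e-14        # shifted system vanishes at 0
```

The same shift applies componentwise in \(\mathbb{R}^n\); it changes neither the dynamics nor the Jacobian spectrum at the stationary point.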
A Galerkin-type projection-based MOR yields a small dynamical system
with a projection matrix V and the mass matrix \(\bar{E} = V^{\top } \hat{E} V\). The reduced system has the stationary solution \(\bar{v}^{*} = 0\), which may be unstable.
Stability-preserving techniques can be derived. We consider the method of Sect. 4.1, where the high-dimensional Lyapunov equation (11) is solved with the mass matrix Ê from (34) and \(A = \hat{J}\). The Galerkin system (34) is transformed and reduced to (35), while the asymptotic stability of the stationary solution is guaranteed, as shown in [27, p. 35] for general nonlinear implicit systems of ODEs. Concerning the stability-preserving approach of Sect. 4.2, parameter-dependent Lyapunov equations (26) are solved with \(E(\mu )\) from (32) and \(A(\mu ) = \tilde{J}(\mu )\), the Jacobian matrix associated with the stationary solution of (33). This technique was used for explicit ODEs in [30]. The application of the stability-preserving method from Sect. 4.3 is now straightforward.
Differentialalgebraic equations
If the linear dynamical system (1) features a singular mass matrix E, then a system of DAEs is given. MOR methods are also available for DAEs, see [6]. Yet the Lyapunov equations (11), which are crucial in our stability-preserving technique, do not have a solution. Let \(x \in \mathbb {R}^{n}\). It follows that
Choosing \(x \neq 0\) in the kernel of the matrix E implies \(Ex=0\), which contradicts the positive definiteness of the matrix F. We require an alternative strategy.
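This obstruction is easy to reproduce numerically. Assuming the generalized Lyapunov equation (11) has the form \(A^{\top} M E + E^{\top} M A = -F\) (consistent with the argument above; the exact notation of (11) is not restated here), any \(x \neq 0\) in the kernel of E annihilates the left-hand side for every candidate M, while the right-hand side remains negative:

```python
import numpy as np

# Singular mass matrix: the second unit vector lies in ker(E)
E = np.diag([1.0, 0.0])
A = -np.eye(2)
F = np.eye(2)                    # positive definite right-hand side
x = np.array([0.0, 1.0])         # E x = 0

# For ANY candidate M, x^T (A^T M E + E^T M A) x = 0, since E x = 0 ...
M = np.random.default_rng(1).standard_normal((2, 2))
lhs = x @ (A.T @ M @ E + E.T @ M @ A) @ x
assert abs(lhs) < 1e-12

# ... but the equation would require this to equal -x^T F x < 0
assert x @ F @ x > 0
```

Hence no matrix M can satisfy the equation, and a regularization as described next becomes necessary.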
Regularization
We apply a regularization of an asymptotically stable DAE system (1), which was also used in [19]. The asymptotic stability guarantees a regular matrix pencil as specified by Definition 2. The regularized system matrices read as
introducing parameters \(\alpha ,\beta >0\). The matrix \(E_{\mathrm{reg}}\) is nonsingular for all \(\alpha > 0\). Choosing \(\alpha = \beta ^{2}\), the linear dynamical system (1) with \(A_{\mathrm{reg}}\), \(E_{\mathrm{reg}}\) is asymptotically stable for all sufficiently small parameters β, see [19]. An advantage of this regularization technique is that the sparsity pattern of \(sE - A\) coincides with the sparsity pattern of \(sE_{\mathrm{reg}} - A_{\mathrm{reg}}\) for each \(s \in \mathbb {C}\). Hence no loss of sparsity occurs in the relevant operations.
We also choose a small parameter β (and \(\alpha = \beta ^{2}\)) to ensure that the difference between the original DAE system and the regularized ODE system is small. Assuming that the DAE system, together with the defined outputs, has a transfer function with finite \(\mathcal {H}_{2}\)-norm (3), error bounds with respect to this norm were derived in [29].
Stochastic Galerkin projection
The steps (a)–(d) refer to the flowchart in Fig. 2. The following two approaches are equivalent provided that the same parameters α, β are chosen in a regularization (36):
(i) Regularize the parameter-dependent system (13) and then project the systems of ODEs in the stochastic Galerkin method.
(ii) Project the parameter-dependent DAE system (13) in the stochastic Galerkin method and then regularize the Galerkin system (22).
In both cases, the stochastic Galerkin projection (c) yields the same matrices Â and Ê for the system (22). The mass matrix Ê is nonsingular due to the regularization.
The transformations (a) and (b) to a dissipative representation are performed only for a regularized system, since a system of ODEs is required for each Lyapunov equation. The transformation (b) involves the matrices from (i) or (ii). In the transformation (a), however, we have to regularize the systems of DAEs (13) immediately. Furthermore, the drawbacks of the sequence (a), (d), see Sect. 4.2.3, also apply in this case.
In the transformation (b), the approximate solution of the high-dimensional Lyapunov equations (11) may become critical: small regularization parameters α bring the system close to the singular case. This problem is less pronounced when direct methods are used to solve the Lyapunov equations.
We can also employ the technique from Sect. 4.3 in this context. The linear dynamical system (13) is regularized for a reference parameter \(\mu ^{*}\) and some \(\alpha , \beta >0\). Solving the Lyapunov equation (26) for \(\mu ^{*}\) yields the high-dimensional transformation matrix (29). Now the regularized Galerkin system from strategy (ii), with the same values α, β, is transformed based on (29). Again, the dissipativity of the transformed representation cannot be guaranteed a priori.
Illustrative examples
We investigate a system of ODEs as well as a system of DAEs in this section. The computations were performed on a FUJITSU Esprimo P920 with an Intel(R) Core(TM) i5-4570 CPU (3.20 GHz, 4 cores) and the operating system Microsoft Windows 7. The software package MATLAB [17] (version R2018a) was used for all computations.
Mass-spring-damper system
Figure 3 depicts a mechanical configuration consisting of 5 masses, 7 springs and 5 dampers. The single input is the excitation at the bottom spring, whereas the single output is the position of the top mass. Mathematical modeling yields a linear dynamical system (13) of \(n=10\) first-order ODEs including \(q=17\) parameters. The system is asymptotically stable for all positive parameters. The system matrices are affine-linear functions (polynomials of degree one) of the parameters. The Bode plot of the system is shown for a specific choice of the parameters in Fig. 4. This system extends a test example used in [14], which included 4 masses.
In the stochastic modeling, we replace the parameters by independent uniformly distributed random variables, which vary by 10% around their mean values. The mean values are the constant parameters from above. In the truncated orthogonal expansion (21), we include all multivariate Legendre polynomials up to degree three, which yields \(m=1140\) basis functions. The stochastic Galerkin system (22) is calculated exactly except for round-off errors. Table 1 summarizes the properties of this example. The system is asymptotically stable. The mass matrix is symmetric and positive definite. Yet the system is not dissipative.
We employ the one-sided Arnoldi method, see [1], to perform an MOR of the stochastic Galerkin system (22). The single real expansion point \(s=0.7\) is used in this Krylov subspace technique. Reduced systems are computed for dimensions \(r=1, \ldots ,100\). Just 38 of these ROMs are stable.
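A compact sketch of a one-sided Arnoldi procedure for a single real expansion point s follows. This is a standard rational Krylov construction; the function below is illustrative, not the authors' MATLAB implementation, and uses a single LU factorization of \(sE - A\) throughout.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def onesided_arnoldi(E, A, b, s, r):
    """Orthonormal basis V of the Krylov subspace
    K_r((sE - A)^{-1} E, (sE - A)^{-1} b) for expansion point s.
    The reduced matrices then follow by congruence, e.g. V.T @ E @ V."""
    lu = lu_factor(s * E - A)            # one factorization, reused below
    v = lu_solve(lu, b)
    V = np.zeros((E.shape[0], r))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, r):
        w = lu_solve(lu, E @ V[:, j - 1])
        for _ in range(2):               # Gram-Schmidt + re-orthogonalization
            w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    return V

# Usage on a toy explicit system (hypothetical data)
V = onesided_arnoldi(np.eye(6), -np.diag(np.arange(1.0, 7.0)),
                     np.ones(6), 0.7, 3)
assert np.allclose(V.T @ V, np.eye(3))   # columns are orthonormal
```

Because V is orthonormal, a basis transformation of the full system in the image space leaves V unchanged, which is the invariance exploited in cases (i) and (iii) below.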
Now we investigate the stabilization techniques from Sect. 4. The following cases are discussed:
(i) transformation of the stochastic Galerkin system (Sect. 4.1),
(ii) transformation of the original systems (Sect. 4.2), and
(iii) transformation based on a reference parameter (Sect. 4.3).
In (i) and (iii), we reuse the projection matrix V determined for the original stochastic Galerkin system, because the Arnoldi method is invariant with respect to a basis transformation in the image space. In (ii), we repeat the Arnoldi algorithm for the alternative stochastic Galerkin system, because a smaller reduction error is expected.
We choose the identity matrix (\(F=I\)) as the degree of freedom in each Lyapunov equation. In all three approaches, stability is achieved for each ROM. Figure 5 illustrates the relative errors with respect to the \(\mathcal {H}_{2}\)-norm (3), i.e.,
with the transfer functions H, H̄ of FOM and ROM, respectively.
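For explicit state-space systems (mass matrix equal to the identity), such a relative \(\mathcal{H}_2\)-error can be evaluated via the controllability Gramian of the augmented error system, a standard identity. The sketch below is illustrative and is not the frequency-domain computation used in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of a stable system x' = Ax + Bu, y = Cx via the
    controllability Gramian A P + P A^T = -B B^T."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    val = float(np.trace(C @ P @ C.T))
    return np.sqrt(max(val, 0.0))        # guard against tiny negative round-off

def relative_h2_error(A, B, C, Ar, Br, Cr):
    """Relative error ||H - Hbar||_H2 / ||H||_H2 using the augmented
    (block-diagonal) error system with output difference."""
    zero = np.zeros((A.shape[0], Ar.shape[0]))
    Ae = np.block([[A, zero], [zero.T, Ar]])
    Be = np.vstack([B, Br])
    Ce = np.hstack([C, -Cr])
    return h2_norm(Ae, Be, Ce) / h2_norm(A, B, C)

# Scalar sanity check: H(s) = 1/(s+1) has H2 norm sqrt(1/2)
assert np.isclose(h2_norm(np.array([[-1.0]]), np.array([[1.0]]),
                          np.array([[1.0]])), np.sqrt(0.5))
```

Reducing a system to itself gives a relative error of (numerically) zero, which is a convenient self-test before applying the measure to genuine ROMs.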
In the technique (i), we use the method from [29], where an integral in the frequency domain is discretized by a quadrature rule. Therein, the solution of the high-dimensional Lyapunov equation is not computed, but rather the associated projection matrix (10). The computational work is dominated by an LU decomposition of the system matrix \(\mathrm{i}\omega _{j}\hat{E}-\hat{A}\) in each node \(\omega _{j}\) of the angular frequency ω. We employ the (univariate) Gauss-Legendre quadrature. Table 2 shows the number of stable ROMs for different numbers of nodes. We observe that 40 nodes are sufficient to stabilize all reduced systems. Figure 5(i) depicts the relative \(\mathcal {H}_{2}\)-errors (37) in this case. The error of the MOR exhibits the same magnitude as in the original stochastic Galerkin system.
In the technique (ii), we use a sparse grid quadrature of Smolyak type with level 3 based on the Clenshaw-Curtis rule. The scheme exhibits 7209 nodes in the 17-dimensional space, and negative weights arise. Table 3 illustrates the loss of sparsity in this approach. Again, all ROMs become asymptotically stable. The relative errors (37) of the MOR are shown in Fig. 5(ii). The errors between the ROMs and the stochastic Galerkin projection of the transformed systems, which we call the inherent errors, decay for increasing dimensions. However, the errors between these ROMs and the stochastic Galerkin system of the untransformed systems (13), which we call the comparison errors, are large and stagnate. This behavior indicates that the quadrature error is too large, even though a high number of nodes is used, because the original Galerkin system can be considered sufficiently accurate.
In the technique (iii), we define the mean value of the random variables as the reference parameter. We calculate the matrices of the transformed Galerkin system to analyze the definiteness. Figure 6 depicts the maximum eigenvalue of the symmetric part of \(\hat{E}^{\top }\hat{M} \hat{A}\) with M̂ from (29) as a function of the parameter θ from (30). We observe that the negative definiteness is lost for \(\theta \ge \theta _{0} \approx 0.04\). Although our relevant case \(\theta = 1\) is not dissipative, the stability is preserved in all ROMs. Moreover, the error of the MOR is not compromised.
We also measured computation times, shown in Table 4. The ROMs of dimension \(r=100\) are determined using the Arnoldi method and the three stability-preserving techniques. The computation time of a stabilization technique consists of all steps to calculate the reduced matrices (5) except for the determination of the projection matrix V, which is done by the Arnoldi method. In the technique (i), \(k=40\) nodes are used in the Gaussian quadrature. On the one hand, the technique (ii) causes a relatively large computational effort due to the high number of nodes in the sparse grid quadrature; the Arnoldi algorithm is also more expensive due to the loss of sparsity in the matrices. On the other hand, the technique (iii) requires negligible computational work, which shows that this approach is worth trying.
Bandpass filter
We examine the electric circuit of a bandpass filter shown in Fig. 7. A single input voltage is supplied, and a single output voltage drops at a load conductance. Modified nodal analysis [12] yields a linear system of DAEs of dimension \(n=23\). This DAE system has index one. The physical parameters are 7 capacitances, 7 inductances and 9 conductances (\(q=23\)). Figure 8 depicts the Bode plot of the system for a constant selection of the parameters. Furthermore, the system matrices are affine-linear functions of the parameters.
In the stochastic modeling, we introduce independent uniformly distributed random variables varying by 20% around their mean values. The truncated series (21) includes all basis polynomials up to degree two, i.e., \(m=300\). The stochastic Galerkin system (22) consists of linear DAEs, which have a finite \(\mathcal {H}_{2}\)-norm (3) for the defined outputs.
In the regularization (36), we choose the parameters \(\alpha = 10^{-10}\) and \(\beta = 10^{-5}\). Table 5 shows the properties of the regularized system. The system is asymptotically stable. Both Ê and \(\hat{E}_{\mathrm{reg}}\) are not symmetric. Thus the spectral abscissa, see Definition 1, of the symmetric part of \(\hat{A}_{\mathrm{reg}}\) is irrelevant. Multiplication by \(\hat{E}_{\mathrm{reg}}^{\top }\) from the left yields a system with a symmetric positive definite mass matrix, which is still not dissipative.
The one-sided Arnoldi method yields the projection matrices of the MOR for dimensions \(r=1,\ldots ,100\), where we choose the real number \(s=10^{6}\) as the expansion point. A loss of stability occurs in the reduction of both the DAE system and the regularized system, as shown in Table 6. We apply the stabilization techniques of Sect. 4.1 and Sect. 4.3 to the regularized system. The Lyapunov equations are solved with the identity matrix as the input matrix, using a direct method of linear algebra. Thus the critical behavior for small regularization parameters, as mentioned in Sect. 5.2, is avoided. The matrices of a transformed Galerkin system are never computed explicitly. All ROMs become asymptotically stable in both approaches.
Finally, we compute approximations of the \(\mathcal {H}_{2}\)-norms of the difference between the DAE system and the reduced systems. Figure 9 illustrates the relative errors (37) of the reductions. We recognize that the errors are nearly identical for the two stabilization techniques. The errors stagnate for reduced dimensions \(r>80\) in (ii)–(iv), because the total error is dominated by the regularization error in this part. Most importantly, the stabilization approaches do not compromise the error of the MOR.
Conclusions
We examined stability-preserving model order reduction of linear stochastic Galerkin systems using transformations to dissipative forms. Three approaches were analyzed. The transformation of the conventional Galerkin system represents an adequate method. The Galerkin projection of the transformed original systems features severe drawbacks in the case of large numbers of random parameters. The approach using a cheap transformation matrix based on a reference parameter is promising: although the dissipativity cannot be guaranteed for the transformed system, the numerical results of the test examples demonstrate that stability is preserved. Moreover, the error of the model order reduction does not increase with this stabilization.
Abbreviations
DAE: differential-algebraic equation
FOM: full-order model
MOR: model order reduction
ODE: ordinary differential equation
PCE: polynomial chaos expansion
POD: proper orthogonal decomposition
QoI: quantity of interest
ROM: reduced-order model
SG: stochastic Galerkin
References
 1.
Antoulas A. Approximation of large-scale dynamical systems. Philadelphia: SIAM; 2005.
 2.
Benner P, Gugercin S, Willcox K. A survey of projection-based model order reduction methods for parametric dynamical systems. SIAM Rev. 2015;57:483–531.
 3.
Benner P, Hinze M, ter Maten EJW, editors. Model reduction for circuit simulation. Lect. notes in electr. engng. vol. 74. Berlin: Springer; 2011.
 4.
Benner P, Mehrmann V, Sorensen DC, editors. Dimension reduction of largescale systems. Lect. notes in comput. sci. eng. vol. 45. Berlin: Springer; 2005.
 5.
Benner P, Schneider A. Balanced truncation model order reduction for LTI systems with many inputs or outputs. In: Edelmayer A, editor. Proceedings 19th international symposium on mathematical theory of networks and systems. 2010. p. 1971–4.
 6.
Benner P, Stykel T. Model order reduction of differential-algebraic equations: a survey. In: Ilchmann A, Reis T, editors. Surveys in differential-algebraic equations IV, differential-algebraic equations forum. Berlin: Springer; 2017. p. 107–60.
 7.
Castañé Selga R, Lohmann B, Eid R. Stability preservation in projection-based model order reduction of large scale systems. Eur J Control. 2012;18:122–32.
 8.
Ernst OG, Mugler A, Starkloff HJ, Ullmann E. On the convergence of generalized polynomial chaos expansions. ESAIM: M2AN. 2012;46:317–39.
 9.
Freitas FD, Pulch R, Rommes J. Fast and accurate model reduction for spectral methods in uncertainty quantification. Int J Uncertain Quantificat. 2016;6:271–86.
 10.
Freund R. Model reduction methods based on Krylov subspaces. Acta Numer. 2003;12:267–319.
 11.
Hammarling SJ. Numerical solution of the stable, non-negative definite Lyapunov equation. IMA J Numer Anal. 1982;2:303–23.
 12.
Ho CW, Ruehli A, Brennan P. The modified nodal approach to network analysis. IEEE Trans Circuits Syst. 1975;22:504–9.
 13.
Kramer B, Singler JR. A POD projection method for large-scale algebraic Riccati equations. Numer Algebra Control Optim. 2016;6:413–35.
 14.
Lohmann B, Eid R. Efficient order reduction of parametric and nonlinear models by superposition of locally reduced models. In: Roppenecker G, Lohmann B, editors. Methoden und Anwendungen der Regelungstechnik. Aachen: Shaker; 2009.
 15.
Lu A, Wachspress EL. Solution of Lyapunov equations by alternating direction implicit iteration. Comput Math Appl. 1991;21:43–58.
 16.
Manfredi P, Vande Ginste D, De Zutter D, Canavero FG. On the passivity of polynomial chaos-based augmented models for stochastic circuits. IEEE Trans Circuits Syst I, Regul Pap. 2013;60:2998–3007.
 17.
MATLAB, version 9.4.0.813654 (R2018a). Natick, Massachusetts: The Mathworks Inc.; 2018.
 18.
Mi N, Tan SXD, Liu P, Cui J, Cai Y, Hong X. Stochastic extended Krylov subspace method for variational analysis of on-chip power grid networks. In: Proc. ICCAD; 2007. p. 48–53.
 19.
Müller PC. Modified Lyapunov equations for LTI descriptor systems. J Braz Soc Mech Sci Eng. 2006;28:448–52.
 20.
Panzer H, Wolf T, Lohmann B. A strictly dissipative state space representation of second order systems. Automatisierungstechnik. 2012;60:392–6.
 21.
Penzl T. Numerical solution of generalized Lyapunov equations. Adv Comput Math. 1998;8:33–48.
 22.
Penzl T. A cyclic low-rank Smith method for large sparse Lyapunov equations. SIAM J Sci Comput. 2000;21:1401–18.
 23.
Prajna S. POD model reduction with stability guarantee. In: Proceedings of 42nd IEEE conference on decision and control, Maui, Hawaii, USA. 2003. p. 5254–8.
 24.
Pulch R. Stochastic collocation and stochastic Galerkin methods for linear differential-algebraic equations. J Comput Appl Math. 2014;262:281–91.
 25.
Pulch R. Model order reduction for stochastic expansions of electric circuits. In: Bartel A, Clemens M, Günther M, ter Maten EJW, editors. Scientific computing in electrical engineering SCEE 2014. Mathematics in industry. vol. 23. Berlin: Springer; 2016. p. 223–32.
 26.
Pulch R. Model order reduction and low-dimensional representations for random linear dynamical systems. Math Comput Simul. 2017;144:1–20.
 27.
Pulch R. Stability preservation in Galerkin-type projection-based model order reduction. Numer Algebra Control Optim. 2019;9:23–44.
 28.
Pulch R. Model order reduction for random nonlinear dynamical systems and low-dimensional representations for their quantities of interest. Math Comput Simul. 2019;166:76–92.
 29.
Pulch R. Frequency domain integrals for stability preservation in Galerkin-type projection-based model order reduction. 2018. arXiv:1808.04119.
 30.
Pulch R, Augustin F. Stability preservation in stochastic Galerkin projections of dynamical systems. SIAM/ASA J Uncertain Quantificat. 2019;7:634–51.
 31.
Pulch R, ter Maten EJW. Stochastic Galerkin methods and model order reduction for linear dynamical systems. Int J Uncertain Quantificat. 2015;5:255–73.
 32.
Schilders WHA, van der Vorst HA, Rommes J, editors. Model order reduction: theory, research aspects and applications. Mathematics in industry. vol. 13. Berlin: Springer; 2008.
 33.
Seydel R. Practical bifurcation and stability analysis. 3rd ed. Berlin: Springer; 2010.
 34.
Son NT, Stykel T. Solving parameterdependent Lyapunov equations using the reduced basis method with application to parametric model order reduction. SIAM J Matrix Anal Appl. 2017;38:478–504.
 35.
Sonday B, Berry R, Debusschere B, Najm H. Eigenvalues of the Jacobian of a Galerkinprojected uncertain ODE system. SIAM J Sci Comput. 2011;33:1212–33.
 36.
Sullivan TJ. Introduction to uncertainty quantification. Berlin: Springer; 2015.
 37.
ter Maten EJW, et al. Nanoelectronic coupled problems solutions—nanoCOPS: modelling, multirate, model order reduction, uncertainty quantification, fast fault simulation. J Math Ind. 2016;7:2.
 38.
Wolf T, Panzer H, Lohmann B. Model order reduction by approximate balanced truncation: a unifying framework. Automatisierungstechnik. 2013;61:545–56.
 39.
Xiu D. Numerical methods for stochastic computations: a spectral method approach. Princeton: Princeton University Press; 2010.
 40.
Zou Y, Cai Y, Zhou Q, Hong X, Tan SXD, Kang L. Practical implementation of the stochastic parameterized model order reduction via Hermite polynomial chaos. In: Proc. ASP-DAC; 2007. p. 367–72.
Acknowledgements
Not applicable.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Funding
Not applicable.
Author information
Contributions
Sole author. The author read and approved the final manuscript.
Corresponding author
Correspondence to Roland Pulch.
Ethics declarations
Competing interests
The author declares that he has no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
MSC
 65L05
 65L20
 65L80
 34C20
 34D20
Keywords
 Linear dynamical system
 Polynomial chaos
 Stochastic Galerkin method
 Model order reduction
 Asymptotic stability
 Lyapunov equation