
Reduced order multirate schemes in industrial circuit simulation

Abstract

In this paper the industrial application of Reduced Order Multirate (ROMR) schemes is presented. The paper contains the mathematical foundations of the ROMR schemes and elaborates on their construction using specific Model Order Reduction (MOR) techniques, in particular the Maximum Entropy Snapshot Sampling (MESS) method for generating a reduced basis and reduction by Gauß–Newton with Approximated Tensors (GNAT). The MESS method is also used to generate the basis for the gappy hyper-reduction method employed for the nonlinear function evaluations. For the multirate integration part, a Backward Differentiation Formula (BDF) approach is used in conjunction with a coupled-slowest-first multirate strategy. After introducing this numerical approach to industrial circuit simulation, validation experiments are performed. First a simple academic model is considered, and then an industrial test case provided by STMicroelectronics is simulated. A significant speedup in simulation time is achieved whilst accuracy and convergence are maintained.

1 Introduction

In integrated circuit design, there is a significant number of design possibilities under which the internal components need to be guaranteed to work. This leads to a whole range of explorations to ensure sound functionality of the design. These explorations are performed by numerical simulations of the circuit's mathematical model. Due to the ever increasing number of components, and thus of degrees of freedom in the model, the required simulation times may become prohibitively large.

Besides the sheer number of components inside the integrated circuit, a large contribution to the complexity of the mathematical model originates from the way these equations are derived. As the models grow, a state-space model with a minimal set of unknowns can no longer be generated automatically. Therefore, the mathematical models have to be derived through algorithmic network analysis. This automation comes at a cost: the resulting system of differential-algebraic equations (DAEs) is numerically harder to solve, and may contain redundant network variables.

To counter these ever increasing simulation costs, a multitude of approaches have been proposed in the past decades. For instance, the large original system can be partitioned into subsystems which each have their own characteristic rate of evolution through time. This property is capitalised upon by multirate (MR) time integration. Another approach is to exploit the redundancy in the mathematical model originating from the network analysis by using model order reduction (MOR). This technique aims to solve a model of reduced size that still approximates the solution of the original model. The main driver is that large coupled systems eligible for both MOR and MR techniques are encountered ever more often in industry, for instance at STMicroelectronics. This paper is therefore specifically aimed at the combination of these two techniques, and provides a review of their definition together with new integration approaches.

In Sect. 2, the continuous DAE problem is introduced and discretized so that it can be solved numerically. The backward differentiation formula is covered as the numerical method, and the notion of multirate time-integration [9] is introduced.

The model order reduction techniques considered in this paper are outlined in Sect. 3. The specific type of model order reduction we discuss is nonlinear model order reduction by reduced basis approaches, specifically the Gauß–Newton with approximated tensors method [6]. This robust model order reduction method is a nonlinear Petrov–Galerkin projection method equipped with a function-sampling hyper-reduction scheme. It operates at the level of the nonlinear systems arising at each time step, after discretization. In the last part of this section the combination of multirate and model order reduction techniques is described.

In the penultimate section numerical experiments are performed on both academic and industrial test cases. The academic test case is used to verify the implementation of the circuit parser and to check the validity of the numerical scheme. Then, the implemented circuit parser and simulator are applied to an industrial test case provided by STMicroelectronics. Finally, the last section offers conclusions and provides an outlook.

2 Problem formulation

The set of equations used to describe an electrical circuit is constructed according to the topological structure of the network. This often results in a coupled system of implicit differential equations and nonlinear algebraic equations, or more generally a system of differential-algebraic equations (DAEs)

$$ f(\dot{x},x,t) = 0\quad \text{with } \det{ \frac{\partial f}{\partial \dot{x}}}\equiv 0. $$
(2.1)

This system may represent ill-posed problems and is in general more difficult to solve numerically than the more standard systems of ordinary differential equations (ODEs).

To solve these systems numerically we start from consistent initial values. Then the time domain is discretised into time points \(t_{0}, t_{1}, \ldots, t_{N}\), and the solution at each of these time points is approximated by an implicit linear numerical integration formula. A direct approach, as proposed in [13], is to apply the backward differentiation formula (BDF) method. In this paper we restrict ourselves to a first order BDF, but the idea can be generalised to higher order methods, taking some smoothness considerations into account. This multistep method is applied to a DAE system by using the ϵ-embedding method.

However, we want to solve system (2.1), which can be a fully implicit differential-algebraic system. For a linearly implicit DAE system, \(M\dot{x} = f(x)\), the multistep scheme is given by

$$ M\sum_{i=0}^{k} \alpha _{i} x_{n+i} = h\sum_{i=0}^{k} \beta _{i} f(x_{n+i}). $$
(2.2)

In general form, applying Equation (2.2) to an implicit nonlinear system of DAEs at time step \(t_{n}\) yields

$$ f\Biggl(\frac{1}{h}\sum_{i=0}^{k} \frac{\alpha _{i}}{\beta _{i}} x_{n+i},x_{n},t_{n} \Biggr) = 0. $$
(2.3)

The numerical solution of the system is thus reduced to the solution of the system of nonlinear equations (2.3). This system is solved iteratively by applying Newton's method.
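To make the time-stepping concrete, the following minimal Python sketch implements one first-order BDF (implicit Euler) step for an implicit DAE \(f(\dot{x},x,t)=0\), solving the nonlinear system (2.3) with a plain Newton iteration and a finite-difference Jacobian. The test DAE and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bdf1_step(f, x_n, t_n, h, newton_tol=1e-10, max_iter=20):
    """One first-order BDF (implicit Euler) step for f(xdot, x, t) = 0.

    Solves the nonlinear system F(x) = f((x - x_n)/h, x, t_n + h) = 0
    for the next state by Newton's method with a finite-difference
    Jacobian (illustrative, not optimised)."""
    t_next = t_n + h
    F = lambda x: f((x - x_n) / h, x, t_next)
    x = x_n.copy()                      # initial guess: previous state
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < newton_tol:
            break
        # finite-difference Jacobian dF/dx (dense, for illustration only)
        m, eps = x.size, 1e-8
        J = np.empty((m, m))
        for j in range(m):
            dx = np.zeros(m)
            dx[j] = eps
            J[:, j] = (F(x + dx) - r) / eps
        x = x + np.linalg.solve(J, -r)
    return x

# Hypothetical semi-explicit index-1 DAE: x0' = -x0 + x1,  0 = x0 - 2*x1
def f(xdot, x, t):
    return np.array([xdot[0] + x[0] - x[1],
                     x[0] - 2.0 * x[1]])

x = np.array([1.0, 0.5])                # consistent initial values
for n in range(10):
    x = bdf1_step(f, x, t_n=0.01 * n, h=0.01)
print(x)
```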

To set up the system of DAEs, Modified Nodal Analysis (MNA) [8] is used to obtain the network equations. The Kirchhoff Current Law is applied to each node, except the node that is considered the ground node. The incidence matrix is then defined as a collection of incidence matrices related to each type of element,

$$ A = \{A_{R},A_{L},A_{C},A_{V},A_{I} \}, $$

with \(A_{\Omega }\in \{0,+1,-1\}^{n_{u}\times n_{\Omega}}\), where \(n_{\Omega}\) is the cardinality of the set of each type of network element. Using these incidence matrices, we can relate the branch voltages in a loop and the currents accumulating in a node by applying both the Kirchhoff Current Law and the Kirchhoff Voltage Law to each node, resulting in

$$\begin{aligned}& A_{C}\dot{q} + A_{R}\mathcal{R}\bigl(A_{R}^{\top }u,t \bigr) + A_{L}\jmath _{L} +A_{V} \jmath _{V} + A_{I} i(t) = 0, \end{aligned}$$
(2.4a)
$$\begin{aligned}& \dot{\phi}-A_{L}^{\top }u =0, \end{aligned}$$
(2.4b)
$$\begin{aligned}& v(t) - A_{V}^{\top }u = 0, \end{aligned}$$
(2.4c)
$$\begin{aligned}& q - q_{C}\bigl(A_{C}^{\top }u\bigr) =0, \end{aligned}$$
(2.4d)
$$\begin{aligned}& \phi - \phi _{L}(\jmath _{L}) =0. \end{aligned}$$
(2.4e)

The unknowns q, ϕ, u, \(\jmath _{L}\) and \(\jmath _{V}\) are the charges, fluxes, node voltages, inductor currents and voltage source currents, respectively. All these quantities are time dependent and are combined into one state vector \(x(t)\in \mathbb{R}^{m}\) of unknowns, whose dimension is given by the cumulative dimensions of the individual quantities. The network equations can now be stated in compact form

$$ \frac{d}{dt}q\bigl(x(t)\bigr)+j\bigl(x(t)\bigr) = 0, $$
(2.5)

where q and j are mappings from \(\mathbb{R}^{m}\) to \(\mathbb{R}^{m}\) related to the network elements.

The numerical integration method above is considered a single-rate time-integration method, as it integrates each part of the equation with the same step-size. Opposed to this classical approach are the multirate time-integration methods, which use a different step-size, or even a different integration method, for parts of the equations with different dynamical behaviour.

First, a multirate method needs to partition the variables of the dynamical system into sets with different temporal characteristics. This partition can be made either by manual selection or automatically. For now, we partition the full system into a fast and a slow part. This can be extended to a partitioning into k subsystems, but for convenience a two-subsystem partitioning, i.e. a fast and a slow subsystem, is used as example here.

Using this partitioning on the compact form, Equation (2.5), let \(B_{\mathrm{F}} \in \{0,1\}^{m_{\mathrm{F}} \times M}\) and \(B_{\mathrm{S}} \in \{0,1\}^{m_{\mathrm{S}} \times M}\) be selection operators, where \(m_{\mathrm{F}} + m_{\mathrm{S}} = M\), with the following orthogonality properties: \(B_{\mathrm{F}}B_{\mathrm{F}}^{\top }= I_{\mathrm{F}}\), \(B_{\mathrm{S}} B_{\mathrm{S}}^{\top }= I_{\mathrm{S}}\) and \(B_{\mathrm{F}}B_{\mathrm{S}}^{\top }= B_{\mathrm{S}}B_{\mathrm{F}}^{\top }= 0\), where \(I_{\{\mathrm{F},\mathrm{S}\}}\) is the identity matrix of the respective subsystem dimension \(m_{\mathrm{F}}\) or \(m_{\mathrm{S}}\). Then the variables and functions of each subsystem can be split into parts \(x_{\mathrm{F}}(t)\in \mathbb{R}^{m_{\mathrm{F}}}\), \(x_{\mathrm{S}}(t)\in \mathbb{R}^{m_{\mathrm{S}}}\). Applying this partition to the network equations (2.5) results in the following systems

$$\begin{aligned}& 0 = \frac{d}{dt}\bigl(q_{\mathrm{F}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}}x)\bigr) + j_{\mathrm{F}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}} x), \end{aligned}$$
(2.6a)
$$\begin{aligned}& 0 = \frac{d}{dt}\bigl(q_{\mathrm{S}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}}x)\bigr) + j_{\mathrm{S}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}} x), \end{aligned}$$
(2.6b)

where we have the following definitions

$$\begin{aligned}& x = B_{\mathrm{F}}^{\top }x_{\mathrm{F}} + B_{\mathrm{S}}^{\top }x_{ \mathrm{S}}, \end{aligned}$$
(2.7a)
$$\begin{aligned}& q(t,x) = B_{\mathrm{F}}^{\top }q_{\mathrm{F}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}}x) + B_{\mathrm{S}}^{\top }q_{\mathrm{S}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}}x), \end{aligned}$$
(2.7b)
$$\begin{aligned}& j(t,x) = B_{\mathrm{F}}^{\top }j_{\mathrm{F}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}}x) + B_{\mathrm{S}}^{\top }j_{\mathrm{S}}(t, B_{\mathrm{F}} x, B_{\mathrm{S}}x). \end{aligned}$$
(2.7c)
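As an illustration of this splitting, the following sketch realises the selection operators \(B_{\mathrm{F}}\) and \(B_{\mathrm{S}}\) as Boolean matrices, splits a state vector and reassembles it as in Equation (2.7a). The index sets and dimensions are hypothetical and only serve to show the mechanics.

```python
import numpy as np

def selection_operator(idx, M):
    """Build B in {0,1}^(m_sub x M) selecting the entries listed in idx."""
    B = np.zeros((len(idx), M))
    B[np.arange(len(idx)), idx] = 1.0
    return B

M = 6
fast_idx = np.array([0, 1])            # hypothetical fast unknowns
slow_idx = np.array([2, 3, 4, 5])      # hypothetical slow unknowns
B_F = selection_operator(fast_idx, M)
B_S = selection_operator(slow_idx, M)

x = np.arange(1.0, 7.0)
x_F, x_S = B_F @ x, B_S @ x            # split into subsystem states
x_back = B_F.T @ x_F + B_S.T @ x_S     # reassemble as in Eq. (2.7a)

assert np.allclose(B_F @ B_F.T, np.eye(2))   # orthogonality properties
assert np.allclose(B_F @ B_S.T, 0.0)
assert np.allclose(x_back, x)
```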

To apply multirate time-integration methods to DAEs it is assumed that each subsystem is itself well posed, which means that its DAE-index should be less than or equal to that of the full DAE. As we are partitioning the network equations of circuit models, it is known that they are composed of sub-circuits or natural phenomena in a hierarchical way. If we partition based on this hierarchy, we can easily check whether the subsystems fulfil the index requirements and thus obtain viable partitions.

Several approaches are available to implement multirate time-integration methods, so we specify here which one is used. As has been shown in [1] and [2], a feasible choice is the coupled-slowest-first integration approach combined with the implicit Euler method, i.e. a first order BDF method.

First the whole system is solved for the macro-step, \(t_{n} \to t_{n+1} = t_{n} +H\),

$$ f\Biggl(\frac{1}{H}\sum_{i=0}^{k} \frac{\alpha _{i}}{\beta _{i}} x_{n+i},x_{n},t_{n} \Biggr) = 0. $$
(2.8)

However, the fast components \(x_{\mathrm{F}}\) of the full state solution x are inaccurate, and are thus discarded. Then, using interpolated values for the slow variables between \(t_{n}\) and \(t_{n+1}\), the following system is solved for the fast solutions. Here \(H = h\cdot f_{\mathrm{mr}}\), where \(f_{\mathrm{mr}}\) is known as the multirate factor.

$$\begin{aligned} 0 ={}& q_{\mathrm{F}}\Biggl(t_{n+(l+k)/m},\frac{1}{h}\sum_{i=0}^{k} \frac{\alpha _{i}}{\beta _{i}} x_{\mathrm{F},n+(l+i)/m}, \frac{1}{h} \sum_{i=0}^{k} \frac{\alpha _{i}}{\beta _{i}}\bar{x}_{\mathrm{S},n+(l+i)/m}\Biggr) \\ &{}- j_{\mathrm{F}}(t_{n+(l+k)/m},x_{\mathrm{F},n+(l+k)/m}, \bar{x}_{\mathrm{S},n+(l+k)/m}). \end{aligned}$$
(2.9)

When, for stability reasons, the interpolated values \(\bar{x}_{\mathrm{S},n+(l+i)/m}\) are obtained by constant interpolation based on \(x_{\mathrm{S},{n+k}}\), the coupled-slowest-first Euler approach is unconditionally A-stable.
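The following sketch outlines one coupled-slowest-first macro-step with implicit Euler for an explicitly partitioned ODE system \(\dot{x}=g(x,t)\). It is a simplified stand-in for the DAE formulation (2.9): the right-hand side, partition sizes and step sizes are hypothetical, and the slow variables are held constant during the micro-steps, as in the constant interpolation discussed above.

```python
import numpy as np
from scipy.optimize import fsolve

def multirate_macro_step(g, x, t, H, f_mr, n_fast):
    """One coupled-slowest-first macro-step t -> t + H with implicit Euler.

    g(x, t) is the right-hand side of x' = g(x, t); the first n_fast
    entries of x form the fast partition. Schematic sketch only."""
    # 1) coupled macro-step for the full system (slow values are kept)
    x_macro = fsolve(lambda y: y - x - H * g(y, t + H), x)
    x_S_bar = x_macro[n_fast:]          # constant interpolation of slow part

    # 2) refine the fast partition with micro-steps of size h = H / f_mr
    h = H / f_mr
    x_F = x[:n_fast].copy()
    for l in range(1, f_mr + 1):
        def res(y_F):
            y = np.concatenate([y_F, x_S_bar])
            return y_F - x_F - h * g(y, t + l * h)[:n_fast]
        x_F = fsolve(res, x_F)
    return np.concatenate([x_F, x_S_bar])

# hypothetical test system: fast variable x0, slow variable x1
g = lambda x, t: np.array([-50.0 * (x[0] - np.cos(x[1])), -0.5 * x[1]])
x = np.array([1.0, 1.0])
for n in range(5):
    x = multirate_macro_step(g, x, t=0.1 * n, H=0.1, f_mr=20, n_fast=1)
print(x)
```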

3 Nonlinear model order reduction

Due to the differential-algebraic structure of the generated network equations, standard techniques used for ODE model order reduction, such as a direct reduction through a Galerkin projection scheme, may yield unsolvable reduced order models.

To circumvent this and preserve the solvability of the numerical model, the Gauß–Newton with Approximated Tensors (GNAT) model order reduction approach is utilized, [6]. This is combined with a gappy data reconstruction method, [14], for hyper-reduction.

These reduced basis model order reduction approaches utilise a basis obtained by the Maximum Entropy Snapshot Sampling (MESS) method, [10], as proposed in [2]. To illustrate this approach, this section contains an overview of the reduction, hyper-reduction and basis-construction methods.

3.1 Gauß–Newton with approximated tensors

As GNAT operates on a discrete level, we assume that Equation (2.1) is solved by a linear implicit time integrator; see Sect. 2 for a more detailed description. Then, for \(n_{t}\) time steps, a sequence of \(n_{t}\) systems of nonlinear equations has to be solved, with each system, defined at one time step, of the form

$$ R(x) = 0, $$
(3.1)

where \(x\in \mathbb{R}^{M}\) and the residual mapping \(R:\mathbb{R}^{M} \rightarrow \mathbb{R}^{M}\). For ease of notation, the time dependencies have been omitted as we consider only one time instance and one input vector.

The GNAT approach for reduction is to use a projection to search for the approximated solution in the incremental affine trial subspace \(x^{0} + \mathcal{V}\subset \mathbb{R}^{M}\), with the initial condition \(x^{0} \in \mathbb{R}^{M}\). The incremental subspace is used to obtain consistent Petrov–Galerkin projections, see Appendix A of [6]. The approximated state vector is then given by

$$ \tilde{x} = x^{0} + V_{x}x_{\mathrm{r}}, $$
(3.2)

where \(V_{x}\in \mathbb{R}^{M\times r}\) is the r-dimensional projection basis for \(\mathcal{V}\), which is not yet defined, and \(x_{\mathrm{r}}\) denotes the reduced incremental vector of the state vector. Substituting Equation (3.2) into Equation (3.1) results in an overdetermined system of M equations and r unknowns. Since \(V_{x}\) is a matrix with full column rank, it is possible to solve this system by a minimisation in the least-squares sense through

$$ \min_{\tilde{x}\in x^{0}+\mathcal{V}} \bigl\Vert R(\tilde{x}) \bigr\Vert _{2}. $$
(3.3)

This approach by residual minimisation is equivalent to performing a Petrov–Galerkin projection where the test basis is given by \(\frac{\partial R}{\partial x}(V_{x})\), [5]. This nonlinear least-squares problem is solved by the Gauß–Newton method, leading to the iterative process, for \(k= 1,\ldots,K\), of solving

$$ s^{k} = \min_{a\in \mathbb{R}^{r}} \bigl\Vert J^{k}V_{x}a+R^{k} \bigr\Vert _{2}, $$
(3.4)

and updating the search value \(w^{k}_{r}\) with

$$ w_{r}^{k+1} = w^{k}_{r} + s^{k}, $$
(3.5)

where K is defined through a convergence criterion, with initial guess \(w^{0}_{r}\), \(R^{k} \equiv R(x^{0} + V_{x}w_{r}^{k})\) and \(J^{k}\equiv \frac{\partial R}{\partial x}(x^{0} + V_{x} w_{r}^{k})\). Here \(J^{k}\) is the full order Jacobian of the residual at each iteration step k. Since the computation of this Jacobian, which in circuit simulation is used to solve for the next step, scales with the original full dimension of Equation (3.1), it is a computational bottleneck. This bottleneck can be circumvented by the application of hyper-reduction methods, for which this paper utilises a gappy data reconstruction method.
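A compact sketch of the Gauß–Newton iteration (3.4)–(3.5) is given below, with the full-order residual and Jacobian evaluated exactly, i.e. before any hyper-reduction. The residual, Jacobian and basis used in the example are hypothetical stand-ins.

```python
import numpy as np

def gauss_newton_rom(R, J, V_x, x0, w0, max_iter=20, tol=1e-10):
    """Gauss-Newton iteration (3.4)-(3.5) for min ||R(x0 + V_x w)||_2.

    R and J return the full-order residual and Jacobian; hyper-reduction
    is omitted here for clarity."""
    w = w0.copy()
    for _ in range(max_iter):
        x_tilde = x0 + V_x @ w
        Rk, Jk = R(x_tilde), J(x_tilde)
        # least-squares step: s = argmin_a || Jk V_x a + Rk ||_2
        s, *_ = np.linalg.lstsq(Jk @ V_x, -Rk, rcond=None)
        w = w + s
        if np.linalg.norm(s) < tol:
            break
    return x0 + V_x @ w, w

# hypothetical nonlinear residual on R^4 with a 2-dimensional trial basis
A = np.diag([1.0, 2.0, 3.0, 4.0])
R = lambda x: A @ x + 0.1 * x**3 - np.ones(4)
J = lambda x: A + np.diag(0.3 * x**2)
V_x, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((4, 2)))
x_rom, w = gauss_newton_rom(R, J, V_x, x0=np.zeros(4), w0=np.zeros(2))
```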

3.2 Hyper-reduction by gappy data reconstruction

The evaluation of the nonlinear function \(R(x^{0} + V_{x} w_{r}^{k})\) has a computational complexity that still depends on the size of the full system. To reduce the complexity of this evaluation a gappy data reconstruction, based on [7], is applied. Like the GNAT approach, gappy data reconstruction uses a reduced basis to reconstruct the data. It starts by defining a mask vector n for a solution state u as

$$\begin{aligned}& n_{j} = 0 \quad \text{if } u_{j} \text{ is missing}, \\& n_{j} = 1 \quad \text{if } u_{j} \text{ is known}, \end{aligned}$$

where j denotes the j-th element of each vector. The mask vector n is applied point-wise to a vector by \((n,x)_{j}=n_{j} x_{j}\), which sets all the unobserved values to 0. Then, the gappy inner product can be defined as \((a,b)_{n}=((n,a),(n,b))\), i.e. the inner product of the two vectors after masking. The induced norm is then \(( \Vert x \Vert _{n})^{2}=(x,x)_{n}\). Considering the reduction basis obtained by MESS, \(V_{\mathrm{gap}}=\{v^{i}\}_{i=1}^{r}\), we can now construct an intermediate “repaired” full size vector ũ from a reduced vector u with only g known elements by

$$ \tilde{u} \approx \sum_{i=1}^{r} b_{i}v^{i}, $$
(3.6)

where the coefficients \(b_{i}\) need to minimise an error E between the original and repaired vector, which is defined as

$$ E = \Vert u-\tilde{u} \Vert ^{2}_{n}. $$
(3.7)

This minimisation is done by solving the linear system

$$ Mb=f, $$
(3.8)

where

$$ M_{ij} = \bigl(v^{i},v^{j} \bigr)_{n},\quad \text{and}\quad f_{i} = \bigl(u,v^{i}\bigr)_{n}. $$
(3.9)

From this solution ũ is constructed. The complete vector is then reconstructed by mapping the reduced vector's elements to their original indices and filling the remaining entries with the reconstructed values.
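A small sketch of the gappy reconstruction (3.6)–(3.9) is given below: given a mask n, a basis and a partially known vector, it assembles and solves \(Mb=f\) and fills in the missing entries. The basis and mask in the example are hypothetical and only illustrate the mechanics.

```python
import numpy as np

def gappy_reconstruct(u, n, V):
    """Gappy data reconstruction: recover the missing entries of u.

    u : full-length vector with arbitrary values at the masked positions
    n : mask, 1 where u is known and 0 where it is missing
    V : columns v^i of the reduction basis (e.g. obtained by MESS)"""
    gap = lambda a, b: np.dot(n * a, n * b)          # gappy inner product
    r = V.shape[1]
    M = np.array([[gap(V[:, i], V[:, j]) for j in range(r)] for i in range(r)])
    f = np.array([gap(u, V[:, i]) for i in range(r)])
    b = np.linalg.solve(M, f)                        # minimises ||u - u_tilde||_n
    u_tilde = V @ b
    # keep the known entries, fill the gaps with the reconstruction
    return np.where(n == 1, u, u_tilde)

# illustrative use: a 2-column basis, two entries of u unobserved
V = np.linalg.qr(np.arange(12.0).reshape(6, 2) + np.eye(6, 2))[0]
u_true = V @ np.array([1.0, -2.0])
n = np.array([1, 1, 0, 1, 0, 1])
u_rec = gappy_reconstruct(np.where(n == 1, u_true, 0.0), n, V)
assert np.allclose(u_rec, u_true)
```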

3.3 Reduced basis generation

As stated previously, both the nonlinear model order reduction method and the hyper-reduction method use a reduced basis. To generate such a basis, snapshot-based reduced basis methods are generally used. A prominent method in the literature is the proper orthogonal decomposition.

However, as shown in [3, 10], the way the proper orthogonal decomposition framework extracts information from the high-fidelity snapshot matrix is inherently linear. When used for nonlinear problems, it removes high-frequency components that are present in and relevant to the dynamical evolution of these systems. Therefore, the MESS method for reduced basis generation is used.

3.3.1 Maximum entropy snapshot sampling

Let m and n be positive integers with \(m \gg n > 1\). Define a finite sequence \(X = (x_{1},x_{2}, \ldots,x_{n})\) of numerically obtained states \(x_{j}\in \mathbb{R}^{m}\) at time instances \(t_{j}\in \mathbb{R}\), with \(j \in \{1,2,\ldots,n\}\), of a dynamical system governed by either ODEs or DAEs. Given a probability distribution p of the states of the system, the second-order Rényi entropy of the sample X is

$$ H^{(2)}_{p}(X) = -\log \sum _{j=1}^{n} p(x_{j})^{2} = -\log \mathbb{E}\bigl(p(X)\bigr), $$
(3.10)

with \(p_{j} \equiv p(x_{j})\) and where \(\mathbb{E}(p(X))\) is the expected value of the probability distribution p with respect to p itself. When n is large enough, according to the law of large numbers, the average of \(p_{1}, p_{2}, \ldots, p_{n}\) almost surely converges to their expected value,

$$ \frac{1}{n}\sum_{j=1}^{n}p(x_{j}) \rightarrow \mathbb{E}\bigl(p(X)\bigr)\quad \text{as }n\rightarrow \infty , $$
(3.11)

thus each \(p(x_{j})\) can be approximated by the sample's average sojourn time or relative frequency of occurrence. To obtain this frequency of occurrence, consider a norm \(\Vert \cdot \Vert \) on \(\mathbb{R}^{m}\). The notion of occurrence can then be translated into a proximity condition. In particular, for each \(x_{j}\in \mathbb{R}^{m}\) define the open ball that is centred at \(x_{j}\) and whose radius is \(\epsilon >0\),

$$ B_{\epsilon}(x) = \bigl\{ y\in \mathbb{R}^{m}\mid \Vert x - y \Vert < \epsilon \bigr\} , $$
(3.12)

and introduce the characteristic function with values

$$ \chi _{i}(x) = \textstyle\begin{cases} 1, & \text{if }x\in B_{\epsilon}(x_{i}), \\ 0, & \text{if }x\notin B_{\epsilon}(x_{i}). \end{cases} $$
(3.13)

Under the aforementioned considerations, the entropy of X can be estimated by

$$ \hat{H}^{(2)}_{p}(X) = -\log \frac{1}{n^{2}}\sum_{i=1}^{n}\sum _{j=1}^{n} \chi _{i}({x}_{j}). $$
(3.14)

Provided it exists, the limit of the evolution of \(\hat{H}^{(2)}_{p}\) measures the sensitivity of the evolution of the system itself [4, §6.6]. Then, for n large enough, a reduced sequence \(X_{\mathrm{r}}=(\bar{x}_{j_{1}},\bar{x}_{j_{2}},\ldots ,\bar{x}_{j_{r}})\), with \(r\leq n\), is sampled from X. This is done by requiring that the entropy of \(X_{\mathrm{r}}\) is a strictly increasing function of the index \(k\in \{1,2,\ldots ,r\}\) [11]. The state vector \(\bar{x}_{j_{k}}\) added to the sampled snapshot sequence is the average value of all states in the selected ϵ-ball. A reduced basis \(V_{x}\) is then generated from \(X_{\mathrm{r}}\) by any orthonormalization process. It has been shown that any such basis guarantees that the Euclidean reconstruction error of each snapshot is bounded from above by ϵ, while a similar bound holds true for future snapshots, up to a specific time-horizon; see Theorems 3.3 and 3.4 of [10].
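The following sketch illustrates a simplified MESS selection: a snapshot is accepted whenever it lies outside the ϵ-balls (3.12) of all previously accepted snapshots, so that the entropy estimate (3.14) of the sampled sequence strictly increases, and a basis is then obtained by QR orthonormalisation. As a simplifying assumption it keeps the accepted snapshot itself rather than the ϵ-ball average described above, and ϵ is fixed by hand; the snapshot data are hypothetical.

```python
import numpy as np

def mess_basis(X, eps):
    """Simplified Maximum Entropy Snapshot Sampling.

    X   : snapshot matrix, one state per column
    eps : radius of the proximity balls B_eps in (3.12)

    A snapshot is accepted whenever it lies outside the eps-ball of
    every previously accepted snapshot, so the estimated entropy (3.14)
    of the sampled sequence strictly increases. The selected snapshots
    are orthonormalised by a QR decomposition to obtain V_x."""
    selected = [X[:, 0]]
    for j in range(1, X.shape[1]):
        dists = [np.linalg.norm(X[:, j] - s) for s in selected]
        if min(dists) >= eps:
            selected.append(X[:, j])
    X_r = np.column_stack(selected)
    V_x, _ = np.linalg.qr(X_r)
    return V_x, X_r

# illustrative snapshots of a decaying two-mode trajectory in R^50
t = np.linspace(0.0, 1.0, 200)
modes = np.random.default_rng(1).standard_normal((50, 2))
X = modes @ np.vstack([np.exp(-t), np.exp(-20 * t) * np.sin(30 * t)])
V_x, X_r = mess_basis(X, eps=1.0)
print(V_x.shape)   # (50, r) with r much smaller than 200
```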

To estimate the parameter ϵ, which determines the degree of reduction within the MESS framework, the optimisation approach of [3] is employed. As no user input is then required, this makes MESS a parameter-free reduction method.

3.4 Reduced order multirate

Combining the previously seen techniques, we apply the model order reduction only in the macro-step of the multirate integration scheme. Due to the partitioned nature of the full system, a special type of reduced basis matrix is needed, as only the slow part is to be reduced. To illustrate an implementation of a reduced order multirate scheme, a first order implicit Euler scheme is described. The new reduction matrix is given by \(\Phi \in \mathbb{R}^{M\times (n_{\mathrm{F}}+r)}\). Considering the split system defined by Equations (2.7a)–(2.7c), the reduction matrix is defined by

$$ \Phi = \begin{pmatrix} I_{n_{\mathrm{F}}} & 0 \\ 0 & V_{r} \end{pmatrix}. $$
(3.15)

Then in each macro time-step only the following reduced system needs to be solved. This nonlinear least-squares problem is again solved by the Gauß–Newton method, now using Φ,

$$ s^{k} = \min_{a\in \mathbb{R}^{n_{\mathrm{F}}+r}} \bigl\Vert J_{r}^{k}\Phi a+R^{k} \bigr\Vert _{2}, $$
(3.16)

where R is defined for the whole macro-step system as

$$ R(x) = q\biggl(t_{n+1},\frac{x - x_{n}}{H}\biggr) - j(t_{n+1},x) = 0. $$
(3.17)

Note, however, that \(J_{r}^{k}\) is now the full order Jacobian approximated by using the gappy-MESS function evaluations. Once the solution for the macro time-step is obtained, the slow partition \(x_{\mathrm{S}}\) of this solution is used for the interpolated values needed in the intermediate fast steps.
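To illustrate the combination, the sketch below assembles the block-diagonal trial basis Φ of Equation (3.15) and performs one reduced Gauß–Newton update in the spirit of (3.16), with an exact Jacobian in place of the gappy approximation. All dimensions and the stand-in residual are hypothetical.

```python
import numpy as np

def build_phi(n_F, V_r):
    """Assemble the block-diagonal ROM-multirate trial basis of Eq. (3.15)."""
    m_S, r = V_r.shape
    Phi = np.zeros((n_F + m_S, n_F + r))
    Phi[:n_F, :n_F] = np.eye(n_F)      # fast unknowns are kept unreduced
    Phi[n_F:, n_F:] = V_r              # slow unknowns live in the MESS basis
    return Phi

def reduced_gauss_newton_step(R, J, Phi, x):
    """One Gauss-Newton update in the spirit of Eq. (3.16)."""
    a, *_ = np.linalg.lstsq(J(x) @ Phi, -R(x), rcond=None)
    return x + Phi @ a

# hypothetical sizes: 2 fast unknowns, 8 slow unknowns reduced to rank 3
rng = np.random.default_rng(2)
V_r, _ = np.linalg.qr(rng.standard_normal((8, 3)))
Phi = build_phi(n_F=2, V_r=V_r)
A = np.eye(10) + 0.1 * rng.standard_normal((10, 10))
R = lambda x: A @ x - np.ones(10)      # stand-in macro-step residual (3.17)
J = lambda x: A
x = reduced_gauss_newton_step(R, J, Phi, x=np.zeros(10))
```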

4 Experiments

The reduced order multirate method, utilising the discussed techniques, and the netlist parser used to create the network equations are verified through numerical experiments. For the first experiment a simple academic circuit consisting of resistors, capacitors and diodes is considered. Then a real world industrial circuit, as provided by STMicroelectronics, is considered, although some simplifying assumptions have been made about the parameters of the underlying network components.

4.1 Academic experiment

The academic circuit shown in Fig. 1 is a combination of a short diode chain and a long ladder of diodes and resistors. Similar to a standard diode chain model [12], this model contains sufficient redundancy to make it eligible for model order reduction. Furthermore, increasing the resistance of each \(R_{i}\) with \(R < R_{i}< R_{i+1}\) makes the ladder part of the circuit behave on a slower timescale. This makes the circuit well suited for time integration with the reduced order multirate approach. The simulation parameters are given in Table 1.

Figure 1

The academic diode chain test model with redundancy

Table 1 Simulation parameters of the academic model

With the multirate factor \(f_{\mathrm{mr}} = 20\), it is evident that the multirate integration approach is more accurate than the single rate method for the same macro step size, see Fig. 2. The order, however, is equal due to the pollution introduced by the macro step. This increase of accuracy comes at a cost: the right part of Fig. 2 shows that the multirate technique carries a burden of extra computational effort. This additional burden is then undercut by applying the MOR, which pushes the cost-versus-accuracy curve further to the left.

Figure 2

Computational effort of the numerical schemes, where the error is plotted against the computation time in seconds. The error is defined as the absolute difference between the computed voltage and the reference voltage at the output node

4.2 Solar panel simulation

As a final industrial application, STMicroelectronics has provided a test case from their development team. The test case concerns a transient analysis of a solar panel circuit. We consider a photovoltaic cell that is composed of two layers of semiconducting material, e.g. silicon or gallium arsenide, with different doping. The process of converting solar radiation into electricity is based on the photovoltaic effect. To model the photovoltaic effects taking place in the solar panel, a mathematical model based on an equivalent circuit is used. When the cell is lit, the generated power can be modelled as a current source. The resulting equivalent circuit of the photovoltaic cell is shown in Fig. 3.

Figure 3

The equivalent circuit of a photovoltaic cell

The full solar panel circuit is a conglomeration of photovoltaic cells, Fig. 3, that are linked in series and parallel. This grid of photovoltaic cells is then connected to a DC-DC buck converter to stabilise the output. The full schematic of the solar panel circuit is shown in Fig. 4. As the voltages and currents generated by the solar panel vary significantly slower than the operating voltages in the buck converter there is an inherent multirate advantage. Due to the replication in the structure of the solar panel, model order reduction can be used to significantly reduce the complexity of the panel simulation.

Figure 4

The industrial solar panel test model with redundancy. Here a 4 by 3 grid of photovoltaic cells is linked together in series and parallel and connected to a DC/DC converter

The full test case for the silicon photovoltaic solar panel is run for a series and parallel grid that is 35 by 35, which makes the full dimensional system consist of 2422 equations. The reduction parameter selection procedure provides \(\epsilon = 0.43\), which leads to quite a significant reduction. As shown in Table 2, the dimension of the slow subsystem of photovoltaic cells is decreased from 2416 to only 6 equations, and the hyper-reduced dimension of the nonlinear function is reduced to only 8 equations.

Table 2 Simulation parameters of the industrial model

From Fig. 5 we see that the results achieved for the academic test case (Fig. 2) are replicated for the industrial test case. Due to the model order reduction there is a substantial decrease in computation time, with the reduced order multirate scheme being roughly 6.35 times faster than the multirate integration scheme, without losing accuracy.

Figure 5

Computational effort of the numerical schemes applied to the solar panel

5 Conclusion and outlook

In this paper the numerical approach for implementing the reduced order multirate time-integration method has been outlined in the context of circuit simulation with the novel BDF approach. Using Modified Nodal Analysis for the generation of the network equations has been shown to be an effective and efficient approach. The resulting equations are suitable for nonlinear model order reduction, as has been shown by the application of the Gauß–Newton with Approximated Tensors method extended with a gappy data reconstruction approach. The reduced bases for both model order reduction techniques are obtained by the Maximum Entropy Snapshot Sampling method, with the reduction parameter determined by a parameter-free optimisation method. The whole mathematical framework has been applied to both an academic and an industrial test case, the latter supplied by STMicroelectronics. A significant reduction in computation time is observed whilst the accuracy of the multirate approach is closely approximated. For actual use in the optimisation flow of STMicroelectronics a more robust framework would need to be created, but the numerical results show significant potential.

Availability of data and materials

Code and data are not available in a machine readable way, all data was generated from the shown circuits and parameters presented in this paper.

Abbreviations

BDF:

Backward Differentiation Formula

DAE:

Differential-Algebraic Equation

GNAT:

Gauß–Newton with Approximated Tensors

MESS:

Maximum Entropy Snapshot Sampling

MNA:

Modified Nodal Analysis

MOR:

Model Order Reduction

ODE:

Ordinary Differential Equation

ROMR:

Reduced Order Multirate

References

1. Bannenberg M, Ciccazzo A, Günther M. Coupling of model order reduction and multirate techniques for coupled dynamical systems. Appl Math Lett. 2021;112:106780.

2. Bannenberg M, Ciccazzo A, Günther M. Reduced order multirate schemes for coupled differential-algebraic systems. Appl Numer Math. 2021;168:104–14.

3. Bannenberg MW, Kasolis F, Günther M, Clemens M. Maximum entropy snapshot sampling for reduced basis modelling. Compel. 2022;41:954–66.

4. Broer H, Takens F. Dynamical systems and chaos. New York: Springer; 2011.

5. Carlberg K, Bou-Mosleh C, Farhat C. Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations. Int J Numer Methods Eng. 2011;86(2):155–81.

6. Carlberg K, Farhat C, Cortial J, Amsallem D. The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. J Comput Phys. 2013;242:623–47.

7. Everson R, Sirovich L. Karhunen–Loève procedure for gappy data. J Opt Soc Am A. 1995;12(8):1657–64.

8. Günther M, Feldmann U, ter Maten J. Modelling and discretization of circuit problems. In: Handbook of numerical analysis. vol. 13. 2005. p. 523–659.

9. Günther M, Rentrop P. Multirate ROW methods and latency of electric circuits. Appl Numer Math. 1993;13(1–3):83–102.

10. Kasolis F, Clemens M. Maximum entropy snapshot sampling for reduced basis generation. arXiv preprint. 2020. arXiv:2005.01280.

11. Kasolis F, Zhang D, Clemens M. Recurrent quantification analysis for model reduction of nonlinear transient electro-quasistatic field problems. In: International conference on electromagnetics in advanced applications (ICEAA 2019). 2019. p. 14–7.

12. Verhoeven A, ter Maten J, Striebel M, Mattheij R. Model order reduction for nonlinear IC models. In: IFIP conference on system modeling and optimization. Berlin: Springer; 2007. p. 476–91.

13. Wanner G, Hairer E. Solving ordinary differential equations II. vol. 375. Berlin: Springer; 1996.

14. Willcox K. Unsteady flow sensing and estimation via the gappy proper orthogonal decomposition. Comput Fluids. 2006;35(2):208–26.


Acknowledgements

The authors declare that they have no acknowledgements.

Funding

The authors are indebted to the funding given by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 765374, ROMSOC. Open Access funding enabled and organized by Projekt DEAL.

Author information


Contributions

This whole paper was a joint effort by MB, AC and MG. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Marcus W. F. M. Bannenberg.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bannenberg, M.W.F.M., Ciccazzo, A. & Günther, M. Reduced order multirate schemes in industrial circuit simulation. J. Math. Industry 12, 12 (2022). https://doi.org/10.1186/s13362-022-00127-w


Keywords