An a priori error estimate for interior penalty discretizations of the Curl-Curl operator on non-conforming meshes
Raffael Casagrande^{1}, Ralf Hiptmair^{1} and Joerg Ostrowski^{2}
https://doi.org/10.1186/s13362-016-0021-9
© Casagrande et al. 2016
Received: 2 February 2016
Accepted: 27 April 2016
Published: 5 May 2016
Abstract
We prove an a priori error estimate for regularized curl-curl problems which are discretized by the Interior Penalty/Nitsche's Method on meshes non-conforming across interfaces. It is shown that the total error can be bounded by the best approximation error, which in turn depends on the concrete choice of the approximation space \(V_{h}\). In this work we show that if \(V_{h}\) is the space of edge functions of the first kind of order k, we can expect (suboptimal) convergence \(O(h^{k-1})\) as the mesh is refined. The numerical experiments in (Casagrande et al., SAM Report 2014-32, ETH Zürich, 2014) indicate that this bound is sharp for \(k=1\). Moreover, it is shown that the regularization term can be made arbitrarily small without affecting the error in the \(\lvert \cdot \rvert_{\mathbf {curl}}\) seminorm. A numerical experiment shows that the regularization parameter can be chosen in a wide range of values such that, at the same time, the discrete problem remains solvable and the error due to regularization is negligible compared to the discretization error.
Keywords
discontinuous Galerkin, non-conforming meshes, interior penalty, electromagnetics, magnetostatics, curl-curl operator
MSC
65N12, 65N30, 78M10
1 Introduction
Note that the solution of the boundary value problem (1)-(2) is unique only up to a gradient field (if Ω is simply connected), which does not matter if one is interested only in the magnetic field B. Thus it is possible to solve the ungauged problem (1)-(2) if the current \(\mathbf {j}^{i}\) lies in the range of the system matrix [2]. The latter is hard to enforce on non-conforming meshes (cf. Section 6), and it is simpler to gauge the formulation (1)-(2) or to add a regularization term to (1) so that the system matrix has full rank. Adding a regularization term to (1) introduces a modeling error, which must not dominate the approximation error of the numerical scheme.
Our goal is to construct a method that approximates the solution of (1)-(2) in a way that is independent of the 'nonconformity' of the submeshes at the common interface. This problem has been tackled successfully in the framework of Mortar methods, where the continuity constraints are incorporated directly into the trial space [3] or enforced by additional Lagrange multipliers [4]. However, these methods come at the price of introducing either non-local shape functions or additional unknowns.
Another approach uses the Discontinuous Galerkin (DG) framework to solve problem (1)-(2) in the presence of hanging nodes. In [5] problem (1)-(2) is regularized by adding a \(\nabla( \nabla \cdot \mathbf {A} )\) term to (1) and is then solved by the local discontinuous Galerkin method. However, because of the additional regularization term, additional assumptions on the smoothness of the solution have to be made to prove convergence. Alternatively, one can use a mixed DG formulation and enforce the gauge condition \(\nabla\cdot(\mu^{-1}\mathbf {A}) = 0\) explicitly to avoid the introduction of a regularization term [6]. The stability of this method for arbitrary, sliding meshes remains unclear: in [6] it is proven that the mixed method yields the expected rates of convergence on conforming meshes, and the experimental results in [7] show that it also works on adaptively refined meshes with hanging nodes. However, in light of the results in Section 6.1 and in [8], it is not clear that the constant in the inf-sup condition of [6] is independent of the 'nonconformity' of the submeshes at the common interface.
We start our discussion in Section 2 by introducing the Discontinuous Galerkin (DG) notation that was already introduced in [1] and which is needed to state the interior penalty formulation of (3)-(4) in Section 3. Section 3 also proves an a priori bound of the total error in terms of the best approximation error for piecewise-polynomial test and trial spaces \(V_{h}\). In Section 4 we analyze the particular case where \(V_{h}\) is the space of k-th order edge functions, \(R^{k}\). Combining the results of Sections 3 and 4, we obtain rates of convergence for the regularized problem (3)-(4). Section 5 is devoted to the choice of the local length scale appearing in the interior penalty formulation and presents numerical experiments underlining the results of Sections 3 and 4. Section 6 discusses the role of the regularization parameter ε and how to choose it. We end our presentation with a short conclusion and outlook in Section 7.
2 Preliminaries
Before we can introduce the Symmetric Weighted Interior Penalty (SWIP) formulation of (3)-(4) we give some definitions and notations (cf. [1]):
Subdomains and submeshes:
Let us assume that the domain Ω, on which (3)(4) is posed, is a simply connected polyhedron with Lipschitz boundary. Furthermore we assume Ω to be split into two nonoverlapping subdomains, \(\overline{\Omega_{1}} \cup\overline{\Omega_{2}} = \overline{\Omega}\).
We introduce a sequence of simplicial meshes \(\mathcal{T}_{\mathcal{H}} = ( \mathcal{T}_{h} )_{h \in\mathcal{H}}\) on Ω. Here \(\mathcal{H}\) denotes a countable subset of \(\mathbb{R}^{+}\) having 0 as the only accumulation point. For each \(h \in\mathcal{H}\) we let \(\mathcal{T}_{h} \in\mathcal {T}_{\mathcal{H}}\) denote a particular mesh in the sequence \(\mathcal {T}_{\mathcal{H}}\) and we let \(T \in\mathcal{T}_{h}\) be a mesh element (tetrahedron). The mesh width is defined as \(h= \max_{T \in\mathcal{T}_{h}} h_{T}\), where \(h_{T}\) is the diameter of element T.
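As a small illustration (not part of the original presentation), the mesh width can be computed directly from the element vertex coordinates; the helper names `element_diameter` and `mesh_width` are ours, not the paper's:

```python
import numpy as np

def element_diameter(vertices):
    """Diameter h_T of a tetrahedron: the largest pairwise vertex distance."""
    v = np.asarray(vertices, dtype=float)
    diffs = v[:, None, :] - v[None, :, :]          # all pairwise differences
    return float(np.linalg.norm(diffs, axis=-1).max())

def mesh_width(elements):
    """Mesh width h = max_T h_T over all elements of the mesh."""
    return max(element_diameter(t) for t in elements)
```

For the reference tetrahedron with vertices (0,0,0), (1,0,0), (0,1,0), (0,0,1) this yields \(h_T = \sqrt{2}\).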
We assume that each mesh \(\mathcal{T}_{h}\), which covers Ω, can be split into two conforming, nonoverlapping submeshes, \(\mathcal {T}_{h} = \mathcal{T}_{h,1} \cup\mathcal{T}_{h,2}\), that cover \(\Omega _{1}\) and \(\Omega_{2}\), respectively. As before we define \(\mathcal{T}_{\mathcal{H},1} = (\mathcal {T}_{h,1} )_{h \in\mathcal{H}}\) and \(\mathcal{T}_{\mathcal{H},2} = (\mathcal{T}_{h,2} )_{h \in\mathcal{H}}\).
Mesh assumptions:
Magnetic permeability:
We assume there exists a partition \(P_{\Omega}= \{ \Omega_{i, \mu} \}\) such that each \(\Omega_{i, \mu}\) is a polyhedron and such that the permeability \(\mu>0\) is constant on each \(\Omega _{i,\mu}\). Furthermore, the mesh sequence \(\mathcal{T}_{\mathcal{H}}\) is compatible with the partition \(P_{\Omega}\): for each \(\mathcal {T}_{h} \in\mathcal{T}_{\mathcal{H}}\), each element \(T \in\mathcal{T}_{h}\) belongs to exactly one \(\Omega_{i,\mu} \in P_{\Omega}\). That is, the magnetic permeability is constant on each element but it is allowed to jump across element boundaries, in particular over the non-conforming interface \(\Gamma:= \overline{\Omega}_{1} \cap \overline {\Omega}_{2}\).
Polynomial approximation:
Mesh faces, jump and average operators:
The following lemma relates the trace of a polynomial function to its \(L^{2}\) norm on the element (cf. [10], Lemma 1.46):
Lemma 1
(Discrete trace inequality)
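Up to notation, the statement from [10], Lemma 1.46, takes the following form: for every polynomial \(v_{h} \in \mathbb{P}^{k}(T)\) and every face F of the element T,

```latex
\lVert v_h \rVert_{L^2(F)}
  \;\le\; C_{\mathrm{tr}}\, h_T^{-1/2}\, \lVert v_h \rVert_{L^2(T)},
```

where \(C_{\mathrm{tr}}\) depends only on the polynomial degree k and the shape regularity of the mesh sequence. The essential point for the SWIP analysis is that \(C_{\mathrm{tr}}\) is independent of h.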
Function spaces:
3 Symmetric Weighted Interior penalty (SWIP) formulation
Remark 1
If \(V_{h} \subseteq \mathbf {H}(\mathbf {curl}; \Omega)\), then all inner tangential jumps in (9) drop out ([11], Lemma 3.8) and only jumps at the boundary remain, i.e. we are left with a standard FEM formulation where the inhomogeneous boundary conditions (2) are enforced in a weak sense.
3.1 A priori error estimate
In the following we derive an error estimate in the ‘energynorm’ for the variational problem (8).
Regularity of the exact solution:
In order for the right-hand side to be well-defined we assume \(\mathbf {j}^{i} \in L^{2}(\Omega)^{3}\) and \(\mathbf {g}_{D} \in L^{2}(\partial\Omega)^{3}\).
We begin the proof of the a priori error estimate by showing that the exact solution A fulfills equation (8):
Lemma 2
(Consistency)
Proof
Lemma 3
(Bound on consistency term)
Proof
Using Lemma 3 we can finally prove discrete coercivity.
Lemma 4
(Discrete coercivity)
Proof
Lemma 5
(Boundedness)
Proof
Finally, we can combine the previous results into one theorem.
Theorem 1
(Error estimate)
This theorem tells us that the total error is bounded by the best approximation error (w.r.t. suitable norms). Note that we did not make any assumptions on how the submeshes \(\mathcal {T}_{h,1}\) and \(\mathcal{T}_{h,2}\) meet at Γ. In order to get rates of convergence we will have to make additional assumptions about the approximation space \(V_{h}\) and the exact solution A. This will be the topic of Section 4.
Proof of Theorem 1
Remark 2
4 Rate of convergence for edge functions
The following theorem then gives an upper bound for the best approximation error of Theorem 1:
Theorem 2
Remark 3
By combining Theorem 2 with Theorem 1 we see that for a sufficiently smooth exact solution A, the total error \(\lVert \mathbf {A}-\mathbf {A}_{h} \rVert_{\mathrm{SWIP}} = O(h^{k-1})\) if k-th order edge functions are used. In comparison to standard FEM on conforming meshes, one order of convergence is lost. Theoretically it is possible that there exists another projector \(\tilde{\pi}_{h}\) which would give a better rate of convergence, but numerical experiments show that Theorem 2 is sharp for \(k=1\) (see Section 5).
In order to prove the above theorem we make use of two lemmas that bound the face contributions.
Lemma 6
For the proof of Lemma 6 we refer the reader to [13], Lemma 5.52 (which is proven elementwise).
Lemma 7
Proof
Using these lemmas we can finally give a bound for \(\lVert \mathbf {A}-\pi_{h} \mathbf {A} \rVert_{{\mathrm{SWIP}},*}\).
Proof of Theorem 2
Remark 4
From the proof of Theorem 2 it is clear that for h sufficiently small the term \(T_{3}\) dominates the other three terms and is thus responsible for the loss of one order of convergence, as pointed out in Remark 3. Interestingly, \(T_{3}\) sums the jump terms only over the faces \(\mathcal {F}^{b, \Gamma}_{h}\). This suggests that it suffices to use \((k+1)\)-th order edge functions in elements adjacent to \(\mathcal{F}^{b,\Gamma}_{h}\) and k-th order edge functions everywhere else to achieve \(O(h^{k})\) order convergence. This can be implemented easily by using a hierarchical basis for the edge functions [14].
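The element-wise order assignment suggested by this remark can be sketched as follows; the data structures (`face_to_elements`, the list of interface/boundary face indices) are hypothetical stand-ins for whatever mesh API is available:

```python
def assign_orders(n_elements, interface_faces, face_to_elements, k):
    """Assign polynomial order k+1 to elements adjacent to an interface or
    boundary face and order k everywhere else (cf. Remark 4).
    face_to_elements maps a face index to its (one or two) adjacent elements."""
    order = [k] * n_elements
    for f in interface_faces:
        for elem in face_to_elements[f]:
            order[elem] = k + 1
    return order
```

With a hierarchical basis [14], raising the order on the flagged elements only adds degrees of freedom locally, so the extra cost is confined to a neighborhood of Γ and ∂Ω.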
5 The local length scale \(a_{F}\) and h-convergence

\(a_{F}^{(1)} := \frac{1}{2}(h_{T_{1}} + h_{T_{2}})\) if \(F \in\mathcal {F}^{i}_{h}\) and \(a_{F}^{(1)}=h_{T}\) for \(F \in\mathcal{F}^{b}_{h}\), see [10], Remark 4.6,
\(a_{F}^{(2)} := \min(h_{T_{1}}, h_{T_{2}})\) if \(F \in\mathcal{F}^{i}_{h}\) and \(a_{F}^{(2)}=h_{T}\) for \(F \in\mathcal{F}^{b}_{h}\), see [6],
\(a_{F}^{(3)} := h_{F}\) if \(F \in\mathcal{F}_{h}\), see [9, 10].
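The three choices can be collected in a small helper (a sketch; the function name and call convention are ours, not the paper's):

```python
def a_F(h_T1, h_T2=None, h_F=None, variant=1):
    """Local length scale on a face F. For interior faces pass both adjacent
    element diameters h_T1, h_T2; for boundary faces only h_T1.
    variant 1: arithmetic mean of the adjacent diameters (boundary: h_T),
    variant 2: minimum of the adjacent diameters        (boundary: h_T),
    variant 3: the face diameter h_F itself."""
    if variant == 3:
        return h_F
    if h_T2 is None:                 # boundary face
        return h_T1
    if variant == 1:
        return 0.5 * (h_T1 + h_T2)
    return min(h_T1, h_T2)
```

Note that for sliver faces on a sliding interface, variants 1 and 2 stay comparable to the element diameters, whereas variant 3 can become arbitrarily small; this is exactly the distinction discussed below.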
Let us now discuss the precise conditions on the mesh for each choice of \(a_{F}\): for \(a_{F}^{(1)}\), \(a_{F}^{(2)}\) we require \(\mathcal{T}_{\mathcal{H}}\) to be quasi-uniform at the sliding interface Γ:
Definition 1
Lemma 8
If the mesh is quasi-uniform at Γ then \(a_{F}^{(1)}\), \(a_{F}^{(2)}\) fulfill conditions (11), (21) and the constants \(\sigma_{\mathrm{max}}\), \(\varsigma_{1}\), \(\varsigma_{2}\) are independent of the way \(\mathcal{T}_{\mathcal{H},1}\), \(\mathcal {T}_{\mathcal{H},2}\) intersect at Γ.
Proof
\(a_{F}^{(1)} \geq\frac{1}{2} h_{T_{i}}\) follows immediately from the definition for \(i=1,2\). For the other direction we use (27) and get \(a_{F}^{(1)} \leq\frac{1}{2}(1 + \sigma_{1}^{-1}) h_{T_{i}}\). Moreover, \(\sigma_{1} h_{T_{i}} \leq a_{F}^{(2)} \leq h_{T_{i}}\). □
The lemma above asserts that the choices \(a_{F}^{(1)}\), \(a_{F}^{(2)}\) lead to a method that converges independently of the way that the two mesh sequences \(\mathcal{T}_{\mathcal{H},1}\), \(\mathcal{T}_{\mathcal {H},2}\) intersect at Γ. In particular, the faces can be very tiny 'slivers' (i.e. triangles with high aspect ratio). But note that the choice of \(a_{F}\) determines the required minimum value of the penalty parameter (see Lemma 4).
By substituting \(a_{F}^{(3)}\) into (21) we see that we need an estimate of the form \(h_{F} \geq\varsigma_{1} h_{T}\) in order for Theorem 2 to hold. However, if two meshes are sliding against each other such an estimate is not feasible, since \(h_{F}\) can become arbitrarily small in comparison to \(h_{T}\). In other words, the constant \(\varsigma_{1}\) depends on the way \(\mathcal{T}_{\mathcal{H},1}\) intersects with \(\mathcal{T}_{\mathcal{H},2}\). Nevertheless, using \(a_{F}^{(3)}\) in the variational formulation (8) seems to work in practice (see below).
5.1 Numerical examples
For \(V_{h} = \mathbb{P}^{1}(\mathcal{T}_{h})^{3}\) we observe the rate of convergence \(O(h)\), i.e. there is no loss of one order of accuracy. This is because \(\mathbb{P}^{1}(\mathcal{T}_{h})^{3}\) spans the full polynomial space (see [11] for a proof).
Remark 5
The observed rates for \(k=2\) and \(k=3\) are higher than the rates \(O(h)\), \(O(h^{2})\) that we expect from Theorem 2. This is due to the better approximation properties of the edge functions in the interior of the two hemispheres (cf. Remark 4).
Remark 6
Strictly speaking this numerical experiment does not fit the framework developed in Sections 3 and 4 because Ω is not a polyhedron. Extending the theory to domains with curved boundary ∂Ω is beyond the scope of this paper, but Figure 5 suggests that the order of convergence for \(k=2\), \(k=3\) is the same as for polyhedral domains.
6 The regularization parameter ε
So far we have looked at the regularized system (3)-(4) and it was shown that the proposed method yields the expected rates of convergence for \(\varepsilon> 0\). However, genuine magnetostatics amounts to choosing \(\varepsilon= 0\). We will consider two approaches to solve the system (3)-(4) with \(\varepsilon=0\): on the one hand we will try to set \(\varepsilon=0\) directly, and on the other hand we will study the effect of choosing ε small enough that the error due to regularization is negligible.
6.1 The case \(\varepsilon=0\)
If we set \(\varepsilon=0\), the boundary value problem (3)-(4) does not have a unique solution. Indeed, the continuous curl-curl operator has an infinite-dimensional kernel and the nonzero eigenvalues are well separated from 0 ([13], Corollary 4.8). If one uses \(\mathbf {H}(\mathbf {curl})\)-conforming edge functions of the first kind on a conforming mesh, it can be shown that the discrete curl-curl operator has a (finite-dimensional) kernel and that the nonzero discrete eigenvalues are well separated from it ([13], discrete Friedrichs inequality, Lemma 7.20), i.e. edge functions of the first kind yield a spectrally accurate discretization of the curl-curl operator. From a theoretical point of view it remains unclear whether this property carries over to the SWIP formulation (8), cf. [15].
Therefore the spectrum of the \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) bilinear form is investigated in a numerical experiment. The setup is very similar to the one in the previous section: the domain Ω consists of two hemispheres which can be rotated against each other by an angle θ. However, this time we only assemble the matrix of the \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) bilinear form with \(\varepsilon=0\) and \(a_{F} = a_{F}^{(3)}\), and compute its eigenvalues using the eig routine of MATLAB R2013a.
The previous considerations indicate that the \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) bilinear form is not suitable for solving the Maxwell eigenvalue problem. However, in this work we are concerned with the curl-curl source problem (3)-(4). Although the Galerkin matrix becomes singular for \(\varepsilon=0\), we can in principle still solve the linear system if it is consistent, i.e. if the right-hand side lies in the range of the Galerkin matrix. Then the solution \(\mathbf {A}_{h}\) is no longer unique, but \(\mathbf {curl}\,\mathbf {A}_{h}\) is.
We attempt to solve the linear system of equations using the conjugate gradient (CG) method [2]. In [16] it is shown that the CG method converges for consistent, symmetric positive semidefinite problems and that its rate of convergence is determined by the nonzero eigenvalues. In particular, the number of CG iterations is related to the generalized condition number \(\kappa= \frac{\lambda_{\mathrm{max}}}{\lambda_{\mathrm{min}}}\), where \(\lambda_{\mathrm{min}}\) is the smallest nonzero eigenvalue of the system matrix. If we take another look at Figure 7 it becomes clear that \(\kappa\to\infty\) as \(\theta\to0\), i.e. the number of CG iterations should increase as \(\theta\to0\).
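The quoted facts about CG on singular systems can be illustrated on a small model problem; the path-graph Laplacian below is only a stand-in for the actual SWIP Galerkin matrix, chosen because it is symmetric positive semidefinite with a known kernel (the constant vectors):

```python
import numpy as np
from scipy.sparse.linalg import cg

# Model singular SPSD matrix: Laplacian of a path graph (kernel = constants).
n = 50
A = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
A[0, 0] = A[-1, -1] = 1.0

# Generalized condition number kappa = lambda_max / lambda_min over the
# *nonzero* eigenvalues only.
eigs = np.linalg.eigvalsh(A)
nonzero = eigs[eigs > 1e-10]
kappa = nonzero.max() / nonzero.min()

# A consistent right-hand side must be orthogonal to the kernel; here the
# kernel is spanned by the constant vector, so we remove the mean.
b = np.sin(np.linspace(0.0, 3.0, n))
b -= b.mean()

x, info = cg(A, b, atol=1e-12)      # CG converges despite the singularity
residual = np.linalg.norm(A @ x - b)
```

Despite the zero eigenvalue, CG converges because the iteration never leaves the range of the matrix when started consistently; the iteration count is governed by κ, which is exactly the quantity that blows up as θ → 0.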
Number of CG iterations for \(\theta\to0\), \(h=0.359\), \(\varepsilon=0\); the discretization is based on \(R^{2}\) edge functions

θ [rad]   No preconditioner   ILUPACK
10^{−1}   1,118               135
10^{−2}   3,705               214
10^{−3}   3,731               320
10^{−4}   6,102               426
Remark 7
Although the right-hand side \(\mathbf {j}^{i}\) chosen in the numerical experiment above is clearly divergence-free, there is no guarantee that its discrete counterpart \(\ell_{h}\) is so too, i.e. it is not clear that the right-hand side vector b, which is associated with \(\ell_{h}\), lies in the range of the system matrix. We have investigated this by splitting the right-hand side vector b into a part \(\tilde {\mathbf {b}}\) that lies in the kernel of the system matrix and a part \(\tilde{\mathbf {b}}^{\perp}\) in its orthogonal complement. It turned out that for all angles \(\lVert\tilde{\mathbf {b}}\rVert_{2} / \lVert \mathbf {b} \rVert_{2} \approx 10^{-9}\), i.e. the inconsistent part of b is tiny, which seems to be sufficient for CG to converge.
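The splitting used in this remark can be sketched as follows; `null_space` computes an orthonormal basis of the kernel, and the matrix is again a model stand-in (path-graph Laplacian), not the actual SWIP system matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Stand-in singular symmetric matrix with kernel = constant vectors.
n = 30
L = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
L[0, 0] = L[-1, -1] = 1.0

K = null_space(L)                   # orthonormal basis of ker(L), shape (n, dim)

def inconsistency_ratio(b, K):
    """||b_ker||_2 / ||b||_2, where b_ker is the orthogonal projection of the
    right-hand side b onto the kernel; CG needs this ratio to be tiny."""
    b_ker = K @ (K.T @ b)
    return np.linalg.norm(b_ker) / np.linalg.norm(b)
```

A fully inconsistent b (here: a constant vector) gives a ratio of 1, while any zero-mean b gives a ratio at round-off level; in the experiment above the observed ratio was about 10^{-9}.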
We can conclude that setting \(\varepsilon= 0\) is in principle possible if the right-hand side vector b lies in the range of the system matrix. However, checking this for nonzero right-hand sides \(\mathbf {j}^{i}\) is a nontrivial task because we do not know a priori the kernel of the system matrix associated with \(a_{h}^{\mathrm{SWIP}}\). Moreover, the system matrix becomes ill-conditioned as the angle \(\theta \to0\), which causes an increase in the number of CG iterations.
Remark 8
For \(\mathbf {H}(\mathbf {curl})\)-conforming discretizations, which fulfill the discrete sequence property, the kernel of the system matrix is known. In particular, it is easily proven that \(\operatorname {div}\mathbf {j}^{i} = 0\) implies that \(\ell_{h}\) lies in the range of the system matrix. Unfortunately it is not clear whether this property carries over to the SWIP formulation (8) because, to the best of our knowledge, there exists no characterization of the kernel of \(a_{h}^{\mathrm{SWIP}}\) on arbitrarily non-conforming meshes.
6.2 The case \(0<\varepsilon\ll1\)
It is thus desirable to choose ε small enough that \(\lVert\nabla \times(\mathbf {A}^{\varepsilon} - \mathbf {A}^{0}) \rVert _{L^{2}(\Omega)^{3}} \ll\lVert \nabla\times({\mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{\varepsilon}})\rVert _{L^{2}(\Omega)^{3}}\). However, as \(\varepsilon\to0\) the discrete problem becomes ill-posed and solvers typically fail to converge, cf. Remark 2 and Section 6.1.

For small problems we use the sparse Cholesky decomposition of PARDISO [18] (Intel MKL Version 11.2) and solve the linear system of equations directly.
For problems whose Cholesky decomposition does not fit into memory we use the conjugate gradient method together with ILUPACK [17] as a preconditioner (using the settings of Section 6.1).
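This two-pronged solver strategy can be sketched with scipy stand-ins (`splu` in place of PARDISO's factorization, `spilu`-preconditioned CG in place of ILUPACK); the threshold and tolerances are illustrative assumptions, not values from the paper:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, spilu, cg, LinearOperator

def solve_regularized(A, b, direct_threshold=50_000):
    """Sparse direct factorization for small systems, incomplete-LU
    preconditioned CG for systems too large to factor exactly."""
    A = sp.csc_matrix(A)
    if A.shape[0] <= direct_threshold:
        return splu(A).solve(b)                 # direct solve (PARDISO role)
    ilu = spilu(A, drop_tol=1e-5)               # preconditioner (ILUPACK role)
    M = LinearOperator(A.shape, ilu.solve)
    x, info = cg(A, b, M=M, atol=1e-12)
    if info != 0:
        raise RuntimeError("CG did not converge")
    return x
```

The design point mirrors the paper's: the direct branch is robust for small ε but memory-bound, while the preconditioned CG branch scales to large systems at the cost of sensitivity to the conditioning discussed in Section 6.1.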
Remark 9
We are only interested in the curl of the solution, i.e. the magnetic field B. If we were to look at A instead of \(\nabla\times \mathbf {A}\), then \(\lVert \mathbf {A}^{\varepsilon}_{h} - \mathbf {A}^{\varepsilon}\rVert_{L^{2}(\Omega)^{3}}\) would not be independent of ε, as can be seen from Theorem 1.
The following lemma gives us a guideline for choosing ε.
Lemma 9
The first part of this lemma is proven in [19], Lemma 2.1; for the second part we use a result of [20], Section 4.
By comparing (29) with (28) we see that \(\lVert\mu^{-1} \nabla\times(\mathbf {A}^{\varepsilon}_{h} - \mathbf {A}^{\varepsilon}) \rVert_{L^{2}(\Omega)^{3}} = O(h^{k-1})\) for k-th order edge functions. Therefore ε should be chosen such that \(\varepsilon\sim h^{k-1}\) as the mesh is refined.
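The coupling rule can be written down directly (a sketch; the calibration constant `eps0` is a hypothetical, problem-dependent choice): since the regularization error is linear in ε (Lemma 9) while the discretization error decays like \(h^{k-1}\), matching the two gives

```python
def regularization_parameter(h, k, eps0=1.0):
    """Couple the regularization parameter to the mesh width: the
    regularization error is O(eps) while the discretization error is
    O(h^(k-1)), so eps ~ h^(k-1) keeps the two in balance under refinement."""
    return eps0 * h ** (k - 1)
```

Note that for \(k=1\) this yields a mesh-independent ε, consistent with the lack of convergence observed for first-order edge functions.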
Numerical example:
We consider the same setup as in the previous section (cf. Figure 3), but we choose a different μ for the upper and lower hemisphere. The analytic solution is chosen as \(\mathbf {A}^{0} = (\sin y, 0, \mu\sin x)\), and \(\mathbf {j}^{i}\) is chosen such that \(\mathbf {A}^{0}\) fulfills (3)-(4) with \(\varepsilon = 0\).
We note that the errors are almost identical for both choices of μ. Moreover, we observe that for \(\varepsilon< 10^{-3}\) the discretization error \(\lVert\mu^{-1/2}\nabla\times(\mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{\varepsilon}) \rVert_{L^{2}(\Omega)}\) (which here includes the error due to boundary approximation, cf. Remark 6) clearly dominates the regularization error \(\lVert\mu^{-1/2}\nabla \times(\mathbf {A}^{\varepsilon} - \mathbf {A}^{0}) \rVert_{L^{2}(\Omega)}\), whereas for \(\varepsilon> 10^{-3}\) the regularization error dominates the discretization error. This is what we expect from the previous discussion. In fact, if we use Lemma 9 and fit the two hemispheres into a cube of side length \(l_{\mathrm{max}}=2\), we get \(\alpha= \pi^{2}/2\). The black dashed line in Figure 8 visualizes the corresponding estimate (29), and we see that for large ε the behavior is clearly linear, as proven in Lemma 9, and that the estimate is valid even though \(\mathbf {g}_{D} \neq0\) and μ is not constant.
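Assuming that α here is the smallest nonzero Maxwell eigenvalue of the bounding cube, \((\pi/l)^{2}(m^{2}+n^{2}+p^{2})\) with at least two nonzero integer indices (cf. [20]), the quoted value can be checked by a one-line computation:

```python
from math import pi, isclose

# Smallest nonzero Maxwell eigenvalue of a cube with side length l_max:
# (pi/l)^2 * (m^2 + n^2 + p^2), minimized over index triples with at least
# two nonzero entries, i.e. 2 * (pi / l_max)^2.
l_max = 2.0
alpha = 2.0 * (pi / l_max) ** 2
assert isclose(alpha, pi ** 2 / 2)   # matches the value quoted in the text
```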
Remark 10
The same results are obtained if CG together with ILUPACK is used. For brevity we omit these results here.
Relative runtimes for \(\varepsilon\to0\), \(h=0.359\); the discretization is based on \(R^{2}\) edge functions and \(\theta=10^{-4}\) rad. The runtimes have been normalized by the runtime for \(\varepsilon=10^{-1}\)

ε        PARDISO^{a}   ILUPACK^{b}
10^{−1}  1             1
10^{−2}  1.01          1.41
10^{−3}  1.01          1.42
10^{−4}  1.02          1.43
10^{−5}  0.98          1.42
We can thus choose ε (almost) arbitrarily small without affecting the discretization error \(\lVert\mu^{-1/2}\nabla \times(\mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{\varepsilon}) \rVert _{L^{2}(\Omega)}\) and without incurring rising cost for solving the resulting linear systems of equations. In other words, one should choose ε as small as possible such that the resulting linear system can still be solved.
7 Conclusion and outlook
We have proved a priori error estimates for the interior penalty formulation of the regularized curl-curl source problem (3)-(4): if the solution is approximated by k-th order edge functions we can expect at least convergence of order \(O(h^{k-1})\) (provided the exact solution is sufficiently smooth). In particular, for \(k=1\) no convergence was observed in a numerical experiment [1], which indicates that our result is sharp. The reason for this is that \(R^{k}\) does not span the full polynomial space \(\mathbb{P}^{k}\).
The bounds require the mesh to be quasi-uniform at the sliding interface, but they make no further assumptions on how the submeshes abut there, nor does the error estimate depend on it. This is confirmed by the numerical experiments: the approximation is observed to be stable independently of the way the submeshes intersect.
Moreover, the role of the regularization parameter ε has been investigated: for practical purposes one can choose ε (almost) arbitrarily small and solve the discrete problem with a direct solver or with the preconditioned conjugate gradient method. The error due to regularization is then dominated by the discretization error of the regularized problem and is negligible.
Outlook:
The proof of Theorem 2 suggests that it suffices to use second-order edge functions solely in elements adjacent to the non-conforming interface and to the boundary faces to achieve \(O(h)\) convergence. This would drastically reduce the required number of unknowns and should be pursued for practical applications.
Footnotes
The choices \(a_{F}^{(1)}\) and \(a_{F}^{(2)}\) yield qualitatively the same results. In particular, the smallest nonzero eigenvalues also tend to 0 as \(\varepsilon\to0\), cf. Figure 7.
The ILU factorization is built from the system matrix with \(\varepsilon=10^{-6}\) and the parameters for ILUPACK are: type sol = 0, partitioning=3, flags=1,1, inv. droptol=5, threshold ILU=0.1, condest=1e2, residual tol. = 5e-6.
Declarations
Acknowledgements
The authors acknowledge the contribution of Christoph Winkelmann to the finite element framework HyDi. This work has been funded by the Commission for Technology and Innovation (CTI), Switzerland, funding application No. 14712.1 PFIW-IW, and ABB Switzerland Corporate Research.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Casagrande R, Winkelmann C, Hiptmair R, Ostrowski J. DG treatment of non-conforming interfaces in 3D curl-curl problems. SAM report 2014-40, ETH Zürich; 2014. https://www.sam.math.ethz.ch/sam_reports/reports_final/reports2014/2014-40.pdf. Accessed 5 Jan 2016.
2. Bebendorf M, Ostrowski J. Parallel hierarchical matrix preconditioners for the curl-curl operator. J Comput Math. 2009;27:624-41.
3. Rapetti F, Maday Y, Bouillault F, Razek A. Eddy-current calculations in three-dimensional moving structures. IEEE Trans Magn. 2002;38(2):613-6.
4. Wohlmuth BI. Discretization methods and iterative solvers based on domain decomposition. Berlin: Springer; 2001.
5. Perugia I, Schötzau D. The hp-local discontinuous Galerkin method for low-frequency time-harmonic Maxwell equations. Math Comput. 2003;72:1179-214.
6. Houston P, Perugia I, Schötzau D. Mixed discontinuous Galerkin approximation of the Maxwell operator: non-stabilized formulation. J Sci Comput. 2005;22/23:315-46.
7. Houston P, Perugia I, Schötzau D. Nonconforming mixed finite-element approximations to time-harmonic eddy current problems. IEEE Trans Magn. 2004;40(2):1268-73.
8. Buffa A, Houston P, Perugia I. Discontinuous Galerkin computation of the Maxwell eigenvalues on simplicial meshes. J Comput Appl Math. 2007;204(2):317-33.
9. Stenberg R. Mortaring by a method of J. A. Nitsche. In: Computational mechanics; 1998.
10. Di Pietro DA, Ern A. Mathematical aspects of discontinuous Galerkin methods. Berlin: Springer; 2012.
11. Casagrande R. Sliding interfaces for eddy current simulations. Master's thesis, ETH Zürich; 2013.
12. Brenner SC, Scott LR. The mathematical theory of finite element methods. 3rd ed. Berlin: Springer; 2008.
13. Monk P. Finite element methods for Maxwell's equations. New York: Oxford University Press; 2003.
14. Bergot M, Duruflé M. High-order optimal edge elements for pyramids, prisms and hexahedra. J Comput Phys. 2013;232:189-213.
15. Buffa A, Perugia I. Discontinuous Galerkin approximation of the Maxwell eigenproblem. SIAM J Numer Anal. 2006;44(5):2198-226.
16. Kaasschieter EF. Preconditioned conjugate gradients for solving singular systems. J Comput Appl Math. 1988;24:265-75.
17. Bollhöfer M, Saad Y. Multilevel preconditioners constructed from inverse-based ILUs. SIAM J Sci Comput. 2006;27(5):1627-50.
18. Schenk O, Gärtner K. Solving unsymmetric sparse systems of linear equations with PARDISO. In: Sloot PMA, Tan CJK, Dongarra JJ, Hoekstra AG, editors. Computational science - ICCS 2002, part II; 2002. p. 355-63.
19. Reitzinger S, Schöberl J. An algebraic multigrid method for finite element discretizations with edge elements. Numer Linear Algebra Appl. 2002;9:223-38.
20. Costabel M, Dauge M. Maxwell eigenmodes in tensor product domains (2006). https://perso.univ-rennes1.fr/monique.dauge/publis/CoDa06MaxTens2.pdf. Accessed 18 Apr 2016.