An a priori error estimate for interior penalty discretizations of the Curl-Curl operator on nonconforming meshes
Journal of Mathematics in Industry, volume 6, Article number: 4 (2016)
Abstract
We prove an a priori error estimate for regularized Curl-Curl problems which are discretized by the Interior Penalty/Nitsche’s Method on meshes nonconforming across interfaces. It is shown that the total error can be bounded by the best approximation error, which in turn depends on the concrete choice of the approximation space \(V_{h}\). In this work we show that if \(V_{h}\) is the space of edge functions of the first kind of order k we can expect (suboptimal) convergence \(O(h^{k-1})\) as the mesh is refined. The numerical experiments in (Casagrande et al., SAM Report 2014-32, ETH Zürich, 2014) indicate that this bound is sharp for \(k=1\). Moreover, it is shown that the regularization term can be made arbitrarily small without affecting the error in the \(\lvert \cdot \rvert_{\mathbf {curl}}\) seminorm. A numerical experiment shows that the regularization parameter can be chosen in a wide range of values such that, at the same time, the discrete problem remains solvable and the error due to regularization is negligible compared to the discretization error.
Introduction
In this work we study the 3D magnetostatic boundary value problem,
which can be used to calculate the magnetic field that originates from a divergence-free, stationary current \(\mathbf {j}^{i}\). Herein μ denotes the magnetic permeability and \(\mathbf {g}_{D}\) prescribes Dirichlet boundary data. We seek the magnetic vector potential A that fulfills (1)-(2). The magnetic field is then \(\mathbf {B}=\nabla\times \mathbf {A}\). Note that if \(\mathbf {g}_{D} \equiv0\) on ∂Ω then (2) implies \((\nabla\times \mathbf {A}) \cdot \mathbf {n} = \mathbf {B} \cdot \mathbf {n} = 0\) on ∂Ω, which reflects the decay of the fields away from the source.
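The displayed equations (1)-(2) are not reproduced above; based on the surrounding description (tangential Dirichlet data \(\mathbf{g}_{D}\), field \(\mathbf{B}=\nabla\times\mathbf{A}\)), a hedged reconstruction in standard notation reads:

```latex
\nabla \times \bigl( \mu^{-1} \nabla \times \mathbf{A} \bigr) = \mathbf{j}^{i}
  \quad \text{in } \Omega, \tag{1}
\qquad
\mathbf{A} \times \mathbf{n} = \mathbf{g}_{D}
  \quad \text{on } \partial\Omega. \tag{2}
```

With this form, \(\mathbf{g}_{D} \equiv 0\) forces the tangential trace of A to vanish on ∂Ω, and taking the surface curl then yields \((\nabla\times\mathbf{A})\cdot\mathbf{n} = 0\), as stated above.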
Note that the solution of the boundary value problem (1)-(2) is unique only up to a gradient field (if Ω is simply connected), which does not matter if one is only interested in the magnetic field B. Thus it is possible to solve the ungauged problem (1)-(2) if the current \(\mathbf {j}^{i}\) lies in the range of the system matrix [2]. The latter is hard to enforce on nonconforming meshes (cf. Section 6), and it is simpler to gauge the formulation (1)-(2) or to add a regularization term to (1) so that the system matrix has full rank. Adding a regularization term to (1) introduces a modeling error, which must not dominate the approximation error of the numerical scheme.
In some applications like the simulation of electric machines or magnetic actuators, magnetic fields have to be computed in the presence of moving, rigid parts. Then one may use separate, moving submeshes for them in order to avoid remeshing. However, this leads to so-called ‘sliding interfaces’, i.e. meshes with hanging nodes (cf. Figure 1).
Our goal is to construct a method that approximates the solution of (1)-(2) in a way that is independent of the ‘nonconformity’ of the submeshes at the common interface. This problem has been tackled successfully in the framework of Mortar methods, where the continuity constraints are either incorporated directly into the trial space [3] or enforced by additional Lagrange multipliers [4]. However, these approaches come at the price of introducing either nonlocal shape functions or additional unknowns.
Another approach uses the Discontinuous Galerkin (DG) framework to solve problem (1)-(2) in the presence of hanging nodes. In [5] problem (1)-(2) is regularized by adding a \(\nabla( \nabla \cdot \mathbf {A} )\) term to (1) and is then solved by the locally discontinuous Galerkin method. However, because of the additional regularization term, additional assumptions on the smoothness of the solution have to be made to prove convergence. Alternatively, one can use a mixed DG formulation and enforce the gauge condition \(\nabla\cdot(\mu^{-1}\mathbf {A}) = 0\) explicitly to avoid the introduction of a regularization term [6]. The stability of this method for arbitrary, sliding meshes remains unclear: in [6] it is proven that the mixed method yields the expected rates of convergence on conforming meshes, and the experimental results in [7] show that it also works on adaptively refined meshes with hanging nodes. However, in light of the results in Section 6.1 and in [8], it is not clear that the constant in the inf-sup condition of [6] is independent of the ‘nonconformity’ of the submeshes at the common interface.
We study a different approach: We apply the Interior Penalty/Nitsche’s Method [9] to the regularized magnetostatic problem,
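The displayed problem (3)-(4) is not quoted above; a hedged reconstruction is given below. The zeroth-order regularization term \(\varepsilon\mu^{-1}\mathbf{A}\) is inferred from the contribution \(\varepsilon\lVert\mu^{-1/2}\mathbf{A}_h\rVert_{L^2}^2\) to the SWIP norm in Remark 2, so treat this form as an assumption consistent with the rest of the text:

```latex
\nabla \times \bigl( \mu^{-1} \nabla \times \mathbf{A} \bigr)
  + \varepsilon \mu^{-1} \mathbf{A} = \mathbf{j}^{i}
  \quad \text{in } \Omega, \tag{3}
\qquad
\mathbf{A} \times \mathbf{n} = \mathbf{g}_{D}
  \quad \text{on } \partial\Omega. \tag{4}
```

The zeroth-order term removes the nontrivial kernel of the curl-curl operator (gradient fields), so the discrete system matrix has full rank for any \(\varepsilon > 0\).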
Here \(0 < \varepsilon\ll1\) is the regularization parameter. In an earlier investigation [1] it was shown experimentally that the Interior Penalty Method solves problem (3)-(4) successfully if second order edge functions of the first kind are used. Moreover, it was shown that first order edge functions fail to converge to the exact solution as the mesh is refined. In this work we intend to give theoretical explanations of these observations and investigate the error that is introduced by the regularization term in (3).
We start our discussion in Section 2 by recalling the Discontinuous Galerkin (DG) notation introduced in [1], which is needed to state the interior penalty formulation of (3)-(4) in Section 3. Section 3 also proves an a priori bound of the total error in terms of the best approximation error for piecewise-polynomial test and trial spaces \(V_{h}\). In Section 4 we analyze the particular case where \(V_{h}\) is the space of kth order edge functions, \(R^{k}\). Combining the results of Sections 3 and 4 we obtain rates of convergence for the regularized problem (3)-(4). Section 5 is devoted to the choice of the local length scale appearing in the Interior Penalty formulation and presents numerical experiments underlining the results of Sections 3 and 4. Section 6 discusses the role of the regularization parameter ε and how to choose it. We end our presentation with a short conclusion and outlook in Section 7.
Preliminaries
Before we can introduce the Symmetric Weighted Interior Penalty (SWIP) formulation of (3)-(4) we give some definitions and notations (cf. [1]):
Subdomains and submeshes:
Let us assume that the domain Ω, on which (3)-(4) is posed, is a simply connected polyhedron with Lipschitz boundary. Furthermore, we assume Ω to be split into two nonoverlapping subdomains, \(\overline{\Omega_{1}} \cup\overline{\Omega_{2}} = \overline{\Omega}\).
We introduce a sequence of simplicial meshes \(\mathcal{T}_{\mathcal{H}} = ( \mathcal{T}_{h} )_{h \in\mathcal{H}}\) on Ω. Here \(\mathcal{H}\) denotes a countable subset of \(\mathbb{R}^{+}\) having 0 as the only accumulation point. For each \(h \in\mathcal{H}\) we let \(\mathcal{T}_{h} \in\mathcal {T}_{\mathcal{H}}\) denote a particular mesh in the sequence \(\mathcal {T}_{\mathcal{H}}\) and we let \(T \in\mathcal{T}_{h}\) be a mesh element (tetrahedron). The meshwidth is defined as \(h= \max_{T \in\mathcal{T}_{h}} h_{T}\), where \(h_{T}\) is the diameter of element T.
We assume that each mesh \(\mathcal{T}_{h}\), which covers Ω, can be split into two conforming, nonoverlapping submeshes, \(\mathcal {T}_{h} = \mathcal{T}_{h,1} \cup\mathcal{T}_{h,2}\), that cover \(\Omega _{1}\) and \(\Omega_{2}\), respectively. As before we define \(\mathcal{T}_{\mathcal{H},1} = (\mathcal {T}_{h,1} )_{h \in\mathcal{H}}\) and \(\mathcal{T}_{\mathcal{H},2} = (\mathcal{T}_{h,2} )_{h \in\mathcal{H}}\).
Furthermore we define \(\mathbb{F}_{T}\) to be the set of the four facets of a tetrahedron T. The intersection of two facets is called an inner face, while the intersection of a facet with the boundary ∂Ω is called a boundary face. Note that facets are always triangular, while inner faces are convex polygons with up to six nodes and boundary faces can have virtually any polygonal shape (cf. Figure 2). We denote by \(\mathcal{F}_{h}^{b}\) the set of all boundary faces, by \(\mathcal{F}_{h}^{i}\) the set of all inner faces, and define \(\mathcal{F}_{h} = \mathcal{F}_{h}^{b} \cup\mathcal{F}_{h}^{i}\) to be the set of all faces. Furthermore, \(\mathcal{F}_{T}\) stands for the set of all faces which lie on the boundary of element T.
Mesh assumptions:
We assume that the elements are shape regular in the sense of Ciarlet: There is a constant \(\sigma_{\mathrm{max}}\), independent of h, such that for all \(h \in\mathcal{H}\) and for all \(T \in\mathcal{T}_{h}\) we have
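The displayed condition is the usual Ciarlet shape-regularity bound; written out (a reconstruction from the surrounding definitions), it reads:

```latex
\frac{h_{T}}{\rho_{T}} \leq \sigma_{\mathrm{max}}, \tag{5}
```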
where \(\rho_{T}\) is the radius of the largest ball inscribed in T. It is easy to check that this condition is satisfied if two sequences of static submeshes are moved against each other. We will make additional assumptions about the mesh when we discuss choices for the local length scale in Section 5.
Magnetic permeability:
We assume there exists a partition \(P_{\Omega}= \{ \Omega_{i, \mu} \}\) such that each \(\Omega_{i, \mu}\) is a polyhedron and such that the permeability \(\mu>0\) is constant on each \(\Omega_{i,\mu}\). Furthermore, the mesh sequence \(\mathcal{T}_{\mathcal{H}}\) is compatible with the partition \(P_{\Omega}\): for each \(\mathcal{T}_{h} \in\mathcal{T}_{\mathcal{H}}\), each element \(T \in\mathcal{T}_{h}\) belongs to exactly one \(\Omega_{i,\mu} \in P_{\Omega}\). That is, the magnetic permeability is constant on each element but may jump across element boundaries, and in particular over the nonconforming interface \(\Gamma:= \overline{\Omega}_{1} \cap \overline{\Omega}_{2}\).
Polynomial approximation:
Later on we will seek the discrete solution in the piecewise polynomial space (cf. [10])
where \(\mathcal{T}_{h} \in\mathcal{T}_{\mathcal{H}}\) and \(\mathbb {P}^{k}(T)\) is the usual space of polynomials up to total degree k on mesh element T. \(L^{2}(\Omega)\) is the usual space of square integrable functions on Ω. Note that functions of \(\mathbb{P}^{k}(\mathcal{T}_{h})\) are discontinuous across element boundaries.
Mesh faces, jump and average operators:
For each mesh face F and vector valued function \(\mathbf {A}_{h} \in \mathbb{P}^{k}(\mathcal{T}_{h})^{3}\), we define the tangential jump as follows:
The weighted average is defined similarly:
Here the normal \(\mathbf {n}_{F}\) always points from \(T_{1}\) to \(T_{2}\) and \(\omega_{1}, \omega_{2} \in[0,1]\) such that \(\omega_{1}+\omega_{2} = 1\). Note that the jump and average operators are well defined for all \(\mathbf {p} \in\mathbb{P}^{k}(\mathcal{T}_{h})^{3}\).
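As an illustration, the tangential jump and weighted average can be sketched in a few lines. The sign and orientation conventions below (jump as \((\mathbf{A}_1-\mathbf{A}_2)\times\mathbf{n}_F\), average as \(\omega_1\mathbf{A}_1+\omega_2\mathbf{A}_2\)) are assumptions consistent with standard DG notation, since the displayed definitions are not quoted here:

```python
# Hedged sketch of the tangential jump and weighted average across a face F.
# Conventions assumed: n_F points from T1 to T2, jump = (A1 - A2) x n_F.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def tangential_jump(A1, A2, n_F):
    """[A]_T = (A1 - A2) x n_F, the tangential jump across the face."""
    diff = tuple(x - y for x, y in zip(A1, A2))
    return cross(diff, n_F)

def weighted_average(A1, A2, w1, w2):
    """{A}_w = w1*A1 + w2*A2 with w1 + w2 = 1."""
    assert abs(w1 + w2 - 1.0) < 1e-12
    return tuple(w1*x + w2*y for x, y in zip(A1, A2))
```

A tangentially continuous field produces a zero jump, which is exactly why the inner jump terms vanish for \(\mathbf{H}(\mathbf{curl})\)-conforming spaces (see Remark 1).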
The following lemma relates the trace of a polynomial function to its \(L^{2}\) norm on the element (cf. [10], Lemma 1.46):
Lemma 1
(Discrete trace inequality)
Let \(\mathcal{T}_{\mathcal{H}}\) be a sequence of shape regular, possibly nonconforming, simplicial meshes. Then for all \(h \in\mathcal{H}\), all \(v_{h} \in\mathbb{P}^{k}(\mathcal {T}_{h})\), and all \(T \in\mathcal{T}_{h}\) we have
Herein \(C_{\mathrm{tr}}\) is independent of T and h but depends on \(\sigma_{\mathrm{max}}\) and k.
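The displayed inequality (7) is not quoted above; consistent with [10], Lemma 1.46, it presumably takes the form (for each face \(F \in \mathcal{F}_{T}\)):

```latex
\lVert v_{h} \rVert_{L^{2}(F)}
  \leq C_{\mathrm{tr}} \, h_{T}^{-1/2} \, \lVert v_{h} \rVert_{L^{2}(T)}. \tag{7}
```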
Function spaces:
We will use the spaces
Herein \(H^{s}(\Omega) = W^{s,2}(\Omega)\) is the Sobolev space of order s with integrability exponent \(p=2\). The associated norms and seminorms are
Symmetric Weighted Interior Penalty (SWIP) formulation
We choose an arbitrary subspace \(V_{h} \subseteq\mathbb{P}^{k}(\mathcal {T}_{h})^{3}\) as discrete test and trial space, and use integration by parts (cf. [10, 11] for details) to arrive at the SWIP formulation of (3): Find \(\mathbf {A}_{h} \in V_{h}\) subject to
Here,
where η is the penalty parameter. The last four terms of \(a_{h}^{\mathrm{SWIP}}\) are called consistency, symmetry, penalty, and regularization term, respectively. For an inner face \(F \in\mathcal{F}^{i}_{h}\), \(F=\partial T_{1} \cap \partial T_{2}\), we choose the weights
If F is a boundary face, \(F\in\mathcal{F}^{b}_{h}\), we choose \(\gamma_{\mu,F} := \mu^{-1}\). The term \(a_{F}\) is the local length scale of face F and can be chosen in different ways (e.g. \(a_{F} = \frac {1}{2}(h_{T_{1}} + h_{T_{2}})\) where \(h_{T_{1}}\), \(h_{T_{2}}\) are the diameters of the neighboring elements). For now we assume that there exists a constant \(\varsigma_{2} > 0\) such that for all \(h \in\mathcal{H}\), all \(T \in\mathcal{T}_{h}\), and all \(F \in\mathcal{F}_{T}\):
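A hedged sketch of how the weights and \(\gamma_{\mu,F}\) are typically chosen in SWIP schemes (diffusion-dependent weighting with a harmonic mean, cf. [10]); the paper's displayed formulas are not quoted above, so the concrete expressions below are an assumption:

```python
def swip_weights(mu1, mu2):
    """SWIP weights and penalty coefficient for an inner face between
    elements with permeabilities mu1, mu2.  Assumed convention: with
    diffusivities delta_i = 1/mu_i, omega_i is weighted by the *other*
    element's diffusivity, and gamma is the harmonic mean of mu^{-1}."""
    d1, d2 = 1.0 / mu1, 1.0 / mu2
    omega1 = d2 / (d1 + d2)
    omega2 = d1 / (d1 + d2)
    gamma_mu_F = 2.0 * d1 * d2 / (d1 + d2)  # harmonic mean of mu1^-1, mu2^-1
    return omega1, omega2, gamma_mu_F
```

For equal permeabilities this reduces to \(\omega_{1}=\omega_{2}=\frac12\) and \(\gamma_{\mu,F}=\mu^{-1}\), matching the boundary-face choice \(\gamma_{\mu,F} := \mu^{-1}\) above.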
In Section 5 we will look at concrete choices of \(a_{F}\) and discuss the circumstances under which (11) is fulfilled. It will turn out that depending on the choice of \(a_{F}\) we have to make additional assumptions about the mesh regularity to guarantee (11).
Remark 1
If \(V_{h} \subseteq \mathbf {H}(\mathbf {curl}; \Omega)\), then all inner tangential jumps in (9) drop out ([11], Lemma 3.8) and only jumps at the boundary remain, i.e. we are left with a standard FEM formulation where the inhomogeneous boundary conditions (2) are enforced in a weak sense.
A priori error estimate
In the following we derive an error estimate in the ‘energy norm’ for the variational problem (8).
Regularity of the exact solution:
We assume that the exact solution A of (3)-(4) (in the sense of distributions) is such that
Furthermore, we set \(V^{*}_{h} := V^{*} + V_{h}\). Note that, because A and \(\nabla\times \mathbf {A}\) are in \(H^{1}(P_{\Omega})^{3}\) the traces of A and \(\nabla\times \mathbf {A}\) are well defined on the faces of the mesh elements (cf. [10], Remark 1.26). Indeed, let \(T \in\mathcal{T}_{h}\) be a mesh element, then by the multiplicative trace inequality [12], Theorem 1.6.6, \(\lVert \mathbf {A} \rVert_{L^{2}(\partial T)^{3}} < C_{\mathrm{tr}} \lVert \mathbf {A} \rVert_{L^{2}(T)^{3}}^{1/2} \lVert \mathbf {A} \rVert_{H^{1}(T)^{3}}^{1/2}\), and the same estimate holds for \(\nabla\times \mathbf {A}\). Therefore we see that \(a_{h}^{\mathrm{SWIP}} : V^{*}_{h} \times V_{h} \to\mathbb {R}\) is well defined.
In order for the right-hand side to be well defined we assume \(\mathbf {j}^{i} \in L^{2}(\Omega)^{3}\) and \(\mathbf {g}_{D} \in L^{2}(\partial\Omega)^{3}\).
We begin the proof of the a priori error estimate by showing that the exact solution A fulfills equation (8):
Lemma 2
(Consistency)
Assume \(\mathbf {A} \in V^{*}\) is the exact solution of (3)-(4). Then, for all \({\mathbf {A}'_{h} \in V_{h}}\),
Proof
Since \(\mathbf {A} \in \mathbf {H}(\mathbf {curl},\Omega)\), A is tangentially continuous across all element boundaries (cf. Lemma 3.8 in [11]). Thus all inner jump terms drop out,
Note that the last two sums include only boundary faces. Next we make use of the following identity (which holds for any interior face \(F=\partial T_{1} \cap\partial T_{2}\))
Here \(\{ \mathbf {b} \}_{\overline{\omega}} := \omega_{2} \mathbf {b}_{1} + \omega_{1} \mathbf {b}_{2}\) is the skew-weighted average and \([ \mathbf {b} ]_{n} := (\mathbf {b}_{1} - \mathbf {b}_{2}) \cdot \mathbf {n}_{F}\) is the normal jump. Let us apply the identity to the second term of (13):
The second term on the right-hand side vanishes because A is a solution of the strong formulation (3). Thus \(\mu^{-1} \nabla\times \mathbf {A} \in \mathbf {H}(\mathbf {curl}; \Omega)\), which implies that \(\mu^{-1} \nabla\times \mathbf {A}\) is tangentially continuous. Note that for all \(F \in\mathcal{F}^{b}_{h}\): \(\{ \mu^{-1} \nabla\times \mathbf {A} \}_{\omega} \cdot [ \mathbf {A}'_{h} ]_{T} = - [ (\mu^{-1} \nabla \times \mathbf {A} ) \times \mathbf {A}'_{h} ]\cdot \mathbf {n}_{F}\), so we can rearrange the face contributions to the element boundaries,
Now substitute (14) into (13) and use integration by parts on each mesh element [13], Theorem 3.29:
□
Let us introduce the following (semi)norms on the space \(V^{*}_{h}\):
Lemma 3
(Bound on consistency term)
For all \(\mathbf {A}, \mathbf {A}' \in V^{*}_{h}\) there holds
Here \(h_{T} := \max \{ \lVert x-y \rVert \mid x,y \in \overline{T} \}\) is the diameter of mesh element \(T \in\mathcal{T}_{h}\).
Proof
For an arbitrary inner face \(F = \partial T_{1} \cap\partial T_{2}\) we have by the Cauchy-Schwarz (CS) inequality
By using Cauchy-Schwarz again we see that
Substitute this back into (15) to get
Similarly, for a boundary face \(F \in\mathcal{F}^{b}_{h}\) we have
Now use (16)(17) to bound the sum over all faces,
where we have regrouped the face contributions and used that \(a_{F} \leq \varsigma_{2} h_{T}\) in the last step, cf. (11). □
Using Lemma 3 we can finally prove discrete coercivity.
Lemma 4
(Discrete coercivity)
The bilinear form \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) is coercive: For all \(\eta > C_{\mathrm{tr}}^{2} \varsigma_{2}\) and all \(h \in\mathcal{H}\) there holds
with \(C_{\mathrm{stab}}=\min ( \frac{\eta-C_{\mathrm{tr}}^{2} \varsigma_{2}}{1+\eta}, 1 )\). The constant \(C_{\mathrm{tr}}\) stems from the discrete trace inequality (7) and is independent of h, μ, ε, \(\varsigma_{2}\).
Proof
By definition of \(a_{h}^{\mathrm{SWIP}}\) we have
Now let us give a bound on the second term on the right-hand side using Lemma 3,
where we have used the discrete trace inequality (7) componentwise in the last step. Hence,
Now use the inequality \(x^{2}-2\beta x y + \eta y^{2} \geq\frac{\eta-\beta^{2}}{1+\eta}(x^{2}+y^{2})\), which holds for arbitrary β, x, y and all \(\eta > -1\) (it follows from \((\beta x-\eta y)^{2}+(x-\beta y)^{2} \geq0\)):
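The elementary inequality can be verified numerically; the following sketch samples it at random (the restriction \(1+\eta>0\) is needed when dividing by \(1+\eta\), which is harmless here since the penalty parameter is positive):

```python
# Numeric sanity check of the algebraic inequality used in the coercivity
# proof: x^2 - 2*b*x*y + eta*y^2 >= (eta - b^2)/(1 + eta) * (x^2 + y^2),
# which follows from (b*x - eta*y)^2 + (x - b*y)^2 >= 0 whenever eta > -1.
import random

def inequality_holds(x, y, b, eta, tol=1e-9):
    lhs = x*x - 2.0*b*x*y + eta*y*y
    rhs = (eta - b*b) / (1.0 + eta) * (x*x + y*y)
    return lhs >= rhs - tol

random.seed(0)
assert all(inequality_holds(random.uniform(-10, 10),
                            random.uniform(-10, 10),
                            random.uniform(-10, 10),
                            random.uniform(0.01, 10))
           for _ in range(2000))
```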
Finally, we note that \(C_{\mathrm{stab}}>0\) if \(\eta> C_{\mathrm{tr}}^{2} \varsigma_{2} \) which completes the proof. □
Lemma 5
(Boundedness)
There exists a constant \(C_{\mathrm{bnd}}>0\) independent of h, μ, and ε such that for all \(\mathbf {A} \in V^{*}_{h}\), all \(\mathbf {A}'_{h} \in V_{h}\), all \(h\in\mathcal{H}\)
Proof
We start by splitting the bilinear form \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) into five terms,
We can now bound these terms individually,
□
Finally, we can combine the previous results into one theorem.
Theorem 1
(Error estimate)
Let \(\mathbf {A}\in V^{*}\) be a solution of the strong formulation (3)-(4) (in the sense of distributions) and let \(\mathbf {A}_{h} \in V_{h} \subseteq\mathbb {P}^{k}(\mathcal{T}_{h})^{3}\) solve the variational formulation (8). Then there exist constants \(C>0\), \(C_{\eta}>0\), both independent of h, μ, and ε, such that for \(\eta> C_{\eta}\)
and the discrete problem (8) is wellposed. The constant \(C_{\eta}\) depends on \(\varsigma_{2}\), k and C depends on η, \(\varsigma_{2}\), k.
This theorem tells us that the total error is bounded by the best approximation error (w.r.t. suitable norms). Note that we did not make any assumptions on how the submeshes \(\mathcal {T}_{h,1}\) and \(\mathcal{T}_{h,2}\) meet at Γ. In order to get rates of convergence we will have to make additional assumptions about the approximation space \(V_{h}\) and the exact solution A. This will be the topic of Section 4.
Proof of Theorem 1
In this proof C denotes an arbitrary, positive constant that is independent of h, μ, ε, and that may take a different value every time it is used. We begin by picking an arbitrary \(\mathbf {v}_{h} \in V_{h}\). Then, by the triangle inequality,
This is almost the statement of Theorem 1. It remains to bound \(\lVert \mathbf {A}_{h}-\mathbf {v}_{h} \rVert _{\mathrm{SWIP}}\),
Inserting this bound into (19) (which holds for arbitrary \(\mathbf {v}_{h}\)) yields the assertion. Note that the bilinear form \(a_{h}^{\mathrm{SWIP}}\) is coercive (Lemma 4) and bounded (finite dimensions). Thus, the Lax-Milgram lemma assures the well-posedness of the discrete problem. □
Remark 2
Observe that for \(\varepsilon\to0\) the variational formulation (8) becomes ill-posed. To see this we observe that the \(\lVert\cdot \rVert _{\mathrm{SWIP}}\) norm ‘becomes’ a seminorm as \(\varepsilon\to0\). In order to study the behavior as \(\varepsilon\to0\) it is thus desirable to state the discrete coercivity (Lemma 4) w.r.t. a norm that does not depend on ε: we use that \(\lVert \mathbf {A}_{h} \rVert_{\mathrm{SWIP}}^{2} \geq \varepsilon \lVert\mu^{-1/2}\mathbf {A}_{h} \rVert_{L^{2}}^{2}\) and thus Lemma 4 can be rewritten as
We now see clearly that the coercivity constant depends linearly on ε, i.e. the discrete problem becomes ill-posed as \(\varepsilon\to0\).
Rate of convergence for edge functions
In the following we will bound the best approximation error appearing in Theorem 1 for edge functions of the first kind. For this we assume, in addition to (11), that \(a_{F}\) is uniformly bounded from below in the sense that there exists a constant \(\varsigma_{1}\) such that for all \(h \in\mathcal{H}\), all \(T \in\mathcal{T}_{h}\) and all \(F \in\mathcal{F}_{T}\) we have
For the remainder of this section, let us choose \(V_{h}=R^{k}(\Omega_{1}) \oplus R^{k}(\Omega_{2}) \subset\mathbb{P}^{k}(\mathcal{T}_{h})^{3}\), where \(R^{k}\) is the space of kth order edge functions (\(k=1\) gives the lowest order, \(\mathbf {H}(\mathbf {curl})\)-conforming Whitney elements, cf. [13], Eq. (5.32)). Because the submeshes \(\mathcal{T}_{h,1}\), \(\mathcal{T}_{h,2}\) are conforming, the spaces \(R^{k}(\Omega_{1})\), \(R^{k}(\Omega_{2})\) are \(\mathbf {H}(\mathbf {curl})\) conforming. We can thus use the standard projection operator \(\mathbf {r}_{h}\) as it is defined in [13], Section 5.5, for edge functions on \(\Omega_{1}\), \(\Omega_{2}\) to build our global projection operator \(\mathbf {\pi }_{h} : V^{*} \to V_{h}\),
The following theorem then gives an upper bound for the best approximation error of Theorem 1:
Theorem 2
Assume the exact solution of (3)-(4) is such that \(\mathbf {A} \in \mathbf {H}(\mathbf {curl},\Omega ) \cap H^{s}(\{\Omega_{1}, \Omega_{2}\})^{3}\), \(\nabla\times \mathbf {A} \in H^{s}(\{\Omega_{1}, \Omega_{2}\})^{3}\) with integer \(1 \leq s \leq k\). Then
Here C depends on μ, \(\varsigma_{1}\), k, ε but not on h. Moreover, if \(\varepsilon< 1\), C is independent of ε.
Remark 3
By combining Theorem 2 with Theorem 1 we see that for a sufficiently smooth exact solution A, the total error \(\lVert \mathbf {A}-\mathbf {A}_{h} \rVert_{\mathrm{SWIP}} = O(h^{k-1})\) if kth order edge functions are used. In comparison to standard FEM on conforming meshes, one order of convergence is lost. Theoretically it is possible that there exists another projector \(\tilde{\pi}_{h}\) which would give a better rate of convergence, but numerical experiments show that Theorem 2 is sharp for \(k=1\) (see Section 5).
In order to prove the above theorem we will make use of two lemmas to bound the face contributions.
Lemma 6
Let \(\mathcal{T}_{\mathcal{H},1}\) be a sequence of shape regular, conforming, simplicial meshes of the domain \(\Omega_{1}\). Suppose there exists an integer \(1 \leq s \leq k\) such that \(\mathbf {u} \in H^{s}(\Omega_{1})^{3}\) and \(\nabla\times \mathbf {u} \in H^{s}(\Omega_{1})^{3}\). Then \(\forall T \in\mathcal{T}_{h,1} \in\mathcal{T}_{\mathcal{H},1}\)
where C is independent of \(h_{T}\), T.
For the proof of Lemma 6 we refer the reader to [13], Lemma 5.52 (which is proven elementwise).
Lemma 7
Let \(\mathcal{T}_{\mathcal{H},1}\) be a sequence of shape regular, conforming, simplicial meshes of \(\Omega_{1}\). Assume \(\mathbf {u} \in H^{s}(\Omega_{1})^{3}\) for some integer \(1 \leq s \leq k\) and that u transforms such that it preserves the divergence, i.e. if \(F : \hat{T} \to T\), \(\hat{\mathbf {u}} \mapsto \mathbf {u}\) is an arbitrary mapping then u transforms as
Then the following estimate holds:
where \(\mathbf {w}_{T} : H^{1}(T)^{3} \to D_{k}\) is the standard (local) interpolation operator for kth order Raviart-Thomas elements \(D_{k}\) [13], Section 5.4. The constant C does not depend on \(h_{T}\), T.
Proof
In order to simplify notation we will assume in this proof that \(C>0\) is an arbitrary constant independent of h, T that may take a different value every time it is used. We note that since \(\mathbf {u} \in H^{s}(T)^{3}\), \(\mathbf {w}_{T}\mathbf {u}\) is well defined by [13], Lemma 5.15. Now split the integral over ∂T into its facet contributions,
Since our mesh contains only tetrahedra we can find for every \(F_{T} \in\mathbb{F}_{T}\) a linear transformation \(\Phi_{T, F_{T}} : \hat{T} \to T\) which maps the reference element T̂ onto the actual element T such that the preimage \(\hat{F}_{T}\) of facet \(F_{T}\) lies in the xy plane of T̂,
where \(\mathbf {B}_{T,F_{T}} \in\mathbb{R}^{3\times3}\). Now using the usual change of variables together with (22) we obtain
Here \((\mathbf {B}_{T,F_{T}} )_{:,i}\) denotes the ith column of \(\mathbf {B}_{T,F_{T}}\), \(\widehat{\mathbf {w}_{T} \mathbf {u}}\) is defined by (22), and we have used that \(\widehat{\mathbf {w}_{T} \mathbf {u}} = \mathbf {w}_{\hat{T}} \hat{\mathbf {u}}\) [13], Lemma 5.22. Now notice that \(\hat{\mathbf {u}}\mathbf {w}_{\hat{T}}\hat{\mathbf {u}} \in H^{s}(\hat{T})^{3}\) and thus we can use the trace inequality [13], Theorem 3.9,
For the next step we note that \(\forall\phi\in\mathbb{P}^{k-1}(\hat {T})^{3} \subset D_{k}(\hat{T})\) we have \(\phi= \mathbf {w}_{\hat{T}} \phi \) by the definition of \(\mathbf {w}_{\hat{T}}\) (\(D_{k}\) is the \(\mathbf {H}(\operatorname{div}; \Omega)\) conforming Raviart-Thomas space, see [13], Section 5.4). Therefore,
where we have used that \(\mathbf {w}_{\hat{T}} : H^{1}(\hat{T})^{3} \to D_{k}\) is a bounded operator, i.e. \(\lVert \mathbf {w}_{\hat{T}} \hat{\mathbf {u}} \rVert_{H^{1}(\hat {T})^{3}} \leq C \lVert\hat{\mathbf {u}} \rVert_{H^{1}(\hat {T})^{3}}\) (cf. proof of [13], Theorem 5.25). Since ϕ is arbitrary we can use the Deny-Lions theorem [13], Theorem 5.5,
Finally we have to map \(\lvert\hat{\mathbf {u}} \rvert _{H^{s}(\hat{T})^{3}}\) back to the actual element T. For this observe that using (22),
where α is a multi-index with \(\lvert\alpha \rvert_{\ell^{1}} = s\). Therefore,
where we have used [13], Lemma 5.9, in the last step. Now combining (23)-(25) gives
Here we have used [13], Lemma 5.10, together with the fact that the mesh sequence is shape regular. Now summing over all facets \(F_{T} \in\mathbb{F}_{T}\) yields the assertion. □
Using these lemmas we can finally give a bound for \(\lVert \mathbf {A}-\mathbf {\pi }_{h} \mathbf {A} \rVert_{{\mathrm{SWIP}},*}\).
Proof of Theorem 2
In order to simplify notation, C denotes in this proof an arbitrary, positive constant that is independent of h. Note that the interpolation operator \(\mathbf {r}_{h} ( \mathbf {A} \vert _{\Omega_{1}} )\) is well defined for \(s\geq1\) by the Sobolev Embedding Theorem and [13], Lemma 5.38. Because the submeshes of \(\Omega_{1}\), \(\Omega_{2}\) are conforming themselves, \(\mathbf {r}_{h} ( \mathbf {A} \vert _{\Omega_{1}} )\) is tangentially continuous across all inner, conforming faces. The same holds for \(\Omega_{2}\), and because \(\mathbf {A} \in \mathbf {H}(\mathbf {curl};\Omega )\) the exact solution is also tangentially continuous across all inner faces. Therefore only jump terms across the faces \(F \in\mathcal {F}^{\Gamma,b}_{h} := \mathcal{F}^{b}_{h} \cup \{ F \in\mathcal{F}^{i}_{h} \mid F \subset\overline{\Omega}_{1} \cap\overline{\Omega}_{2}\}\) remain in the definition of the jump seminorm \(\lvert\cdot \rvert_{j,\mu}\), i.e. we have to bound
Since μ is piecewise constant on each \(\Omega_{i,\mu} \in P_{\Omega }\) there are constants \(\mu_{\mathrm{min}}\), \(\mu_{\mathrm{max}}\) such that \(0 < \mu_{\mathrm{min}} \leq\mu\leq\mu_{\mathrm{max}}\). \(T_{1}\) and \(T_{2}\) are easily bounded using [13], Theorem 5.41:
The term \(T_{3}\) is bounded using Lemma 6,
Here we have used that \(a_{F} \geq\varsigma_{1} h_{T}\).
In order to bound the term \(T_{4}\) we first note that the global Raviart-Thomas interpolation operator \(\mathbf {w}_{h} ( \nabla \times \mathbf {A} \vert _{\Omega_{i}} )_{i=1,2}\) is well defined by [13], Lemma 5.15. Thus, \(\nabla\times [ \mathbf {r}_{h} ( \mathbf {A} \vert _{\Omega_{i}} ) ] = \mathbf {w}_{h} ( \nabla\times \mathbf {A} \vert _{\Omega_{i}} )\) by [13], Lemma 5.40, and we can bound \(T_{4}\) as follows:
where we have used Lemma 7 and the fact that \(h_{T} \leq h\). □
Remark 4
From the proof of Theorem 2 it is clear that for h sufficiently small the term \(T_{3}\) dominates the other three terms and is thus responsible for the loss of one order of convergence as pointed out in Remark 3. Interestingly, \(T_{3}\) sums the jump terms only over the faces \(\mathcal{F}^{\Gamma,b}_{h}\). This suggests that it suffices to use \((k+1)\)th order edge functions in elements adjacent to \(\mathcal{F}^{\Gamma,b}_{h}\) and kth order edge functions everywhere else to achieve \(O(h^{k})\) convergence. This can be implemented easily by using a hierarchical basis for the edge functions [14].
The local length scale \(a_{F}\) and hconvergence
So far we have assumed that the local length scale \(a_{F}\) fulfills (11), (21) in order to derive an a priori error estimate, i.e. \(0 < \varsigma_{1} h_{T} \leq a_{F} \leq\varsigma_{2} h_{T}\). We will now study the following three concrete choices for \(a_{F}\):
1. \(a_{F}^{(1)} := \frac{1}{2}(h_{T_{1}} + h_{T_{2}})\) if \(F \in\mathcal{F}^{i}_{h}\) and \(a_{F}^{(1)}=h_{T}\) for \(F \in\mathcal{F}^{b}_{h}\), see [10], Remark 4.6,
2. \(a_{F}^{(2)} := \min(h_{T_{1}}, h_{T_{2}})\) if \(F \in\mathcal{F}^{i}_{h}\) and \(a_{F}^{(2)}=h_{T}\) for \(F \in\mathcal{F}^{b}_{h}\), see [6],
3. \(a_{F}^{(3)} := h_{F}\) if \(F \in\mathcal{F}_{h}\), see [9, 10],
where \(h_{T_{1}}\), \(h_{T_{2}}\) are the diameters of the adjacent elements of face F and \(h_{F}\) is the diameter of face F. It turns out that for each choice of \(a_{F}\) we have to make additional assumptions on the mesh such that \(a_{F}\) fulfills (11), (21). So once we have chosen a concrete \(a_{F}\) we can think of \(\varsigma_{1}\), \(\varsigma_{2}\) as mesh-dependent parameters. The important point is that the constants C in Theorems 1 and 2 depend on the constants \(\sigma_{\mathrm{max}}\), \(\varsigma_{1}\), \(\varsigma_{2}\), but they do not depend in any other way on the shape of the underlying meshes. Hence, if we can show that \(\sigma_{\mathrm{max}}\), \(\varsigma_{1}\), \(\varsigma_{2}\) are independent of the way that \(\mathcal{T}_{\mathcal {H},1}\), \(\mathcal{T}_{\mathcal{H},2}\) intersect at the sliding interface Γ, then there must be an upper bound on the total error \(\lVert \mathbf {A}-\mathbf {A}_{h} \rVert\) that is independent of the relative position of \(\mathcal{T}_{\mathcal{H},1}\) to \(\mathcal{T}_{\mathcal {H},2}\) and that tends to 0 as \(h \to0\).
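The three candidate length scales listed above can be sketched directly (for boundary faces, \(a_F^{(1)}\) and \(a_F^{(2)}\) fall back to the single neighbor's diameter \(h_T\)):

```python
def a_F_1(h_T1, h_T2=None):
    """Arithmetic mean of the neighboring element diameters (inner face);
    single neighbor's diameter on a boundary face."""
    return h_T1 if h_T2 is None else 0.5 * (h_T1 + h_T2)

def a_F_2(h_T1, h_T2=None):
    """Minimum of the neighboring element diameters (inner face);
    single neighbor's diameter on a boundary face."""
    return h_T1 if h_T2 is None else min(h_T1, h_T2)

def a_F_3(h_F):
    """Diameter of the face itself; for sliding meshes this can become
    arbitrarily small compared to h_T (sliver faces), which is why (21)
    may fail for this choice."""
    return h_F
```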
Let us now discuss the precise conditions on the mesh for each choice of \(a_{F}\): For \(a_{F}^{(1)}\), \(a_{F}^{(2)}\) we require \(\mathcal{T}_{\mathcal{H}}\) to be quasi-uniform at the sliding interface Γ:
Definition 1
A mesh sequence \(\mathcal{T}_{\mathcal{H}}\) is said to be quasi-uniform at Γ if it is shape regular (5) and if there exists a constant \(\sigma_{1}>0\) such that for all \(h \in\mathcal{H}\), all \(T \in\mathcal{T}^{\Gamma}_{h} := \{ T \in\mathcal{T}_{h} \mid \partial T \cap\Gamma\neq \emptyset \}\):
Lemma 8
If the mesh is quasi-uniform at Γ then \(a_{F}^{(1)}\), \(a_{F}^{(2)}\) fulfill conditions (11), (21) and the constants \(\sigma_{\mathrm{max}}\), \(\varsigma_{1}\), \(\varsigma_{2}\) are independent of the way \(\mathcal{T}_{\mathcal{H},1}\), \(\mathcal {T}_{\mathcal{H},2}\) intersect at Γ.
Proof
\(a_{F}^{(1)} \geq\frac{1}{2} h_{T_{i}}\) follows immediately from the definition for \(i=1,2\). For the other direction we use (27) and get \(a_{F}^{(1)} \leq\frac{1}{2}(1 + \sigma_{1}^{-1}) h_{T_{i}}\). Moreover, \(\sigma_{1} h_{T_{i}} \leq a_{F}^{(2)} \leq h_{T_{i}}\). □
The lemma above asserts that the choices \(a_{F}^{(1)}\), \(a_{F}^{(2)}\) lead to a method that converges independently of the way that the two mesh sequences \(\mathcal{T}_{\mathcal{H},1}\), \(\mathcal{T}_{\mathcal {H},2}\) intersect at Γ. In particular, the faces can be very tiny ‘slivers’ (i.e. triangles with high aspect ratio). But note that the choice of \(a_{F}\) determines the required minimum value of the penalty parameter (see Lemma 4).
By substituting \(a_{F}^{(3)}\) into (21) we see that we need an estimate of the form \(h_{F} \geq\varsigma_{1} h_{T}\) in order for Theorem 2 to hold. However, if two meshes are sliding against each other such an estimate is not feasible, since \(h_{F}\) can become arbitrarily small in comparison to \(h_{T}\). In other words, the constant \(\varsigma_{1}\) depends on the way \(\mathcal{T}_{\mathcal{H},1}\) intersects with \(\mathcal{T}_{\mathcal{H},2}\). Nevertheless, using \(a_{F}^{(3)}\) in the variational formulation (8) seems to work in practice (see below).
Numerical examples
We study the behavior of the SWIP formulation for the three different choices of the local length scale \(a_{F}\). As in [1] we consider a 3D sphere of radius 1 that is split into two hemispheres, \(\Omega_{1}\) and \(\Omega_{2}\) (cf. Figure 3). For each hemisphere we create a sequence of quasi-uniform meshes, \(\mathcal{T}_{\mathcal{H},1}\) and \(\mathcal{T}_{\mathcal{H},2}\), which approximate the boundary linearly. We impose the analytical solution \(\mathbf {A} = (\sin y, \cos z, \sin x)\) and choose \(\mathbf {j}^{i}\), \(\mathbf {g}_{D}\) such that they fulfill (3)-(4).
Figure 4 shows the error in the curl-seminorm for different angles of rotation, for all three choices of \(a_{F}\), and for different mesh sizes h. Although the error depends slightly on the angle, it converges to zero for all three formulations as h decreases. Moreover, the choices \(a_{F}^{(1)}\), \(a_{F}^{(2)}\) yield similar results which are slightly better than those for \(a_{F}^{(3)}\).
In order to illustrate the estimates of Theorems 1 and 2 we plot the error for a series of quasi-uniform meshes in Figure 5 for \(a_{F} = a_{F}^{(3)}\).^{Footnote 1} As in [1] no convergence is observed when first-order edge functions (\(k=1\)) are employed, which implies that Theorem 2 is sharp for \(k=1\). For \(k=2\) and \(k=3\) we observe rates of convergence \(O(h^{1.5})\) and \(O(h^{2.7})\), respectively, which affirms Theorems 1 and 2.
For \(V_{h} = \mathbb{P}^{1}(\mathcal{T}_{h})^{3}\) we observe the rate of convergence \(O(h)\), i.e. no order of accuracy is lost. This is because \(\mathbb{P}^{1}(\mathcal{T}_{h})^{3}\) spans the full polynomial space (see [11] for a proof).
Remark 5
The observed rates for \(k=2\) and \(k=3\) are higher than the rates \(O(h)\), \(O(h^{2})\) which we expect from Theorem 2. This is due to the better approximation properties of the edge functions in the interior of the two hemispheres (cf. Remark 4).
Remark 6
Strictly speaking this numerical experiment does not fit the framework developed in Sections 3, 4 because Ω is not a polyhedron. Extending the theory to domains with curved boundary ∂Ω is beyond the scope of this paper but Figure 5 suggests that the order of convergence for \(k=2\), \(k=3\) is the same as for polyhedral domains.
The regularization parameter ε
So far we have looked at the regularized system (3)-(4) and it was shown that the proposed method yields the expected rates of convergence for \(\varepsilon> 0\). However, genuine magnetostatics amounts to choosing \(\varepsilon= 0\). We will consider two approaches to solve the system (3)-(4) with \(\varepsilon=0\): on the one hand we will try to set \(\varepsilon=0\) directly, and on the other hand we will study the effect of choosing ε small enough that the error due to regularization is negligible.
The case \(\varepsilon=0\)
If we set \(\varepsilon=0\), the boundary value problem (3)-(4) does not have a unique solution. Indeed the continuous curl-curl operator has an infinite-dimensional kernel and the nonzero eigenvalues are well separated from 0 ([13], Corollary 4.8). If one uses \(\mathbf {H}(\mathbf {curl})\)-conforming edge functions of the first kind on a conforming mesh it can be shown that the discrete curl-curl operator has a (finite-dimensional) kernel and that the discrete eigenvalues are well separated from it ([13], discrete Friedrichs inequality, Lemma 7.20), i.e. edge functions of the first kind yield a spectrally accurate discretization of the curl-curl operator. From a theoretical point of view it remains unclear whether this property carries over to the SWIP formulation (8), cf. [15].
Therefore the spectrum of the \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) bilinear form is investigated in a numerical experiment. The setup is very similar to that of the previous section: the domain Ω consists of two hemispheres which can be rotated against each other by an angle θ. This time, however, we only assemble the matrix of the \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) bilinear form with \(\varepsilon=0\), \(a_{F} = a_{F}^{(3)}\) ^{Footnote 2} and compute its eigenvalues using the eig routine of MATLAB R2013a.
Figure 6 shows the smallest and largest nonzero eigenvalues of the SWIP formulation for different mesh widths h and different angles θ (dashed, blue lines); an eigenvalue has been classified as nonzero if its absolute value is greater than 10^{−12}. For comparison we have also plotted the eigenvalues of a standard \(\mathbf {H}(\mathbf {curl})\)-conforming discretization using second-order edge functions on the conforming grid with \(\theta=0\) (green lines).
We see that the bandwidth of the SWIP eigenvalues is comparable to the bandwidth of the \(\mathbf {H}(\mathbf {curl})\)-conforming discretization for many angles. But we also observe that for some angles the lower end of the spectrum tends to zero. In order to better understand this phenomenon we plotted the smallest/largest nonzero eigenvalues of the SWIP discretization against θ for one mesh size (Figure 7). We now see that the lower end of the spectrum deteriorates as \(\theta \to0\), i.e. we can expect spectral pollution for very small angles. This agrees with the observations of [8].
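The paper uses MATLAB's eig; as a toy illustration (a random SPSD matrix with known spectrum, not the actual SWIP Galerkin matrix), the classification rule with threshold 10^{−12} and the resulting nonzero spectral bounds can be sketched in NumPy:

```python
import numpy as np

def nonzero_spectrum(K, tol=1e-12):
    """Smallest/largest eigenvalue of the symmetric matrix K whose
    absolute value exceeds tol (the classification used for Figure 6)."""
    ev = np.linalg.eigvalsh(K)     # all eigenvalues, in ascending order
    nz = ev[np.abs(ev) > tol]      # discard the (numerical) kernel
    return nz[0], nz[-1]

# Toy SPSD matrix with known spectrum {0, 1, 4}
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal Q
K = Q @ np.diag([0.0, 1.0, 4.0]) @ Q.T
lam_min, lam_max = nonzero_spectrum(K)             # ≈ (1.0, 4.0)
```

The zero eigenvalue is computed only up to rounding (≈10^{−16} here), which is why an absolute threshold rather than an exact comparison is needed.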
The previous considerations indicate that the \(a_{\mathrm{h}}^{\mathrm{SWIP}}\) bilinear form is not suitable for solving the Maxwell eigenvalue problem. However, in this work we are concerned with the curl-curl source problem (3)-(4). Although the Galerkin matrix becomes singular for \(\varepsilon=0\) we can in principle still solve the linear system if it is consistent, i.e. if the right-hand side lies in the range of the Galerkin matrix. Then the solution \(\mathbf {A}_{h}\) is no longer unique, but \(\mathbf {curl}{\mathbf {A}_{h}}\) is.
We attempt to solve the linear system of equations using the conjugate gradient (CG) method [2]. In [16] it is shown that the CG method converges for consistent, symmetric positive semidefinite problems and that its rate of convergence is determined by the nonzero eigenvalues. In particular, the number of CG iterations is related to the generalized condition number \(\kappa= \frac{\lambda_{\mathrm{max}}}{\lambda_{{\mathrm{min}}}}\), where \(\lambda_{\mathrm{min}}\) is the smallest nonzero eigenvalue of the system matrix. A glance at Figure 7 makes clear that \(\kappa\to\infty\) as \(\theta\to0\); hence the number of CG iterations should increase as \(\theta\to0\).
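The behavior described in [16] can be demonstrated on a toy problem (a hand-rolled CG on a 3×3 graph Laplacian, not the paper's solver or matrices): for a consistent singular SPSD system the iterates converge, and \(K x\) is unique even though \(x\) is determined only up to the kernel.

```python
import numpy as np

def cg(A, b, tol=1e-6, maxit=500):
    """Plain conjugate gradients started at x0 = 0; for a consistent
    SPSD system the iterates stay in range(A) and converge, with a rate
    governed by the nonzero part of the spectrum."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for it in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            return x, it + 1
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit

# Singular but consistent: K has kernel span{(1,1,1)}, and b ⟂ kernel
K = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
b = np.array([1., 0., -1.])      # b · (1,1,1) = 0, so b lies in range(K)
x, iters = cg(K, b)              # K @ x reproduces b although K is singular
```

If instead b had a sizeable kernel component the residual would stagnate, which is why the consistency of the discrete right-hand side matters below.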
This has been confirmed in a numerical experiment: we take the example from Section 5 with the same analytical solution and choose the right-hand side \(\mathbf {j}^{i} = \nabla \times(\nabla\times \mathbf {A})\) (\(\varepsilon= 0\), \(\mu\equiv1\)). Table 1 provides the number of CG iterations required to reach the prescribed tolerance 10^{−6}. We see that without a preconditioner the computational cost for the angle \(\theta=10^{-6}\) is almost 6 times larger than for \(\theta= 10^{-1}\). For comparison we also list the number of iterations needed when the multilevel ILU decomposition ILUPACK is employed^{Footnote 3} [17]. In this case the number of iterations also increases, but the factor 6 is reduced to ≈3.15.
Remark 7
Although the right-hand side \(\mathbf {j}^{i}\) chosen in the numerical experiment above is clearly divergence free, there is no guarantee that its discrete counterpart \(\ell_{h}\) is, i.e. it is not clear that the right-hand side vector b associated with \(\ell_{h}\) lies in the range of the system matrix. We have investigated this by splitting b into a part that lies in the kernel of the system matrix, \(\tilde {\mathbf {b}}\), and its orthogonal complement, \(\tilde{\mathbf {b}}^{\perp}\). It turned out that for all angles \(\lVert\tilde{\mathbf {b}}\rVert_{2} / \lVert \mathbf {b} \rVert_{2} \approx 10^{-9}\), which seems to be sufficient for CG to converge.
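The consistency check of Remark 7 can be mimicked on a small example (the matrix below is a hypothetical stand-in for the SWIP system matrix, with an artificial 10^{−9} kernel pollution added to b): project b onto an orthonormal basis of the numerical kernel and compare norms.

```python
import numpy as np

K = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])            # SPSD, kernel = span{(1,1,1)}
b = np.array([1., 0., -1.]) \
    + 1e-9 * np.array([1., 1., 1.])       # range part + tiny kernel pollution

ev, V = np.linalg.eigh(K)                 # eigendecomposition of K
Z = V[:, np.abs(ev) < 1e-12]              # orthonormal basis of ker(K)
b_ker = Z @ (Z.T @ b)                     # component of b in ker(K)
b_range = b - b_ker                       # component in range(K)
ratio = np.linalg.norm(b_ker) / np.linalg.norm(b)
# a small ratio indicates the discrete right-hand side is numerically
# consistent, so CG can be expected to converge
```

For the actual Galerkin matrices an eigendecomposition is of course too expensive; the point is only the splitting itself.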
We can conclude that setting \(\varepsilon= 0\) is in principle possible if the right-hand side vector b lies in the range of the system matrix. However, checking this for nonzero right-hand sides \(\mathbf {j}^{i}\) is a nontrivial task because we do not know the kernel of the system matrix associated with \(a_{h}^{\mathrm{SWIP}}\) a priori. Moreover, the system matrix becomes ill-conditioned as the angle \(\theta \to0\), which causes an increase in the number of CG iterations.
Remark 8
For \(\mathbf {H}(\mathbf {curl})\) conforming discretizations, which fulfill the discrete sequence property, the kernel of the system matrix is known. In particular it is easily proven that \(\operatorname {div}\mathbf {j}^{i} = 0\) implies that \(\ell_{h}\) lies in the range of the system matrix. Unfortunately it is not clear whether this property carries over to the SWIP formulation (8) because to the best of our knowledge there exists no characterization of the kernel of \(a_{h}^{\mathrm{SWIP}}\) on arbitrarily nonconforming meshes.
The case \(0<\varepsilon\ll1\)
We saw in the previous section that setting \(\varepsilon=0\) is in practice not feasible. Therefore we study a different approach: we choose ε so small that the error due to regularization becomes negligible. To make this more explicit we bound the total error between the discrete, regularized solution \(\mathbf {A}_{h}^{\varepsilon}\) and the exact solution \(\mathbf {A}^{0}\) of (3)-(4) with \(\varepsilon= 0\) by two contributions,

\[ \lVert \nabla\times \bigl( \mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{0} \bigr) \rVert_{L^{2}(\Omega)^{3}} \leq \lVert \nabla\times \bigl( \mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{\varepsilon} \bigr) \rVert_{L^{2}(\Omega)^{3}} + \lVert \nabla\times \bigl( \mathbf {A}^{\varepsilon} - \mathbf {A}^{0} \bigr) \rVert_{L^{2}(\Omega)^{3}}, \]

herein \(\mathbf {A}^{\varepsilon}\) is the exact solution of the regularized system (3)-(4). Clearly the second term is independent of the discretization and thus of h, but it depends on ε for a given problem. Conversely, the first term depends on h but is independent of ε because the constant C of Theorems 1 and 2 is independent of ε.
It is thus desirable to choose ε small enough such that \(\lVert\nabla \times(\mathbf {A}^{\varepsilon} - \mathbf {A}^{0}) \rVert _{L^{2}(\Omega)^{3}} \ll\lVert \nabla\times({\mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{\varepsilon}})\rVert _{L^{2}(\Omega)^{3}}\). However, as \(\varepsilon\to0\) the discrete problem becomes ill-posed and solvers typically fail to converge, cf. Remark 2, Section 6.1.
We try to circumvent this problem by two approaches:

- For small problems we use the sparse Cholesky decomposition of PARDISO [18] (Intel MKL Version 11.2) and solve the linear system of equations directly.

- For problems whose Cholesky decomposition does not fit into memory we use the conjugate gradient method together with ILUPACK [17] as a preconditioner (using the settings of Section 6.1).
Remark 9
We are only interested in the curl of the solution, i.e. the magnetic field B. If we were to look at A instead of \(\nabla\times \mathbf {A}\), then \(\lVert \mathbf {A}^{\varepsilon}_{h} - \mathbf {A}^{\varepsilon}\rVert_{L^{2}(\Omega)^{3}}\) would not be independent of ε, as can be seen from Theorem 1.
The following Lemma gives us a guideline for choosing ε.
Lemma 9
If we impose homogeneous Dirichlet data, \(\mathbf {g}_{D} \equiv0\), we have,
where α is the smallest nonzero eigenvalue of \(\nabla\times (\mu^{-1} \nabla\times \mathbf {A} ) = \alpha \mathbf {A}\). If μ is constant in Ω we can choose \(\alpha= 2\pi^{2} /l_{\mathrm{max}}^{2}\) where \(l_{\mathrm{max}}\) is the maximum side length of a cuboid that contains Ω.
The first part of this lemma is proven in [19], Lemma 2.1, and for the second part we use a result of [20], Section 4.
By comparing (29) with (28) we see that \(\lVert\mu^{-1} \nabla\times(\mathbf {A}^{\varepsilon}_{h} - \mathbf {A}^{\varepsilon}) \rVert_{L^{2}(\Omega)^{3}} = O(h^{k-1})\) for k-th order edge functions. Therefore ε should be chosen such that \(\varepsilon\sim h^{k-1}\) as the mesh is refined.
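This scaling can be turned into a simple rule of thumb; the helper below is a hypothetical sketch (the proportionality constant c is problem-dependent and not derived from the theory):

```python
def epsilon_for_mesh(h, k, c=1.0):
    """Pick the regularization parameter proportional to h**(k-1) so
    that the O(eps) regularization error stays at or below the
    O(h**(k-1)) discretization error under mesh refinement.
    c is a problem-dependent tuning constant (an assumption here)."""
    return c * h ** (k - 1)

# e.g. second-order edge functions (k = 2): eps shrinks linearly with h
eps_coarse = epsilon_for_mesh(0.1, 2)    # 0.1
eps_fine = epsilon_for_mesh(0.05, 2)     # 0.05
```

Note that for \(k=1\) the rule yields a constant ε, consistent with the lack of convergence observed for first-order edge functions.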
Numerical example:
We consider the same setup as in the previous section (cf. Figure 3), but we choose a different μ for the upper and lower hemisphere. The analytic solution is chosen as \(\mathbf {A}^{0} = (\sin y, 0, \mu\sin x)\), and \(\mathbf {j}^{i}\) is chosen such that \(\mathbf {A}^{0}\) fulfills (3)-(4) with \(\varepsilon = 0\).
We solve the system of linear equations using PARDISO for different values of ε and μ (as in the previous section we choose \(a_{F} = a_{F}^{(3)}\)). Figure 8 shows the total relative error \(\lVert\mu^{-1/2} \nabla\times(\mathbf {A}_{h}^{\varepsilon}-\mathbf {A}^{0}) \rVert_{L^{2}(\Omega)^{3}}\) as a function of ε for various mesh sizes. The solid lines show the error for \(\mu_{\mathrm{upper}}/\mu_{\mathrm{lower}}=10^{2}\) whereas the dashed lines show it for \(\mu_{\mathrm{upper}}/\mu_{\mathrm{lower}}=10^{7}\).
We note that the errors are almost identical for both choices of μ. Moreover, we observe that for \(\varepsilon< 10^{-3}\) the discretization error \(\lVert\mu^{-1/2}\nabla\times(\mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{\varepsilon}) \rVert_{L^{2}(\Omega)}\) (which here includes the error due to boundary approximation, cf. Remark 6) clearly dominates the regularization error \(\lVert\mu^{-1/2}\nabla \times(\mathbf {A}^{\varepsilon} - \mathbf {A}^{0}) \rVert_{L^{2}(\Omega)}\), whereas for \(\varepsilon> 10^{-3}\) the regularization error dominates. This is what we expect from the previous discussion. In fact, if we use Lemma 9 and fit the two hemispheres into a cube of side length \(l_{\mathrm{max}}=2\), we get \(\alpha= \pi^{2}/2\). The black, dashed line in Figure 8 visualizes the corresponding estimate (29); for large ε the behavior is clearly linear, as proven in Lemma 9, and the estimate remains valid even though \(\mathbf {g}_{D} \neq0\) and μ is not constant.
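The constant behind the black dashed line can be reproduced in a few lines; we assume here, consistent with the value π²/2 quoted above for \(l_{\mathrm{max}}=2\), that α scales like the square of \(\sqrt{2}\pi/l_{\mathrm{max}}\):

```python
import math

l_max = 2.0                      # side length of the bounding cube
# assumed scaling for the smallest nonzero curl-curl eigenvalue:
alpha = (math.sqrt(2) * math.pi / l_max) ** 2
# for l_max = 2 this gives pi**2 / 2 ≈ 4.9348, matching the text
```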
Remark 10
The same results are obtained if CG together with ILUPACK is used. For brevity we omit these results here.
We would like to point out that by using the direct solver PARDISO we were able to solve the resulting system of linear equations for ε as small as 10^{−10}, and that the time needed to solve the problem seems to be independent of ε (see Table 2). A similar result holds for CG with the ILUPACK preconditioner, where the system is solvable for arbitrarily small ε (cf. Section 6.1) and the solution time seems to be independent of ε for ε small enough.
We can thus choose ε (almost) arbitrarily small without affecting the discretization error \(\lVert\mu^{-1/2}\nabla \times(\mathbf {A}_{h}^{\varepsilon} - \mathbf {A}^{\varepsilon}) \rVert _{L^{2}(\Omega)}\) and without incurring rising costs for solving the resulting linear systems of equations. In other words, one should choose ε as small as possible such that the resulting linear system can still be solved.
Conclusion and outlook
We have proved a priori error estimates for the interior penalty formulation of the regularized curl-curl source problem (3)-(4): if the solution is approximated by k-th order edge functions we can expect at least convergence of order \(O(h^{k-1})\) (provided the exact solution is sufficiently smooth). In particular, for \(k=1\) no convergence was observed in a numerical experiment [1], which indicates that our result is sharp. The reason for this is that \(R^{k}\) does not span the full polynomial space \(\mathbb{P}^{k}\).
The bounds require the mesh to be quasi-uniform at the sliding interface but make no assumptions on how the submeshes abut there, nor does the error estimate depend on it. This is confirmed by the numerical experiments, where the approximation is observed to be stable independently of the way the submeshes intersect.
Moreover, the role of the regularization parameter ε has been investigated: for practical purposes one can choose ε (almost) arbitrarily small and solve the discrete problem with a direct solver or with the preconditioned conjugate gradient method. The error due to regularization is then dominated by the discretization error of the regularized problem and is negligible.
Outlook:
The proof of Theorem 2 suggests that it suffices to use second-order edge functions only in elements adjacent to the nonconforming interface or to boundary faces in order to achieve \(O(h)\) convergence. This would reduce the required number of unknowns drastically and should be pursued for practical applications.
Notes
 1.
Based on the results in Figure 4 we can expect similar behavior for other choices of \(a_{F}\).
 2.
The choices \(a_{F}^{(1)}\) and \(a_{F}^{(2)}\) yield qualitatively the same results. In particular the smallest nonzero eigenvalues also tend to 0 as \(\theta\to0\), cf. Figure 7.
 3.
The ILU factorization is built from the system matrix with \(\varepsilon=10^{-6}\) and the parameters for ILUPACK are: type sol = 0, partitioning=3, flags=1,1, inv. droptol=5, threshold ILU=0.1, condest=1e2, residual tol. = 5e-6.
References
 1.
Casagrande R, Winkelmann C, Hiptmair R, Ostrowski J. DG treatment of nonconforming interfaces in 3D curl-curl problems. SAM report 2014-40, ETH Zürich; 2014. https://www.sam.math.ethz.ch/sam_reports/reports_final/reports2014/2014-40.pdf. Accessed 5 Jan 2016.
 2.
Bebendorf M, Ostrowski J. Parallel hierarchical matrix preconditioners for the curl-curl operator. J Comput Math. 2009;27:624-41.
 3.
Rapetti F, Maday Y, Bouillault F, Razek A. Eddy-current calculations in three-dimensional moving structures. IEEE Trans Magn. 2002;38(2):613-6.
 4.
Wohlmuth BI. Discretization methods and iterative solvers based on domain decomposition. Berlin: Springer; 2001.
 5.
Perugia I, Schötzau D. The hp-local discontinuous Galerkin method for low-frequency time-harmonic Maxwell equations. Math Comput. 2003;72:1179-214.
 6.
Houston P, Perugia I, Schötzau D. Mixed discontinuous Galerkin approximation of the Maxwell operator: non-stabilized formulation. J Sci Comput. 2005;22/23:315-46.
 7.
Houston P, Perugia I, Schötzau D. Nonconforming mixed finite-element approximations to time-harmonic eddy current problems. IEEE Trans Magn. 2004;40(2):1268-73.
 8.
Buffa A, Houston P, Perugia I. Discontinuous Galerkin computation of the Maxwell eigenvalues on simplicial meshes. J Comput Appl Math. 2007;204(2):317-33.
 9.
Stenberg R. Mortaring by a method of J. A. Nitsche. In: Computational mechanics; 1998.
 10.
Di Pietro DA, Ern A. Mathematical aspects of discontinuous Galerkin methods. Berlin: Springer; 2012.
 11.
Casagrande R. Sliding interfaces for eddy current simulations. Master’s thesis, ETH Zürich; 2013.
 12.
Brenner SC, Scott LR. The mathematical theory of finite element methods. 3rd ed. Berlin: Springer; 2008.
 13.
Monk P. Finite element methods for Maxwell’s equations. New York: Oxford University Press; 2003.
 14.
Bergot M, Duruflé M. High-order optimal edge elements for pyramids, prisms and hexahedra. J Comput Phys. 2013;232:189-213.
 15.
Buffa A, Perugia I. Discontinuous Galerkin approximation of the Maxwell eigenproblem. SIAM J Numer Anal. 2006;44(5):2198-226.
 16.
Kaasschieter EF. Preconditioned conjugate gradients for solving singular systems. J Comput Appl Math. 1988;24:265-75.
 17.
Bollhöfer M, Saad Y. Multilevel preconditioners constructed from inverse-based ILUs. SIAM J Sci Comput. 2006;27(5):1627-50.
 18.
Schenk O, Gärtner K. Solving unsymmetric sparse systems of linear equations with PARDISO. In: Sloot PMA, Tan CJK, Dongarra JJ, Hoekstra AG, editors. Computational science - ICCS 2002, part II; 2002. p. 355-63.
 19.
Reitzinger S, Schöberl J. An algebraic multigrid method for finite element discretizations with edge elements. Numer Linear Algebra Appl. 2002;9:223-38.
 20.
Costabel M, Dauge M. Maxwell eigenmodes in tensor product domains (2006). https://perso.univ-rennes1.fr/monique.dauge/publis/CoDa06MaxTens2.pdf. Accessed 18 Apr 2016.
Acknowledgements
The authors acknowledge the contribution of Christoph Winkelmann to the Finite Element Framework HyDi. This work has been funded by the Commission for Technology and Innovation (CTI), Switzerland, funding application No. 14712.1 PFIW-IW, and ABB Switzerland Corporate Research.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
RH and JO proposed the initial ideas and motivations. Their elaboration (theoretical results and numerical simulations) and drafting the script was done by RC, supported by RH. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
MSC
 65N12
 65N30
 78M10
Keywords
 discontinuous Galerkin
 nonconforming meshes
 interior penalty
 electromagnetics
 magnetostatics
 CurlCurl operator