 Research
 Open Access
Nonlinear eigenvalue and frequency response problems in industrial practice
 Volker Mehrmann^{1} and
 Christian Schröder^{1}
https://doi.org/10.1186/2190-5983-1-7
© Mehrmann, Schröder; licensee Springer 2011
Received: 21 February 2011
Accepted: 27 July 2011
Published: 27 July 2011
Abstract
Background
We discuss the numerical solution of large scale nonlinear eigenvalue problems and frequency response problems that arise in the analysis, simulation and optimization of acoustic fields. We report on the cooperation with the company SFE in Berlin. We present the challenges in the current industrial problems and the state of the art of current methods.
Results
The difficulties that arise with current off-the-shelf methods are discussed and several industrial examples are presented.
Conclusions
It is documented that industrial cooperation is by no means a one-way street of transfer from academia to industry; the challenges arising in industrial practice also lead to new mathematical questions which actually change the mathematical theory and methods.
Keywords
 Nonlinear eigenvalue problem
 frequency response problem
 complex symmetric linear system
 block-Arnoldi method
 acoustic field computation
 automotive industry
 AMS Subject Classification: 65F18
 15A18
1 Background
1.1 Introduction
Traffic noise emissions by transport vehicles, such as cars, trains or airplanes, are one of the key factors restricting the quality of life in urban areas. Acoustic waves in dynamically moving vehicles arise from many different sources, such as, for example, the noise of engines or the vibrations of the structure due to external excitations like road contact or head wind. The reduction of such noise emissions is therefore an important factor in the design and the economic success of new products.
To minimize potential noise emissions already in an early design phase of new products (avoiding the costly production of prototypes), it is necessary to use simulation, optimization and control techniques based on mathematical models of the vehicle. Performing these tasks requires mathematical models that describe the acoustic field associated with the structure of the complete vehicle including its interaction with the environment such as, for example, the air in and around the vehicle or the road or rail contact. Furthermore, these models must make it possible to identify and influence potential noise sources and, as a vision for the future, to minimize the emissions.
Although much research is carried out in universities and industrial research and development departments, the model based minimization of the noise emissions of a complete car or train (not to mention airplane) including the majority of external and internal excitations is still a vision for the future. To achieve this, a joint effort is needed that includes the identification of sources, the construction of adequate mathematical models that incorporate all possible sources for acoustic waves, the analysis of these models, concerning robustness of their descriptions and their potential for optimization techniques, the development of numerical methods for the simulation and optimization of these models, as well as the implementation of these techniques as production software on modern high performance computers for industrial use.
1.1.1 Modeling
The modeling of acoustic fields inside or outside of dynamically moving vehicles typically uses coupled systems of partial differential equations (PDEs), for example, for the generation of noise by vibrating parts, surface contact, engine noise or head wind. These methods are well established in the engineering community using commercial finite element packages [1, 2]. However, the solution techniques for the resulting systems of PDEs still have many deficiencies, in particular concerning appropriate solvers for the linear and nonlinear systems and eigenvalue problems that have to be solved after discretization. These turn out to be extremely large and ill-conditioned (that is, extremely sensitive with respect to perturbations in the problem-defining data) when a reasonably fine three-dimensional model is used; they may consist of hundreds of millions of equations.
An even greater challenge is to use these models and methods within an optimization loop. There is no chance to use classical off-the-shelf optimization methods for these problems; the problem size is simply too large. Instead, one currently applies model reduction techniques [3, 4] to approximate the given fine model that is used for the simulation by a rather crude model that is used in the optimization.
To make such an approach viable within a design environment, where not only the geometry and topology of the vehicle and its material parameters are subject to changes, but also the interaction with the environment is rather complicated, it is necessary to develop integrated mathematical techniques for the modeling, simulation, and optimization that make as much use as possible of the properties of the underlying physical model and to transfer this into numerical methods and production codes.
1.1.2 Content of the paper
In this paper we will discuss some of the work and some of the challenges in a long-term cooperation with the company SFE GmbH in Berlin, Germany, which produces software for the simulation and optimization of acoustic fields.
The cooperation involves the development (and implementation on current highperformance computers) of numerical methods for the solution of linear systems and nonlinear eigenvalue problems arising from discretized partial differential equations modeling noise emissions of cars and trains.
The paper is organized as follows.
Section 1.2 briefly describes the modeling of acoustic fields inside a car as a coupling of fluid and structure vibrations. In Section 2.1 we study the direct frequency response problem, that is, the numerical solution of a series of large, sparse, complex symmetric linear systems with a varying parameter. We will address in particular in-core/out-of-core storage methods, preconditioning and parallel execution aspects.
Section 2.2 treats modal reduction to reduce the dimension of the generated models. This requires the computation of all eigenvalues of a large sparse nonlinear eigenvalue problem in a given region of the complex plane.
The described problems and results demonstrate many challenges in the transfer of state-of-the-art numerical methods into industrial practice, and show that there is mutual benefit from such a cooperation between industry and academia.
1.2 Mathematical modeling of interior car acoustics
One of the most difficult tasks in the modeling is the incorporation of appropriate damping and absorption, because these depend in a rather complicated way on the materials used inside the car and on the surface geometry of the interior; at the same time, these are among the terms where the influence of the optimization is strongest.
where ${M}_{f}={M}_{f}^{T}$ is a positive definite mass matrix, ${K}_{f}={K}_{f}^{T}$ is a positive definite stiffness matrix, ${D}_{f}(\alpha )$ is a symmetric positive semidefinite damping/absorption matrix, and ${D}_{sf}$ describes the fluid structure coupling.
It should be noted that for the fluid model, the finite element discretization applied to the weak form of the partial differential equation is used. In contrast, in the model describing the vibration of the structure, industrial and engineering practice typically employs a direct discrete finite element model. This leads to one of the challenges that we will discuss below.
where ${f}_{e}$ is a (discrete) external load and ${f}_{p}$ is the pressure load. Here ${M}_{s}$ is a symmetric positive definite mass matrix, and ${D}_{s}$ is a symmetric positive semidefinite damping matrix; both are real. The matrix ${K}_{s}$ has the form ${K}_{s}={K}_{1}(\omega )+\mathrm{i}{K}_{2}$ with real symmetric ${K}_{1}$, ${K}_{2}$, where ${K}_{1}$ is the positive semidefinite stiffness matrix. It is often frequency dependent, to model nonlinear material behavior. The matrix ${K}_{2}$ models hysteretic damping, that is, damping that is proportional to the displacement (instead of the velocity) but in phase with the velocity [6]. Typically, the matrix ${K}_{2}$ is of very small rank.
Although ${M}_{s}$ is positive definite in theory, the matrix encountered in practice is highly singular due to the fact that rotational masses are omitted. On the positive side, it is block diagonal with small blocks.
Typically in this model the dimension of ${M}_{s}$ is a factor of 1,000–10,000 larger than that of ${M}_{f}$; the mass and stiffness matrices depend strongly on the geometry and topology of the car and on the type of finite elements that are used. Since the structure is essentially modeled with a fine (pretty much uniform) mesh, the matrices have dimensions of several millions. The stiffness matrix also depends on the material parameters, while the damping and coupling matrices depend on geometry, topology and material parameters.
2 Results and discussion
2.1 Direct frequency response
This linear system, which (for a reasonable structure) has several millions of equations, typically has to be solved for a large frequency range of $0,\dots ,1{,}000$ Hz in small frequency steps and, depending on the type of excitation, for many right hand sides (load vectors).
Based on the frequency response analysis it is then possible to detect places where the excitation leads to large noise emissions (hot spots), and this approach can be used to improve or even optimize the frequency response behavior within an optimization loop.
Since this system of equations has to be solved for many right hand sides and over a large frequency range, and all this within an optimization loop, a very efficient solver is necessary. This solver has to be able to recycle information from nearby problems (when stepping through the frequency range and modifying parameters in an optimization) and has to work efficiently in a modern multiprocessor, multicore hardware environment. Altogether, this is really a lot to ask for from a linear system solver and makes the use of black box solvers extremely difficult.
2.1.1 Stateoftheart in linear system solvers
Modern techniques for the solution of linear systems in industrial applications are typically a combination of methods that uses the best of each of the various available classes of methods.
The first class of methods are highly efficient direct solution methods that use Gaussian elimination with partial pivoting combined with other techniques from graph theory to achieve optimal performance, make efficient use of the sparsity structure to avoid fill-in, and save storage. Many of them are even implemented for use on multiprocessor machines. Well-known packages include UMFPACK [7], PARDISO [8], MUMPS [9], WSMP [10], or the HSL collection (for example, HSL_MA78 [11]), to name a few. In view of possibly many right hand sides they are a clear option, despite the fact that for the size and type of problems considered in industrial practice, the storage demands are so high that they typically can only work in an out-of-core setting, that is, the matrix factors are stored on hard disk instead of in main memory. Another difficulty is that a new factorization has to be computed for every frequency and for every modification in the optimization loop. Finally, the bad scaling and the fact that the problems become increasingly ill-conditioned for large frequencies present a real challenge, because the desired solution accuracy may not be realizable. Some of the packages provide specialized routines for complex symmetric systems, which have the potential to halve the storage and computational requirements, but may also increase the complexity of pivoting strategies.
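The appeal of direct methods for many right hand sides can be illustrated by the following sketch (Python/SciPy, with an illustrative toy matrix, not data from the application): a complex symmetric matrix $P(\omega)$ is factored once and the factors are reused for a whole block of load vectors.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
rng = np.random.default_rng(0)
# Toy stand-ins for the mass, damping and stiffness matrices (assumptions):
M = sp.identity(n, format="csc")
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
D = 0.01 * K                          # illustrative proportional damping
omega = 2 * np.pi * 50.0              # one fixed frequency (50 Hz)
P = (-omega**2 * M + 1j * omega * D + K).tocsc()   # complex symmetric

lu = spla.splu(P)                     # factor once (out-of-core in practice)
B = rng.standard_normal((n, 10)).astype(complex)   # 10 load vectors
X = lu.solve(B)                       # reuse the factors for all of them
residual = np.linalg.norm(P @ X - B) / np.linalg.norm(B)
```

In the industrial setting the factorization step dominates, so being able to amortize it over all load vectors at one frequency is what makes direct solvers competitive despite their storage demands.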
The second class of methods are iterative methods of Krylov subspace type like the generalized minimal residual method (GMRES) [12], the biconjugate gradient method in its stabilized form (BICGSTAB) [13], or the quasi-minimal residual method (QMR) [14]. A variant of the conjugate gradient method for complex symmetric systems like (9) is CSYM [15]. In principle these methods would be very good candidates, since the linear systems are sparse and matrix-vector multiplications are relatively cheap. Also, if $\hat{f}$ and K are independent of ω, then one can exploit the shift invariance property of Krylov subspaces [16, 17]. However, often $\hat{f}$ or K do depend on ω, and without a good preconditioner the convergence of iterative methods (in particular for large frequencies) is dramatically slow or not realizable.
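As an illustration of the preconditioned Krylov approach, the following sketch solves one toy complex symmetric system with GMRES and an incomplete-LU preconditioner; the matrix is an assumption for demonstration, not the SFE system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
Mmass = (0.5 / (2 * np.pi * 10.0)**2) * sp.identity(n, format="csc")  # toy mass
omega = 2 * np.pi * 10.0
# Mildly indefinite complex symmetric system, Helmholtz-like in character:
P = (-omega**2 * Mmass + 1j * 1e-3 * omega * K + K).tocsc()

ilu = spla.spilu(P, drop_tol=1e-8)                # incomplete LU factors
prec = spla.LinearOperator(P.shape, matvec=ilu.solve, dtype=P.dtype)
b = np.ones(n, dtype=complex)
x, info = spla.gmres(P, b, M=prec, atol=0.0)      # info == 0: converged
rel_res = np.linalg.norm(P @ x - b) / np.linalg.norm(b)
```

With a good preconditioner the iteration converges in a handful of steps; without one, for the large frequencies described above, it may not converge at all.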
The third class of potential methods (exploiting the fact that there is a partial differential equation in the background) are multigrid or multilevel methods [18–20]. Unfortunately, despite the fact that they are used efficiently in the solution of wave propagation problems, such as the Helmholtz or Maxwell equations [21], they cannot be used in a simple way in the described acoustic field problems, since the discrete FEM modeling of the structure does not provide a nice hierarchy of basis functions. If such a hierarchy were available, then these methods would lead to very good preconditioners for the Krylov subspace methods. However, with the current state of discrete FEM modeling (being engineering practice for decades) it is extremely difficult to incorporate them into current production solvers. This is true even for algebraic multigrid methods like the Ruge-Stüben method [22] or hypre [23], due to the difficulty of having to solve for a large frequency range and many right hand sides, all inside an optimization loop. Needless to say, the incorporation of these solvers into a multiprocessor, multicore industrial software tool is yet another challenge.
The fourth class of methods which have made tremendous progress in the last decades are the adaptive finite element methods, see, for example, [24]. These refine the computational grid according to a priori and a posteriori error estimates and if implemented properly avoid globally fine meshes and therefore the extremely large linear systems and eigenvalue problems in the first place. However, the construction, analysis, and implementation of such methods for their use inside an industrial package, including the treatment of a full frequency range and many right hand sides, is still in its infancy and requires a major research effort which fortunately is currently addressed in several research projects worldwide.
When our cooperation with SFE started, we essentially made the above analysis of the available methods and realized the following major obstacles.
- The problems are badly scaled and become increasingly ill-conditioned as ω grows;
- for some parameter constellations the system becomes exactly singular, with possibly inconsistent right hand sides;
- typically there are many right hand sides;
- classical off-the-shelf iterative methods do not work well, or fail completely;
- direct solution methods have to store their factors out-of-core and cannot easily recycle information from previous frequency or optimization steps;
- no multilevel or adaptive grid refinement is applicable; the methods must be matrix-based;
- the matrices M, ${K}_{1}$ are often slightly indefinite because of rounding errors during their creation.
2.1.2 What did we do?
With all the obstacles of the problem and all the deficiencies of the current methods, the development and implementation of an appropriate linear solver for acoustic field frequency response problems is a major challenge. Furthermore, as in all industrial projects, there was a deadline. In such a circumstance, the only possibility is to compromise between the optimality of a given method for a given problem, the provability of success, and the practical needs in the industrial environment.
Our compromise, since we had to deal with given matrices (as opposed to partial differential equations), was to develop and implement (in the SFE software environment) a preconditioned subspace recycling Krylov subspace method which has the following features:
For small values of ω, up to about $2\pi \cdot 60$ corresponding to a frequency of 60 Hz, typically only 3-6 iteration steps per frequency were necessary and the preconditioner could be kept unchanged for the whole frequency range. But the number of iteration steps per frequency grows substantially the larger ω gets, so that more and more new preconditioners have to be computed. For large frequencies close to 1,000 Hz, the preconditioned Krylov method tends to be extremely slow or not convergent at all. As a consequence we constructed a hybrid method, where from a certain frequency onwards MUMPS is used as a direct solver for the system with the full matrix $P(\omega )$. Furthermore, it was decided to make use of the multicore environment and to treat several linear systems simultaneously in a distributed fashion. One group of processors solves the systems corresponding to the lower frequencies using the described iterative recycling method. Meanwhile, the other processor groups treat the high-frequency systems with a full direct solve for each value of ω.
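The hybrid strategy can be sketched as a simple dispatcher; the function names, the crossover frequency, and the stand-in solvers below are illustrative, not SFE's interfaces.

```python
def solve_frequency_response(freqs_hz, solve_iterative, solve_direct,
                             crossover_hz=60.0):
    """Dispatch each frequency to the iterative or the direct solver."""
    results = {}
    for f in freqs_hz:
        solver = solve_iterative if f <= crossover_hz else solve_direct
        results[f] = solver(f)
    return results

# Stand-in solvers that just record which path was taken:
log = []
def low(f):
    log.append(("iterative", f))
    return "x_iter"
def high(f):
    log.append(("direct", f))
    return "x_direct"

out = solve_frequency_response([10.0, 50.0, 500.0], low, high)
```

In the actual setting the two branches run concurrently on different processor groups, so the expensive direct solves for high frequencies overlap with the cheap iterative sweep over the low frequencies.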
2.1.3 Evaluation of our approach
Since there are commercial packages available, a question that may arise is why we need new numerical methods and solvers in the first place. From the point of view of the company SFE this is clear: they wanted their own solver in their product, and the solution method including the implementation was satisfactory for their needs. But in many respects the current methods are unsatisfactory from a scientific point of view. The discussed industrial problems present some of the current grand challenges in the field, and the commercial packages are by no means able to solve these problems in a completely satisfactory way. To always get a more-or-less reasonable solution, and to be efficient, they often compromise on the accuracy of the results, which is certainly inadequate from a scientific point of view. Thus the cooperation with the industrial partner triggered new research questions for the academic world.
It became clear during the project that the concept of recycling in iterative methods, that is, the reuse of information that was already computed for other frequencies or in the course of an optimization procedure, is not well enough understood. As a consequence, motivated by the project with SFE, a new research project [26] in the DFG Research Center Matheon was started to further investigate the basic mathematical principles and to understand how they can be incorporated into new efficient methods [27].
As a second major obstacle, we identified the use of discrete FEM modeling in structural engineering. It would be much easier if adaptive FEM were usable for the discussed class of problems, and it would also be good to have a grid hierarchy that allows the use of efficient multilevel preconditioners [18–20]. Despite the high research activity in this field, this is a major challenge for the described problem classes, and almost no analysis or methods are available. We will discuss this in more detail in the section devoted to eigenvalue computation, but again we see a need for stronger cooperation between academia and industry in this area of structural engineering, to transfer new ideas that are being developed now into industrial practice.
2.2 Modal reduction
with $M$, $D$, $K$ as in (9). The methods discussed below assume that $Q$ is quadratic in λ for the eigenvalue computation, that is, the nonlinear dependency of K on the frequency is ignored. One easily verifies that if ${\lambda}_{0}$ is an eigenvalue of $Q(\lambda )$, that is, $Q({\lambda}_{0})x=0$ for some nonzero x, then ${x}^{T}$ is the corresponding left eigenvector. Essentially no other general properties of the eigenvalue problem are available; in particular, there is unfortunately no immediate variational characterization of the eigenvalues/eigenfunctions, as there is in the undamped case [28].
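The stated left-eigenvector property can be checked numerically: for complex symmetric $M$, $D$, $K$ one has $Q(\lambda)^T = Q(\lambda)$, so a right eigenvector is also a left eigenvector with respect to the transpose (no conjugation). A small random toy example, using a companion linearization (an illustrative choice) to obtain an eigenpair:

```python
import numpy as np
import scipy.linalg as la

rng = np.random.default_rng(1)
n = 6
def c_sym(rng, n):
    """Random complex *symmetric* (not Hermitian) matrix."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A + A.T

M, D, K = (c_sym(rng, n) for _ in range(3))

# Companion linearization A z = lambda * B z with z = [x; lambda*x]:
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -D]])
B = np.block([[I, Z], [Z, M]])
evals, evecs = la.eig(A, B)
mags = np.abs(evals)
mags[~np.isfinite(mags)] = np.inf
k = int(np.argmin(mags))               # a finite eigenvalue of small modulus
l0, x = evals[k], evecs[:n, k]         # upper block is the eigenvector of Q
Q0 = l0**2 * M + l0 * D + K
right = np.linalg.norm(Q0 @ x)         # right eigenvector residual
left = np.linalg.norm(x @ Q0)          # x^T Q(l0): transpose, no conjugate
```

Both residuals are at round-off level, confirming that $x^T Q(\lambda_0) = 0$ whenever $Q(\lambda_0)x = 0$ in the complex symmetric case.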
Since one is interested only in part of the spectrum, a natural idea is to identify a space associated with eigenvectors and generalized eigenvectors associated with the important eigenvalues, and to project the problem into this subspace. This is a model reduction approach called modal reduction which, if efficiently implemented, can save a huge amount of computing time and storage, despite the fact that it is partially heuristic.
2.2.1 Complex symmetric quadratic eigenvalue problems
and would still be complex symmetric. In the context of model reduction, the requirements for the reduced model are that the projected system is a good approximation to the large scale system for a large frequency range, and also for a large set of parameter variations. Furthermore, it has to be of small enough dimension that classical methods for nonlinear nonsmooth optimization can be applied. Again this is a lot to ask, since currently for large scale problems there are really no methods available that are guaranteed to obtain all the eigenvalues of (10) in a specified region of the complex plane. For small dense problems one could employ the sign function method [29, 30], but this would require storing full dense inverses of matrices of the given class, which is certainly not possible in the described acoustic field problems.
The currently used industrial techniques typically solve a simplified problem: for example, the eigenvalues and associated eigenvectors in the desired region for the undamped/uncoupled problem are used for the projection, or an algebraic multilevel substructuring method (AMLS) is used that exploits the structure of the matrices as they arise from the discrete FEM [31]. All these techniques are partially heuristic, since there is no guarantee that all the desired eigenvalues are captured or that the eigenvalues from the projected system are close to those of the original physical system or of the FEM-discretized system; that is, in general, currently no error bounds are available. To generate such error bounds is a major challenge for the academic community, and research in this direction has started in the project [32] with first results in [33, 34].
Furthermore, the numerical solution of the large scale nonlinear eigenvalue problem (10) itself also presents a mathematical challenge, for several reasons.
First of all, the mass matrix is singular, so the problem has eigenvalues at ∞, which often cause convergence problems for the iterative methods. Second, there are also several (typically six) eigenvalues at 0, corresponding to the six free degrees of motion of the whole structure, three each for translation and rotation. In industrial practice the eigensolver is furthermore often used for model verification, by checking whether there are exactly six eigenvalues at 0. If the model is flawed, the singularity may be even higher.
The major challenge, however, is that we want the eigenvalues near 0, and it is well known that classical iterative methods for large sparse eigenvalue problems, like the implicitly restarted Arnoldi method [35], typically converge fast only to the eigenvalues at the periphery of the spectrum [36]. Thus, either a shift-and-invert technique that transforms the problem and maps the desired eigenvalues to the periphery is necessary, or other techniques like Newton’s method [37, 38] or the Jacobi-Davidson method [39, 40] have to be used. All these methods require the solution of linear systems of the form $(\hat{\lambda}^{2}M+\hat{\lambda}D+K)x=b$ with a given shift point $\hat{\lambda}$. But this is the problem that we wanted to avoid in the first place, and hence a vicious circle is closed.
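The shift-and-invert transformation maps an eigenvalue λ of a matrix A to $\mu = 1/(\lambda - \sigma)$, so the eigenvalues closest to the shift σ become the peripheral ones that Arnoldi finds first. A tiny dense toy illustration (for the quadratic problem the same idea is applied to a linearization):

```python
import numpy as np

A = np.diag([0.1, 1.0, 5.0, 40.0])   # toy spectrum; 0.1 is closest to 0
sigma = 0.0
C = np.linalg.inv(A - sigma * np.eye(4))
mu = np.linalg.eigvals(C)            # mu = 1/(lambda - sigma)
dominant = sigma + 1.0 / mu[np.argmax(np.abs(mu))]
```

The price, as noted above, is that every application of $(A - \sigma I)^{-1}$ requires a linear solve with exactly the kind of shifted system one wanted to avoid.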
2.2.2 What did we do?
Having assessed the various options and their potential advantages and drawbacks, we decided to develop a new method based on the following concepts and to implement it into the SFE environment.
First of all, since the methods that work directly on the quadratic eigenvalue problems, for example, [28, 41, 42], are not yet as mature as those for linear eigenvalue problems, the problem is linearized by introducing a new variable λx and turning (10) from a quadratic into a linear eigenvalue problem.
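The linearization step can be sketched as follows: with the new variable $y = \lambda x$, the quadratic problem $(\lambda^2 M + \lambda D + K)x = 0$ becomes a linear pencil in a doubled dimension. The block structure used here is one common companion form, chosen for illustration; the particular linearization used in the project is not specified here.

```python
import numpy as np
import scipy.linalg as la

rng = np.random.default_rng(2)
n = 5
M, D, K = (rng.standard_normal((n, n)) for _ in range(3))
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -D]])      # first block row encodes y = lambda*x
B = np.block([[I, Z], [Z, M]])
evals, evecs = la.eig(A, B)           # generalized problem A z = lambda B z
mags = np.abs(evals)
mags[~np.isfinite(mags)] = np.inf
k = int(np.argmin(mags))
l0, z = evals[k], evecs[:, k]
x, y = z[:n], z[n:]
quad_residual = np.linalg.norm((l0**2 * M + l0 * D + K) @ x)
structure_error = np.linalg.norm(y - l0 * x)   # lower block is lambda*x
```

The test that the lower block of the eigenvector really equals $\lambda x$ is exactly the structure that the covering algorithm below exploits when comparing eigenpairs.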
This is done by the block Arnoldi method [43] which requires the application of the matrix $\mathcal{A}$ to given blocks of vectors during the generation of the Krylov subspace. This involves a solution of a linear system with the complex symmetric matrix $Q(\sigma )$. We again use the direct solver MUMPS to compute a complex $LD{L}^{T}$ factorization.
which is known to contain, for increasing m, increasingly good approximations to eigenvectors corresponding to eigenvalues of $\mathcal{A}$ of maximum absolute value, which correspond to the eigenvalues of $Q(\lambda )$ closest to σ. In the classical Arnoldi method, the starting block B is a vector. When B consists of several, say ${n}_{b}$, columns, the resulting method is the block Arnoldi method. The matrix B can be chosen in an (almost) arbitrary way, but if eigenvector or subspace approximations are known, using these will speed up convergence. Among the advantages of the block method is that clusters of eigenvalues are handled better [44]. Furthermore, all vector-vector operations become matrix-matrix operations, which can be implemented much more efficiently by use of BLAS level 3 routines [45], and the matrix-vector multiplication with the large matrix $\mathcal{A}$ can be carried out with a block of vectors. Finally, the heuristic part of the algorithm is more reliable in the block case, see below.
In the actual implementation of the block Arnoldi algorithm the terms ${\mathcal{A}}^{i}B$ are never explicitly formed. Instead an orthonormal basis ${Q}_{m}=[{V}_{1},{V}_{2},\dots ,{V}_{m}]$ of ${\mathcal{K}}_{m}(\mathcal{A},B)$ is generated by setting ${V}_{1}$ to be an orthonormal basis of $span(B)$ followed by iteratively forming ${V}_{i+1}$ by orthonormalization of $\mathcal{A}{V}_{i}$ against ${V}_{1},\dots ,{V}_{i}$. Classically, a variant of the Gram-Schmidt method is used for this task [35], but any other orthonormalization procedure may be employed. We use block Householder reflections [46, 47] that are very stable and attain high efficiency by using BLAS3 operations. Collecting all the ${V}_{i}$ and the orthonormalization coefficients ${H}_{i,j}$ results in the well-known Arnoldi relation $\mathcal{A}{Q}_{m}={Q}_{m}{H}_{m}+{V}_{m+1}{H}_{m+1,m}{E}_{m}^{T}$ with an $(m\times m)$-block Hessenberg matrix ${H}_{m}={({H}_{i,j})}_{i,j=1}^{m}$. The eigenvalues of ${H}_{m}$, called Ritz values, are used as approximations of eigenvalues of $\mathcal{A}$. Likewise, the eigenvectors of ${H}_{m}$, multiplied with ${Q}_{m}$, are used as approximate eigenvectors of $\mathcal{A}$.
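A compact sketch of the block Arnoldi iteration just described, for the real dense case. For brevity, plain NumPy QR (itself Householder-based in LAPACK) stands in for the block Householder orthonormalization, and a two-pass block Gram-Schmidt does the orthogonalization against previous blocks; the matrix is a random toy. The test verifies the Arnoldi relation and the orthonormality of the basis.

```python
import numpy as np

def block_arnoldi(A, B, m):
    """Block Arnoldi: returns Q with (m+1)*nb orthonormal columns and a
    block Hessenberg H with A @ Q[:, :m*nb] == Q @ H (up to round-off)."""
    n, nb = B.shape
    Q, _ = np.linalg.qr(B)                     # V_1 spans the starting block
    H = np.zeros(((m + 1) * nb, m * nb))
    for i in range(m):
        W = A @ Q[:, i * nb:(i + 1) * nb]
        for _ in range(2):                     # two-pass block Gram-Schmidt
            C = Q.T @ W
            H[:Q.shape[1], i * nb:(i + 1) * nb] += C
            W = W - Q @ C
        Vnext, R = np.linalg.qr(W)             # next orthonormal block
        H[(i + 1) * nb:(i + 2) * nb, i * nb:(i + 1) * nb] = R
        Q = np.hstack([Q, Vnext])
    return Q, H

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 60))
B0 = rng.standard_normal((60, 4))
m, nb = 5, 4
Q, H = block_arnoldi(A, B0, m)
rel = np.linalg.norm(A @ Q[:, :m * nb] - Q @ H) / np.linalg.norm(A)
orth = np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1]))
```

The eigenvalues of the leading $m n_b \times m n_b$ part of H are the Ritz values used below; the sketch omits restarting and deflation.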
A drawback of the block Arnoldi method is the necessity to store the basis ${Q}_{m}$, which grows with increasing m. A typical way out of this problem is to restart the algorithm at some point. Instead of restarting with a single new starting block, it is possible to restart with a whole Arnoldi relation. For the vector Arnoldi method, two such implicit restarting schemes are common, one using a filter polynomial [35] and one using a reordering of the Schur form of ${H}_{m}$ [48]. We use the latter approach, since the former does not generalize elegantly to the block case [46].
The heuristic part of our method is the assumption that the block Arnoldi method really finds all eigenvalues in a circle around the shift σ. While quite often the eigenvalues do indeed appear and converge in the order of their distance to the shift, it is not rare that one eigenvalue or a group of eigenvalues converges more slowly than other eigenvalues farther away. In these cases, however, the missed eigenvalues are usually present as (yet) unconverged Ritz values. Therefore, we use the distance of the unconverged Ritz value closest to the shift as the radius of a circle that is trusted to contain no missed eigenvalues.
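The trust-circle heuristic can be sketched in a few lines; the residual-based convergence test and all names are illustrative, not the SFE implementation.

```python
import numpy as np

def trusted_radius(ritz, residuals, sigma, tol=1e-8):
    """Radius around sigma within which no eigenvalue is believed missed:
    the distance to the closest *unconverged* Ritz value."""
    unconv = [abs(t - sigma) for t, r in zip(ritz, residuals) if r > tol]
    return min(unconv) if unconv else np.inf

ritz = np.array([0.5 + 0.1j, 1.2 + 0.0j, 2.5 - 0.3j])
res = np.array([1e-12, 1e-3, 1e-11])     # middle pair not yet converged
r = trusted_radius(ritz, res, sigma=0.0)
```

Here the unconverged Ritz value at 1.2 limits the trusted circle, even though the Ritz value at 2.5 − 0.3i, farther from the shift, has already converged.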
In the implementation, we repeatedly run the block Arnoldi method for different shifts, possibly several at once in a distributed setting. Figure 3 (right) shows the situation after a few iterations. Large parts of the trapezoidal region are covered, leaving only some small remaining regions to be searched. New shifts are placed inside the largest such white regions, until the whole trapezoidal region of interest is covered by trusted circles.
Of course, with such a covering approach an eigenpair could be computed by more than one Arnoldi run for different shifts. For that reason the freshly discovered eigenpairs have to be checked for being copies of previously found pairs. To achieve this we consider a new eigenpair $({\lambda}_{\ast},{x}_{\ast})$ to be a copy if ${[{x}_{\ast}^{T},{\lambda}_{\ast}{x}_{\ast}^{T}]}^{T}$ lies almost in the span of the vectors ${[{x}_{\mathrm{old}}^{T},{\lambda}_{\mathrm{old}}{x}_{\mathrm{old}}^{T}]}^{T}$ corresponding to every known eigenvalue ${\lambda}_{\mathrm{old}}$ sufficiently close to ${\lambda}_{\ast}$.
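A sketch of this duplicate test; the distance and dependence thresholds are illustrative choices, not the values used in the SFE implementation.

```python
import numpy as np

def is_copy(lam, x, known, dist=1e-2, tol=1e-6):
    """True if (lam, x) duplicates a known pair with a nearby eigenvalue."""
    z = np.concatenate([x, lam * x])
    z = z / np.linalg.norm(z)
    # Stacked vectors [x_old; lam_old*x_old] of eigenvalues close to lam:
    cols = [np.concatenate([xo, lo * xo]) for lo, xo in known
            if abs(lo - lam) < dist]
    if not cols:
        return False
    Qb, _ = np.linalg.qr(np.column_stack(cols))
    resid = np.linalg.norm(z - Qb @ (Qb.conj().T @ z))   # distance to span
    return bool(resid < tol)

x = np.array([1.0, 2.0, 0.5])
known = [(0.3, x / np.linalg.norm(x))]
same = is_copy(0.3, 2.0 * x, known)          # rescaled copy of a known pair
other = is_copy(0.3, np.array([0.0, 0.0, 1.0]), known)
```

Stacking x with λx distinguishes distinct eigenvectors that happen to belong to nearby eigenvalues, and makes the test insensitive to the arbitrary scaling of eigenvectors.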
Number of required shifts and iterations to find all eigenvalues between 0 and $\{50,\dots ,250\}$ Hz.
0 Hz to:                       50 Hz   100 Hz   150 Hz   200 Hz   250 Hz
Number of found eigenvalues       20       52      129      217      346
Number of shifts                   1        3        3        5        7
Number of iterations              39      139      136      294      437
2.2.3 Evaluation of our approach
Our approach certainly does not use many novel ideas, but instead builds on mature and proven concepts. The method can run in a distributed setting, processing several shifts at once, each one running in several processes. The matrix factorizations can be kept out-of-core. Moreover, since the basis vectors are explicitly orthogonalized by a stable Householder scheme, the generated block Arnoldi basis is orthonormal to working precision. This is in sharp contrast to the basis generated by the nonsymmetric Lanczos method, which typically loses linear independence after enough iterations and leads to so-called spurious eigenvalues [49, 50].
In our experiments the heuristic choice of the radii of trusted circles worked very well. In thousands of test cases with problems from SFE as well as randomly generated problems, it happened only once that this approach missed an eigenvalue. This happened with a block size of one, that is, with the vector Arnoldi method. The method worked fine for this case if a block size of at least two was used. In our implementation the default block size is 8.
Unfortunately, the improved robustness of the block-Arnoldi method compared with the standard Arnoldi method comes at the price of increased memory requirements to store the basis. This downside is somewhat mitigated, however, by the use of restarts.
As another downside of our approach, so far only quadratic eigenvalue problems can be solved; for truly nonlinear problems with ω dependence in K the method is not directly applicable. On the other hand, the method can be applied to fluid or structure subsystems separately, or to the complete coupled system, and it tolerates a singular mass matrix.
3 Conclusions: the two-way street of industrial cooperation
We discussed the modeling, the simulation and (at least as a vision) the optimization of acoustic fields. As an example of our industrial cooperation we studied frequency domain computations and modal reduction methods for the acoustic field inside a car, as well as the challenges of the resulting linear systems and eigenvalue problems. One lesson from the project was that the methods to be used in industrial practice cannot be constructed and implemented using textbook approaches. For instance, very often tricks have to be employed that increase the efficiency of the computation but lack a full mathematical understanding. This can lead to misunderstandings between engineers, programmers and mathematicians. Stronger communication and cooperation between these groups is necessary to address the described challenges. If this is achieved, then all sides benefit from such a cooperation. The work with SFE GmbH on the frequency response problem for interior acoustic field computation started out as a clear transfer project (a one-way street), with the idea to transport the knowledge and know-how about current linear system and eigenvalue solver technologies available in the academic environment into an industrial software environment.
But as we have discussed, already early in the cooperation many new research topics appeared that could not be treated within the current project, turning it into a two-way street. Examples include recycling methods for sequences of slowly changing linear systems, the updating of preconditioners, and the guaranteed location of all eigenvalues inside a region of the complex plane, to name a few.
Declarations
Acknowledgements
We thank H. Zimmer and A. Hilliges from our industrial partner SFE for the permission to use the figures obtained in the cooperation, and we thank Tobias Brüll, Elena Teidelt, and Matthias Miltenberger for their help with the implementation. Moreover, we thank two anonymous referees for their valuable comments on a preliminary version of this paper.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.