Nonlinear eigenproblem

In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form

$$M(\lambda)\, x = 0,$$

where $x \neq 0$ is a vector, and $M$ is a matrix-valued function of the number $\lambda$. The number $\lambda$ is known as the (nonlinear) eigenvalue, the vector $x$ as the (nonlinear) eigenvector, and $(\lambda, x)$ as the eigenpair. The matrix $M(\lambda)$ is singular at an eigenvalue $\lambda$.

Definition

In the discipline of numerical linear algebra the following definition is typically used.[1][2][3][4]

Let $\Omega \subseteq \mathbb{C}$, and let $M : \Omega \rightarrow \mathbb{C}^{n \times n}$ be a function that maps scalars to matrices. A scalar $\lambda \in \mathbb{C}$ is called an eigenvalue, and a nonzero vector $x \in \mathbb{C}^n$ is called a right eigenvector, if $M(\lambda) x = 0$. Moreover, a nonzero vector $y \in \mathbb{C}^n$ is called a left eigenvector if $y^H M(\lambda) = 0^H$, where the superscript $^H$ denotes the Hermitian transpose. The definition of the eigenvalue is equivalent to $\det(M(\lambda)) = 0$, where $\det$ denotes the determinant.[1]
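The determinant characterization suggests a direct numerical approach for small dense problems: apply a scalar root-finder to $\det(M(\lambda))$. The following Python sketch (not part of the original article; the delay-type form of $M$, the matrices, and the root bracket are arbitrary illustrative assumptions) finds one eigenpair this way.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative delay-type problem: M(lam) = A0 - lam*I + exp(-lam) * A1.
# A0, A1 and the bracket [1, 3] are arbitrary example choices.
A0 = np.array([[2.0, 1.0], [1.0, 2.0]])
A1 = np.array([[0.0, 0.0], [0.0, 1.0]])

def M(lam):
    return A0 - lam * np.eye(2) + np.exp(-lam) * A1

# det(M(lam)) changes sign on [1, 3], so a real eigenvalue lies inside.
lam = brentq(lambda z: np.linalg.det(M(z)), 1.0, 3.0)

# The eigenvector spans the nullspace of M(lam): take the right singular
# vector belonging to the smallest singular value.
x = np.linalg.svd(M(lam))[2][-1]
print(lam, np.linalg.norm(M(lam) @ x))  # residual close to machine precision
```

Determinant scanning of this kind only scales to small problems; practical solvers, such as the packages listed under Mathematical software below, avoid forming the determinant.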

The function $M$ is usually required to be a holomorphic function of $\lambda$ (in some domain $\Omega$).

In general, $M(\lambda)$ could be a linear map, but most commonly it is a finite-dimensional, usually square, matrix.

Definition: The problem is said to be regular if there exists a $z \in \Omega$ such that $\det(M(z)) \neq 0$. Otherwise it is said to be singular.[1][4] For example, the problem with $M(z) = \operatorname{diag}(z, 0)$ is singular, since $\det(M(z)) \equiv 0$.

Definition: An eigenvalue $\lambda_0 \in \Omega$ is said to have algebraic multiplicity $k$ if $k$ is the smallest integer such that the $k$th derivative of $\det(M(z))$ with respect to $z$, evaluated at $z = \lambda_0$, is nonzero. In formulas, $\left.\frac{d^k \det(M(z))}{dz^k}\right|_{z=\lambda_0} \neq 0$ but $\left.\frac{d^\ell \det(M(z))}{dz^\ell}\right|_{z=\lambda_0} = 0$ for $\ell = 0, 1, \dots, k-1$.[1][4]

Definition: The geometric multiplicity of an eigenvalue $\lambda_0$ is the dimension of the nullspace of $M(\lambda_0)$.[1][4]
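The two multiplicities can differ. As a worked illustration (not from the article; the matrix is an arbitrary example), take $M(\lambda) = \operatorname{diag}(\lambda^2, \lambda)$ at the eigenvalue $\lambda_0 = 0$: here $\det M(\lambda) = \lambda^3$, so the algebraic multiplicity is 3, while $M(0) = 0$ has a two-dimensional nullspace, so the geometric multiplicity is 2. The following SymPy sketch computes both.

```python
import sympy as sp

lam = sp.symbols('lambda')
M = sp.Matrix([[lam**2, 0], [0, lam]])   # arbitrary example; eigenvalue 0
d = M.det()                              # det M = lambda**3

# Algebraic multiplicity: smallest k with a nonzero k-th derivative at 0.
k = 0
while sp.diff(d, lam, k).subs(lam, 0) == 0:
    k += 1
print("algebraic multiplicity:", k)      # prints 3

# Geometric multiplicity: dimension of the nullspace of M(0).
M0 = M.subs(lam, 0)
print("geometric multiplicity:", M0.shape[1] - M0.rank())   # prints 2
```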

Special cases

The following examples are special cases of the nonlinear eigenproblem.

- The (ordinary) eigenvalue problem: $M(\lambda) = A - \lambda I$.
- The generalized eigenvalue problem: $M(\lambda) = A - \lambda B$.
- The quadratic eigenvalue problem: $M(\lambda) = A_0 + \lambda A_1 + \lambda^2 A_2$.
- The polynomial eigenvalue problem: $M(\lambda) = \sum_{i=0}^{m} \lambda^i A_i$.
- The rational eigenvalue problem: $M(\lambda) = \sum_{i=0}^{m_1} \lambda^i A_i + \sum_{i=1}^{m_2} r_i(\lambda) B_i$, where the $r_i$ are rational functions.
- The delay eigenvalue problem: $M(\lambda) = -\lambda I + A_0 + \sum_{i=1}^{m} A_i e^{-\tau_i \lambda}$, where $\tau_1, \dots, \tau_m$ are real numbers, called delays.

Polynomial cases such as the quadratic problem can be reduced to an ordinary (generalized) eigenproblem of larger size by linearization, as in the sketch below.
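As a minimal sketch of this reduction (illustrative only; the coefficient matrices are arbitrary assumptions), the quadratic problem $(A_0 + \lambda A_1 + \lambda^2 A_2)x = 0$ becomes the companion pencil $Az = \lambda Bz$ with $z = (x, \lambda x)$:

```python
import numpy as np
from scipy.linalg import eig

# Quadratic eigenproblem (A0 + lam*A1 + lam^2*A2) x = 0, with arbitrary
# example coefficients, reduced to the pencil A z = lam * B z, z = [x; lam*x].
n = 2
A0 = np.array([[4.0, 0.0], [0.0, 1.0]])
A1 = 0.1 * np.eye(n)
A2 = np.eye(n)

A = np.block([[np.zeros((n, n)), np.eye(n)], [-A0, -A1]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), A2]])

lams, Z = eig(A, B)      # 2n generalized eigenvalues of the companion pencil
X = Z[:n, :]             # the leading block of z recovers x
for lam, x in zip(lams, X.T):
    res = (A0 + lam * A1 + lam**2 * A2) @ x
    print(lam, np.linalg.norm(res))   # residuals ~ machine precision
```

The first block row of the pencil enforces $z_2 = \lambda z_1$, and the second then reproduces the quadratic equation, so each finite eigenpair of the pencil yields an eigenpair of the original problem.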

Jordan chains

Definition: Let $(\lambda_0, x_0)$ be an eigenpair. A tuple of vectors $(x_0, x_1, \dots, x_{r-1}) \in \mathbb{C}^n \times \cdots \times \mathbb{C}^n$ is called a Jordan chain if

$$\sum_{k=0}^{\ell} \frac{1}{k!} M^{(k)}(\lambda_0)\, x_{\ell-k} = 0$$

for $\ell = 0, \dots, r-1$, where $M^{(k)}(\lambda_0)$ denotes the $k$th derivative of $M$ with respect to $\lambda$, evaluated at $\lambda = \lambda_0$. The vectors $x_0, x_1, \dots, x_{r-1}$ are called generalized eigenvectors, $r$ is called the length of the Jordan chain, and the maximal length of a Jordan chain starting with $x_0$ is called the rank of $x_0$.[1][4]


Theorem:[1] A tuple of vectors $(x_0, x_1, \dots, x_{r-1})$ is a Jordan chain if and only if the function $M(\lambda)\, \chi_\ell(\lambda)$ has a root in $\lambda = \lambda_0$ of multiplicity at least $\ell + 1$ for $\ell = 0, \dots, r-1$, where the vector-valued function $\chi_\ell(\lambda)$ is defined as

$$\chi_\ell(\lambda) = \sum_{k=0}^{\ell} (\lambda - \lambda_0)^k x_k.$$
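The chain conditions are easy to check symbolically. A minimal SymPy sketch (not from the article; the matrix and the candidate chain are arbitrary examples) verifies that $(x_0, x_1) = (e_1, e_2)$ is a Jordan chain of length 2 for $M(\lambda) = \operatorname{diag}(\lambda^2, \lambda)$ at $\lambda_0 = 0$:

```python
import sympy as sp

lam = sp.symbols('lambda')
M = sp.Matrix([[lam**2, 0], [0, lam]])           # arbitrary example; eigenvalue 0
chain = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]   # candidate chain (x0, x1)

# Chain conditions: sum_{k=0}^{l} M^(k)(0) x_{l-k} / k! = 0 for l = 0, 1.
for l in range(len(chain)):
    s = sp.zeros(2, 1)
    for k in range(l + 1):
        s += sp.diff(M, lam, k).subs(lam, 0) * chain[l - k] / sp.factorial(k)
    print(l, s.T)   # both sums vanish, so (x0, x1) is a Jordan chain
```

In this example no chain of length 3 starts with $e_1$ (the $\ell = 2$ sum contains the nonzero term $\tfrac{1}{2} M''(0) e_1 = e_1$), so the rank of $e_1$ is 2.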

Mathematical software

- The eigenvalue solver package SLEPc contains implementations of many numerical methods for nonlinear eigenvalue problems.[5]
- The NLEVP collection of nonlinear eigenvalue problems is a MATLAB package containing many nonlinear eigenvalue problems with various properties.[6]
- The FEAST eigenvalue solver is a software package for standard eigenvalue problems as well as nonlinear eigenvalue problems.[7]
- The MATLAB toolbox NLEIGS contains an implementation of fully rational Krylov with a dynamically constructed rational interpolant.[8]
- The MATLAB toolbox CORK contains an implementation of the compact rational Krylov algorithm that exploits the Kronecker structure of the linearization pencils.[9]
- An automatic rational approximation and linearization approach based on set-valued AAA is available in MATLAB.[10]
- The MATLAB toolbox RKToolbox (Rational Krylov Toolbox) contains implementations of the rational Krylov method for nonlinear eigenvalue problems as well as features for rational approximation.[11]
- The Julia package NEP-PACK contains many implementations of various numerical methods for nonlinear eigenvalue problems, as well as many benchmark problems.[12]
Eigenvector nonlinearity

Eigenvector nonlinearities are a related, but different, form of nonlinearity that is sometimes studied. In this case the function $M$ maps vectors to matrices, $M : \mathbb{C}^n \to \mathbb{C}^{n \times n}$, or sometimes Hermitian matrices to Hermitian matrices.[13][14]
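Problems of the form $A(x)x = \lambda x$ are often attacked with a self-consistent-field (SCF) style fixed-point scheme, as studied in [14]: linearize at the current eigenvector, solve an ordinary eigenproblem, and repeat. The Python sketch below is purely illustrative; the Gross-Pitaevskii-like nonlinearity $A(x) = A_0 + \alpha\,\operatorname{diag}(|x|^2)$ and all data are arbitrary assumptions, not the specific algorithms of the cited papers.

```python
import numpy as np

# Eigenvector nonlinearity: A(x) x = lam * x with an illustrative
# nonlinearity A(x) = A0 + alpha * diag(|x|^2) (arbitrary example data).
rng = np.random.default_rng(0)
A0 = rng.standard_normal((4, 4))
A0 = (A0 + A0.T) / 2          # symmetric, so eigh applies
alpha = 0.1

def A(x):
    return A0 + alpha * np.diag(np.abs(x) ** 2)

# SCF-style fixed-point iteration: solve the eigenproblem linearized
# at the current iterate, keep the smallest eigenpair, and repeat.
x = np.ones(4) / 2.0          # unit-norm starting guess
for _ in range(200):
    w, V = np.linalg.eigh(A(x))
    x_new = V[:, 0]           # eigenvector of the smallest eigenvalue
    if x_new @ x < 0:         # eigenvectors are defined only up to sign
        x_new = -x_new
    converged = np.linalg.norm(x_new - x) < 1e-12
    x = x_new
    if converged:
        break

lam = w[0]
print(lam, np.linalg.norm(A(x) @ x - lam * x))  # residual ~ 0 at a fixed point
```

For stronger nonlinearities the plain iteration can fail to converge, and damped or Newton-type variants are used instead.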

References

  1. Güttel, Stefan; Tisseur, Françoise (2017). "The nonlinear eigenvalue problem" (PDF). Acta Numerica. 26: 1–94. doi:10.1017/S0962492917000034. ISSN 0962-4929. S2CID 46749298.
  2. Ruhe, Axel (1973). "Algorithms for the Nonlinear Eigenvalue Problem". SIAM Journal on Numerical Analysis. 10 (4): 674–689. Bibcode:1973SJNA...10..674R. doi:10.1137/0710059. ISSN 0036-1429. JSTOR 2156278.
  3. Mehrmann, Volker; Voss, Heinrich (2004). "Nonlinear eigenvalue problems: a challenge for modern eigenvalue methods". GAMM-Mitteilungen. 27 (2): 121–152. doi:10.1002/gamm.201490007. ISSN 1522-2608. S2CID 14493456.
  4. Voss, Heinrich (2014). "Nonlinear eigenvalue problems" (PDF). In Hogben, Leslie (ed.). Handbook of Linear Algebra (2 ed.). Boca Raton, FL: Chapman and Hall/CRC. ISBN 9781466507289.
  5. Hernandez, Vicente; Roman, Jose E.; Vidal, Vicente (September 2005). "SLEPc: A scalable and flexible toolkit for the solution of eigenvalue problems". ACM Transactions on Mathematical Software. 31 (3): 351–362. doi:10.1145/1089014.1089019. S2CID 14305707.
  6. Betcke, Timo; Higham, Nicholas J.; Mehrmann, Volker; Schröder, Christian; Tisseur, Françoise (February 2013). "NLEVP: A Collection of Nonlinear Eigenvalue Problems". ACM Transactions on Mathematical Software. 39 (2): 1–28. doi:10.1145/2427023.2427024. S2CID 4271705.
  7. Polizzi, Eric (2020). "FEAST Eigenvalue Solver v4.0 User Guide". arXiv:2002.04807 [cs.MS].
  8. Güttel, Stefan; Van Beeumen, Roel; Meerbergen, Karl; Michiels, Wim (1 January 2014). "NLEIGS: A Class of Fully Rational Krylov Methods for Nonlinear Eigenvalue Problems". SIAM Journal on Scientific Computing. 36 (6): A2842–A2864. Bibcode:2014SJSC...36A2842G. doi:10.1137/130935045.
  9. Van Beeumen, Roel; Meerbergen, Karl; Michiels, Wim (2015). "Compact rational Krylov methods for nonlinear eigenvalue problems". SIAM Journal on Matrix Analysis and Applications. 36 (2): 820–838. doi:10.1137/140976698. S2CID 18893623.
  10. Lietaert, Pieter; Meerbergen, Karl; Pérez, Javier; Vandereycken, Bart (13 April 2022). "Automatic rational approximation and linearization of nonlinear eigenvalue problems". IMA Journal of Numerical Analysis. 42 (2): 1087–1115. arXiv:1801.08622. doi:10.1093/imanum/draa098.
  11. Berljafa, Mario; Elsworth, Steven; Güttel, Stefan (15 July 2020). "An overview of the example collection". index.m. Retrieved 31 May 2022.
  12. Jarlebring, Elias; Bennedich, Max; Mele, Giampaolo; Ringh, Emil; Upadhyaya, Parikshit (23 November 2018). "NEP-PACK: A Julia package for nonlinear eigenproblems". arXiv:1811.09592 [math.NA].
  13. Jarlebring, Elias; Kvaal, Simen; Michiels, Wim (1 January 2014). "An Inverse Iteration Method for Eigenvalue Problems with Eigenvector Nonlinearities". SIAM Journal on Scientific Computing. 36 (4): A1978–A2001. arXiv:1212.0417. Bibcode:2014SJSC...36A1978J. doi:10.1137/130910014. ISSN 1064-8275. S2CID 16959079.
  14. Upadhyaya, Parikshit; Jarlebring, Elias; Rubensson, Emanuel H. (2021). "A density matrix approach to the convergence of the self-consistent field iteration". Numerical Algebra, Control & Optimization. 11 (1): 99. arXiv:1809.02183. doi:10.3934/naco.2020018. ISSN 2155-3297.
