Invariants of tensors

In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of a second-rank tensor $\mathbf{A}$ are the coefficients of its characteristic polynomial [1]

$$p(\lambda) = \det(\mathbf{A} - \lambda \mathbf{I}),$$

where $\mathbf{I}$ is the identity operator and the roots $\lambda_i \in \mathbb{C}$ of the polynomial are the eigenvalues of $\mathbf{A}$.

More broadly, any scalar-valued function $f(\mathbf{A})$ is an invariant of $\mathbf{A}$ if and only if $f(\mathbf{Q}\mathbf{A}\mathbf{Q}^{\mathsf{T}}) = f(\mathbf{A})$ for all orthogonal $\mathbf{Q}$. This means that a formula expressing an invariant in terms of components, $A_{ij}$, will give the same result for all Cartesian bases. For example, even though individual diagonal components of $\mathbf{A}$ will change with a change in basis, the sum of the diagonal components (the trace) will not.
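This invariance can be checked numerically. The following is a minimal sketch (using NumPy, with an arbitrary random tensor and rotation, not values from the source): the trace is unchanged by an orthogonal change of basis even though individual components change.

```python
import numpy as np

# Arbitrary example tensor (hypothetical values, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Build a random orthogonal matrix Q via a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Components of the same tensor in a rotated Cartesian basis.
A_rot = Q @ A @ Q.T

# The trace (sum of diagonal components) is invariant.
assert np.isclose(np.trace(A), np.trace(A_rot))
```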

Properties

The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference) and any function of the principal invariants is also objective.

Calculation of the invariants of rank two tensors

In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy–Green deformation tensor.

Principal invariants

For such tensors, the principal invariants are given by:

$$I_1 = \operatorname{tr}(\mathbf{A}),$$
$$I_2 = \tfrac{1}{2}\left[(\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}(\mathbf{A}^2)\right],$$
$$I_3 = \det(\mathbf{A}).$$

For symmetric tensors, these definitions reduce to expressions in the (real) eigenvalues $\lambda_i$: $I_1 = \lambda_1 + \lambda_2 + \lambda_3$, $I_2 = \lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_1\lambda_3$, and $I_3 = \lambda_1\lambda_2\lambda_3$. [2]
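As a sketch in NumPy (the matrix values are an arbitrary example, not from the source), the three principal invariants $I_1 = \operatorname{tr}\mathbf{A}$, $I_2 = \tfrac{1}{2}[(\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2]$, and $I_3 = \det\mathbf{A}$ can be computed directly and checked against the symmetric functions of the eigenvalues:

```python
import numpy as np

# Arbitrary symmetric example tensor.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# The invariants are elementary symmetric functions of the eigenvalues.
lam = np.linalg.eigvals(A)
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, lam[0]*lam[1] + lam[1]*lam[2] + lam[0]*lam[2])
assert np.isclose(I3, lam.prod())
```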

The correspondence between the principal invariants and the characteristic polynomial of a tensor, in tandem with the Cayley–Hamilton theorem, reveals that

$$\mathbf{A}^3 - I_1\mathbf{A}^2 + I_2\mathbf{A} - I_3\mathbf{I} = \mathbf{0},$$

where $\mathbf{I}$ is the second-order identity tensor.
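The Cayley–Hamilton identity $\mathbf{A}^3 - I_1\mathbf{A}^2 + I_2\mathbf{A} - I_3\mathbf{I} = \mathbf{0}$ is easy to verify numerically; a minimal sketch (with an arbitrary, hypothetical example tensor):

```python
import numpy as np

# Arbitrary non-symmetric example tensor.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 1.0]])
I = np.eye(3)

I1 = np.trace(A)
I2 = 0.5 * (I1**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# Cayley-Hamilton: the tensor satisfies its own characteristic polynomial.
residual = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * I
assert np.allclose(residual, 0.0)
```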

Main invariants

In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants [3] [4]

$$J_1 = \lambda_1 + \lambda_2 + \lambda_3 = I_1,$$
$$J_2 = \lambda_1^2 + \lambda_2^2 + \lambda_3^2 = I_1^2 - 2I_2,$$
$$J_3 = \lambda_1^3 + \lambda_2^3 + \lambda_3^3 = I_1^3 - 3I_1 I_2 + 3I_3,$$

which are functions of the principal invariants above. Closely related are the coefficients of the characteristic polynomial of the deviator $\mathbf{A} - \tfrac{1}{3}\operatorname{tr}(\mathbf{A})\,\mathbf{I}$, which is constructed so that it is traceless. The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called deviatoric, providing shear effects.
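A short numerical sketch (arbitrary symmetric example tensor, not from the source) checking that the main invariants $J_k$, computed from the principal invariants via Newton's identities, equal the power sums of the eigenvalues, and that the deviator is traceless:

```python
import numpy as np

# Arbitrary symmetric example tensor.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])

I1 = np.trace(A)
I2 = 0.5 * (I1**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# Main invariants expressed through the principal invariants.
J1 = I1
J2 = I1**2 - 2.0 * I2
J3 = I1**3 - 3.0 * I1 * I2 + 3.0 * I3

# They coincide with the power sums of the (real) eigenvalues.
lam = np.linalg.eigvalsh(A)
assert np.isclose(J1, np.sum(lam))
assert np.isclose(J2, np.sum(lam**2))
assert np.isclose(J3, np.sum(lam**3))

# The deviator (traceless part) of A.
dev = A - (I1 / 3.0) * np.eye(3)
assert np.isclose(np.trace(dev), 0.0)
```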

Mixed invariants

Furthermore, mixed invariants between pairs of rank-two tensors $\mathbf{A}$ and $\mathbf{B}$ may also be defined, such as $\operatorname{tr}(\mathbf{A}\mathbf{B})$, $\operatorname{tr}(\mathbf{A}^2\mathbf{B})$, and $\operatorname{tr}(\mathbf{A}\mathbf{B}^2)$. [4]
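As a sketch (arbitrary random tensors; the particular trace combinations are common examples, not an exhaustive list from the source), each mixed invariant is unchanged when both tensors are rotated by the same orthogonal $\mathbf{Q}$:

```python
import numpy as np

def mixed_invariants(A, B):
    # A few commonly used mixed invariants of the pair (A, B).
    return np.array([np.trace(A @ B),
                     np.trace(A @ A @ B),
                     np.trace(A @ B @ B),
                     np.trace(A @ A @ B @ B)])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix

# Rotating both tensors together leaves every mixed invariant unchanged.
assert np.allclose(mixed_invariants(A, B),
                   mixed_invariants(Q @ A @ Q.T, Q @ B @ Q.T))
```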

Calculation of the invariants of rank two tensors of higher dimension

These may be extracted by evaluating the characteristic polynomial directly, for example with the Faddeev–LeVerrier algorithm.
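A minimal sketch of the Faddeev–LeVerrier recursion (the example matrix is arbitrary): it produces the coefficients $c_k$ of $\det(\lambda\mathbf{I} - \mathbf{A}) = \lambda^n + c_1\lambda^{n-1} + \dots + c_n$, which agree with the principal invariants up to sign ($c_1 = -I_1$, $c_2 = I_2$, $c_3 = -I_3$ in three dimensions).

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients c_1..c_n of the characteristic polynomial of A."""
    n = A.shape[0]
    I = np.eye(n)
    M = I.copy()          # M_1 = I
    c = []
    for k in range(1, n + 1):
        AM = A @ M
        ck = -np.trace(AM) / k
        c.append(ck)
        M = AM + ck * I   # M_{k+1} = A M_k + c_k I
    return c

# Arbitrary example tensor.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
c1, c2, c3 = faddeev_leverrier(A)

# Cross-check against the principal invariants.
assert np.isclose(-c1, np.trace(A))
assert np.isclose(c2, 0.5 * (np.trace(A)**2 - np.trace(A @ A)))
assert np.isclose(-c3, np.linalg.det(A))
```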

Calculation of the invariants of higher order tensors

The invariants of rank three, four, and higher order tensors may also be determined. [5]

Engineering applications

A scalar function that depends entirely on the principal invariants of a tensor is objective, i.e., independent of rotations of the coordinate system. This property is commonly used in formulating closed-form expressions for the strain energy density, or Helmholtz free energy, of a nonlinear material possessing isotropic symmetry. [6]

This technique was first introduced into isotropic turbulence by Howard P. Robertson in 1940, who was able to derive the Kármán–Howarth equation from the invariant principle. [7] George Batchelor and Subrahmanyan Chandrasekhar exploited this technique and developed an extended treatment for axisymmetric turbulence. [8] [9] [10]

Invariants of non-symmetric tensors

A real tensor $\mathbf{T}$ in 3D (i.e., one with a 3×3 component matrix) has as many as six independent invariants: three are the invariants of its symmetric part, and three characterize the orientation of the axial vector of the skew-symmetric part relative to the principal directions of the symmetric part. For example, if the Cartesian components of $\mathbf{T}$ are

$$\begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix},$$

the first step would be to evaluate the axial vector $\mathbf{w}$ associated with the skew-symmetric part. Specifically, the axial vector has components

$$w_1 = \tfrac{1}{2}(T_{32} - T_{23}), \quad w_2 = \tfrac{1}{2}(T_{13} - T_{31}), \quad w_3 = \tfrac{1}{2}(T_{21} - T_{12}).$$

The next step finds the principal values $\lambda_1 \geq \lambda_2 \geq \lambda_3$ of the symmetric part of $\mathbf{T}$. Even though the eigenvalues of a real non-symmetric tensor might be complex, the eigenvalues of its symmetric part will always be real and therefore can be ordered from largest to smallest. The corresponding orthonormal principal basis directions can be assigned senses to ensure that the axial vector $\mathbf{w}$ points within the first octant. With respect to that special basis, the components of $\mathbf{T}$ are

$$\begin{bmatrix} \lambda_1 & -w_3 & w_2 \\ w_3 & \lambda_2 & -w_1 \\ -w_2 & w_1 & \lambda_3 \end{bmatrix}.$$

The first three invariants of $\mathbf{T}$ are the diagonal components of this matrix, $\lambda_1, \lambda_2, \lambda_3$ (equal to the ordered principal values of the tensor's symmetric part). The remaining three invariants are the axial vector's components in this basis, $w_1, w_2, w_3$. Note: the magnitude of the axial vector, $\sqrt{w_1^2 + w_2^2 + w_3^2}$, is the sole invariant of the skew part of $\mathbf{T}$, whereas these three distinct invariants characterize (in a sense) the "alignment" between the symmetric and skew parts of $\mathbf{T}$. Incidentally, it is a myth that a tensor is positive definite if its eigenvalues are positive. Instead, it is positive definite if and only if the eigenvalues of its symmetric part are positive.
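The steps above can be sketched in NumPy (the example tensor is arbitrary, and the sign/sense selection of the principal directions is omitted for brevity, so only sense-independent checks are asserted):

```python
import numpy as np

# Arbitrary non-symmetric example tensor.
T = np.array([[3.0, 1.0, 0.5],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0]])

S = 0.5 * (T + T.T)                        # symmetric part
W = 0.5 * (T - T.T)                        # skew-symmetric part
w = np.array([W[2, 1], W[0, 2], W[1, 0]])  # axial vector of the skew part

# Real eigenvalues of the symmetric part, ordered largest to smallest,
# with the columns of P holding the corresponding principal directions.
lam, P = np.linalg.eigh(S)
lam, P = lam[::-1], P[:, ::-1]

# Axial vector components in the principal basis (three more invariants).
w_principal = P.T @ w

# |w| is itself invariant and is recovered from the three components.
assert np.isclose(np.linalg.norm(w_principal), np.linalg.norm(w))

# In the principal basis, the diagonal of T holds the principal values.
assert np.allclose(np.diag(P.T @ T @ P), lam)
```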

References

  1. Spencer, A. J. M. (1980). Continuum Mechanics. Longman. ISBN 0-582-44282-6.
  2. Kelly, PA. "Lecture Notes: An introduction to Solid Mechanics" (PDF). Retrieved 27 May 2018.
  3. Kindlmann, G. "Tensor Invariants and their Gradients" (PDF). Retrieved 24 Jan 2019.
  4. Schröder, Jörg; Neff, Patrizio (2010). Poly-, Quasi- and Rank-One Convexity in Applied Mechanics. Springer.
  5. Betten, J. (1987). "Irreducible Invariants of Fourth-Order Tensors". Mathematical Modelling. 8: 29–33. doi:10.1016/0270-0255(87)90535-5.
  6. Ogden, R. W. (1984). Non-Linear Elastic Deformations. Dover.
  7. Robertson, H. P. (1940). "The Invariant Theory of Isotropic Turbulence". Mathematical Proceedings of the Cambridge Philosophical Society. Cambridge University Press. 36 (2): 209–223. Bibcode:1940PCPS...36..209R. doi:10.1017/S0305004100017199.
  8. Batchelor, G. K. (1946). "The Theory of Axisymmetric Turbulence". Proc. R. Soc. Lond. A. 186 (1007): 480–502. Bibcode:1946RSPSA.186..480B. doi:10.1098/rspa.1946.0060.
  9. Chandrasekhar, S. (1950). "The Theory of Axisymmetric Turbulence". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 242 (855): 557–577. Bibcode:1950RSPTA.242..557C. doi:10.1098/rsta.1950.0010. S2CID 123358727.
  10. Chandrasekhar, S. (1950). "The Decay of Axisymmetric Turbulence". Proc. R. Soc. A. 203 (1074): 358–364. Bibcode:1950RSPSA.203..358C. doi:10.1098/rspa.1950.0143. S2CID 121178989.