Proper orthogonal decomposition

The proper orthogonal decomposition (POD) is a numerical method that reduces the complexity of computationally intensive simulations such as computational fluid dynamics and structural analysis (for example, crash simulations). In fluid dynamics and turbulence analysis, it is typically used to replace the Navier–Stokes equations with simpler models that are easier to solve. [1]

It belongs to a class of algorithms called model order reduction (or, in short, model reduction). In essence, it trains a model from simulation data; in this respect, it can be associated with the field of machine learning.

POD and PCA

The main use of POD is to decompose a physical field (such as pressure or temperature in fluid dynamics, or stress and deformation in structural analysis) according to the different variables that influence its physical behavior. As its name suggests, it performs an orthogonal decomposition along the principal components of the field. It is therefore closely related to Pearson's principal component analysis in statistics and to the singular value decomposition in linear algebra, since it works with the eigenvalues and eigenvectors of a physical field. In those domains, it is associated with the research of Karhunen [2] and Loève, [3] and their Karhunen–Loève theorem.
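To make the link with PCA and the SVD concrete, here is a minimal NumPy sketch (an illustration added for this article; the snapshot matrix and its sizes are assumptions, and the snapshot construction itself is defined in the next section): the dominant POD modes computed from the covariance matrix coincide, up to sign, with the right singular vectors of the snapshot matrix.

```python
import numpy as np

# A toy snapshot matrix: p time samples (rows) by n spatial points (columns).
# Sizes and data are illustrative assumptions, not values from the article.
rng = np.random.default_rng(0)
p, n = 50, 200
U = rng.standard_normal((p, 3)) @ rng.standard_normal((3, n))  # rank-3 field

# POD route: eigendecomposition of the n x n covariance matrix.
C = U.T @ U / (p - 1)
lam, phi = np.linalg.eigh(C)              # eigenvalues in ascending order
lam, phi = lam[::-1], phi[:, ::-1]        # reorder: largest first

# PCA/SVD route: right singular vectors of the snapshot matrix.
_, s, Vt = np.linalg.svd(U, full_matrices=False)

# The dominant POD modes match the singular vectors up to sign, and the
# eigenvalues are the squared singular values scaled by 1/(p - 1).
print(np.allclose(lam[:3], s[:3] ** 2 / (p - 1)))        # True
print(np.allclose(np.abs(phi[:, 0]), np.abs(Vt[0])))     # True
```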

Mathematical expression

The first idea behind the proper orthogonal decomposition (POD), as it was originally formulated in the domain of fluid dynamics to analyze turbulence, is to decompose a random vector field u(x, t) into a set of deterministic spatial functions Φ_k(x) modulated by random time coefficients a_k(t), so that:

    u(x, t) = \sum_{k=1}^{\infty} a_k(t) \Phi_k(x)

[Figure: POD snapshots]

The first step is to sample the vector field over a period of time into what are called snapshots (as displayed in the POD snapshots figure). This snapshot method [4] averages the samples over the space dimension n and correlates them with each other along the time samples p:

    U = \begin{bmatrix} u(x_1, t_1) & \cdots & u(x_n, t_1) \\ \vdots & \ddots & \vdots \\ u(x_1, t_p) & \cdots & u(x_n, t_p) \end{bmatrix}

with n spatial elements and p time samples.
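As a concrete illustration, the following sketch assembles such a snapshot matrix for a hypothetical one-dimensional field (the field and the values of n and p are assumptions for the example):

```python
import numpy as np

# Assemble the p x n snapshot matrix U: each row holds the sampled field
# at one time instant, over all n spatial points.
n, p = 100, 40                        # n spatial elements, p time samples
x = np.linspace(0.0, 2.0 * np.pi, n)
t = np.linspace(0.0, 1.0, p)

# Hypothetical field: a traveling wave plus a standing mode.
U = np.array([np.sin(x - 5.0 * tk) + 0.5 * np.cos(2.0 * x) * np.cos(3.0 * tk)
              for tk in t])
print(U.shape)                        # (40, 100): p rows, n columns
```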

The next step is to compute the covariance matrix C:

    C = \frac{1}{p-1} U^\mathsf{T} U

We then compute the eigenvalues and eigenvectors of C and we order them from the largest eigenvalue to the smallest.

We obtain n eigenvalues λ1, ..., λn and a set of n eigenvectors arranged as columns in an n × n matrix Φ:

    \Phi = \begin{bmatrix} \phi_{1,1} & \cdots & \phi_{1,n} \\ \vdots & \ddots & \vdots \\ \phi_{n,1} & \cdots & \phi_{n,n} \end{bmatrix}
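Putting these steps together, the following end-to-end sketch (the toy field, sizes, and truncation rank are assumptions for illustration) samples a field into snapshots, builds the covariance matrix as above, extracts and orders the eigenpairs, and reconstructs the field from the dominant modes:

```python
import numpy as np

# End-to-end snapshot POD on a toy field (illustrative sizes and data).
n, p = 100, 40
x = np.linspace(0.0, 2.0 * np.pi, n)
t = np.linspace(0.0, 1.0, p)
U = np.array([np.sin(x - 5.0 * tk) + 0.5 * np.cos(2.0 * x) * np.cos(3.0 * tk)
              for tk in t])                     # (p, n) snapshot matrix

C = U.T @ U / (p - 1)                           # n x n covariance matrix
lam, Phi = np.linalg.eigh(C)                    # eigenvalues in ascending order
order = np.argsort(lam)[::-1]                   # reorder: largest first
lam, Phi = lam[order], Phi[:, order]

k = 3                                           # keep the k dominant modes
a = U @ Phi[:, :k]                              # time coefficients a_k(t)
U_k = a @ Phi[:, :k].T                          # rank-k reconstruction

# This toy field has rank 3, so 3 modes reconstruct it almost exactly.
print(np.linalg.norm(U - U_k) / np.linalg.norm(U))
```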

Related Research Articles

Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread.

In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.

In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any base. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.
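As a worked illustration (added here; a, b, c, d denote the entries of a generic 2 × 2 matrix A), the trace and determinant appear directly as coefficients:

    p_A(\lambda) = \det(A - \lambda I)
                 = \det\begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix}
                 = \lambda^2 - (a + d)\lambda + (ad - bc)
                 = \lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A)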

In physics, an operator is a function from one space of physical states to another space of physical states. The simplest example of the utility of operators is the study of symmetry. Because of this, they are useful tools in classical mechanics. Operators are even more important in quantum mechanics, where they form an intrinsic part of the formulation of the theory.

In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (the diagonal just below it), and the superdiagonal (the diagonal just above it). For example, the following matrix is tridiagonal:

    \begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}

In quantum mechanics, a quantum operation is a mathematical formalism used to describe a broad class of transformations that a quantum mechanical system can undergo. This was first discussed as a general stochastic transformation for a density matrix by George Sudarshan. The quantum operation formalism describes not only unitary time evolution or symmetry transformations of isolated systems, but also the effects of measurement and transient interactions with an environment. In the context of quantum computation, a quantum operation is called a quantum channel.

In the theory of stochastic processes, the Karhunen–Loève theorem, also known as the Kosambi–Karhunen–Loève theorem, states that a stochastic process can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling transform and the eigenvector transform, and is closely related to the principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields.

In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed, is equivalent to a single rotation about some axis that runs through the fixed point. It also means that the composition of two rotations is also a rotation. Therefore the set of rotations has a group structure, known as a rotation group.

In linear algebra, a generalized eigenvector of an n × n matrix is a vector which satisfies certain criteria that are more relaxed than those for an (ordinary) eigenvector.

In mathematics, the polar decomposition of a square real or complex matrix A is a factorization of the form A = UP, where U is a unitary matrix and P is a positive semi-definite Hermitian matrix, both square and of the same size.
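As a quick numerical check (a sketch assuming SciPy is available; the matrix A is an arbitrary example):

```python
import numpy as np
from scipy.linalg import polar  # assumes SciPy is installed

# Right polar decomposition A = U @ P: U orthogonal (unitary in general),
# P symmetric positive semi-definite.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
U, P = polar(A)
print(np.allclose(A, U @ P))                     # True
print(np.allclose(U.T @ U, np.eye(2)))           # True: U is orthogonal
print(np.all(np.linalg.eigvalsh(P) >= -1e-12))   # True: P is PSD
```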

In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude and that this eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components. The theorem also asserts a similar statement for certain classes of nonnegative matrices. It has important applications to probability theory; to the theory of dynamical systems; to economics; to demography; to social networks; to Internet search engines (PageRank); and even to the ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.
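A small numerical illustration of the positive-entries case (the matrix is an arbitrary assumption for the example):

```python
import numpy as np

# For a strictly positive matrix, the dominant eigenvalue is real and its
# eigenvector can be chosen with strictly positive components.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(5, 5))    # strictly positive entries
w, v = np.linalg.eig(A)
i = np.argmax(np.abs(w))
print(np.isreal(w[i]))                    # True: dominant eigenvalue is real
perron = np.real(v[:, i])
perron = np.sign(perron[0]) * perron      # fix the overall sign
print(np.all(perron > 0))                 # True: strictly positive components
```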

In linear algebra, it is often important to know which vectors have their directions unchanged by a linear transformation. An eigenvector or characteristic vector is such a vector. Thus an eigenvector v of a linear transformation T is scaled by a constant factor λ when the linear transformation is applied to it: T v = λ v. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ.

In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product BB is equal to A.
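For example, SciPy's scipy.linalg.sqrtm computes a principal square root (a sketch assuming SciPy is available; the matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import sqrtm  # assumes SciPy is installed

# Principal square root of an example matrix: B @ B recovers A.
A = np.array([[4.0, 0.0],
              [1.0, 9.0]])
B = sqrtm(A)
print(np.allclose(B @ B, A))   # True
```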

In mathematics, in the area of wavelet analysis, a refinable function is a function which fulfils some kind of self-similarity. A function φ is called refinable with respect to the mask h if

    \varphi(x) = 2 \sum_{k=0}^{N-1} h_k \varphi(2x - k)

In mathematics a positive map is a map between C*-algebras that sends positive elements to positive elements. A completely positive map is one which satisfies a stronger, more robust condition.

In the mathematical discipline of functional analysis, the concept of a compact operator on Hilbert space is an extension of the concept of a matrix acting on a finite-dimensional vector space; in Hilbert space, compact operators are precisely the closure of finite-rank operators in the topology induced by the operator norm. As such, results from matrix theory can sometimes be extended to compact operators using similar arguments. By contrast, the study of general operators on infinite-dimensional spaces often requires a genuinely different approach.

In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
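A minimal NumPy illustration (the matrix is an arbitrary example): for a real symmetric matrix the factorization is a spectral decomposition with an orthogonal Q:

```python
import numpy as np

# Eigendecomposition A = Q diag(w) Q^{-1}; for this real symmetric example
# Q is orthogonal, so Q^{-1} = Q.T (spectral decomposition).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, Q = np.linalg.eigh(A)
print(np.allclose(A, Q @ np.diag(w) @ Q.T))   # True
```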

Dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter J. Schmid and Joern Sesterhenn in 2008. Given a time series of data, DMD computes a set of modes each of which is associated with a fixed oscillation frequency and decay/growth rate. For linear systems in particular, these modes and frequencies are analogous to the normal modes of the system, but more generally, they are approximations of the modes and eigenvalues of the composition operator. Due to the intrinsic temporal behaviors associated with each mode, DMD differs from dimensionality reduction methods such as principal component analysis, which computes orthogonal modes that lack predetermined temporal behaviors. Because its modes are not orthogonal, DMD-based representations can be less parsimonious than those generated by PCA. However, they can also be more physically meaningful because each mode is associated with a damped sinusoidal behavior in time.
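For a tangible contrast with PCA/POD, here is a minimal sketch of exact DMD (an illustration under assumed toy linear dynamics, not Schmid and Sesterhenn's original code): snapshot pairs from a known rotation-based map are used, so the recovered DMD eigenvalues can be checked against the true oscillation frequencies.

```python
import numpy as np

# Exact-DMD sketch: fit a linear operator X' ~ A X in a reduced SVD basis,
# then recover its eigenvalues and modes. Data comes from known dynamics.
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

n, m = 10, 40
A_true = np.eye(n)
A_true[0:2, 0:2] = rot(0.3)          # oscillation at 0.3 rad/step
A_true[2:4, 2:4] = rot(1.1)          # oscillation at 1.1 rad/step

rng = np.random.default_rng(2)
data = np.empty((n, m + 1))
data[:, 0] = rng.standard_normal(n)
for k in range(m):
    data[:, k + 1] = A_true @ data[:, k]

X, Xp = data[:, :-1], data[:, 1:]    # snapshot pairs X -> X'
r = 5                                # truncation rank of the SVD basis
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vt[:r].T

A_tilde = Ur.T @ Xp @ Vr @ Sr_inv    # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)  # DMD eigenvalues
modes = Xp @ Vr @ Sr_inv @ W         # exact DMD modes (one per eigenvalue)
print(np.sort(np.round(np.angle(eigvals), 3)))  # contains +-0.3 and +-1.1
```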

Functional principal component analysis (FPCA) is a statistical method for investigating the dominant modes of variation of functional data. Using this method, a random function is represented in the eigenbasis, which is an orthonormal basis of the Hilbert space L2 that consists of the eigenfunctions of the autocovariance operator. FPCA represents functional data in the most parsimonious way, in the sense that when using a fixed number of basis functions, the eigenfunction basis explains more variation than any other basis expansion. FPCA can be applied for representing random functions, or in functional regression and classification.

In statistics, modes of variation are a continuously indexed set of vectors or functions that are centered at a mean and are used to depict the variation in a population or sample. Typically, variation patterns in the data can be decomposed in descending order of eigenvalues with the directions represented by the corresponding eigenvectors or eigenfunctions. Modes of variation provide a visualization of this decomposition and an efficient description of variation around the mean. Both in principal component analysis (PCA) and in functional principal component analysis (FPCA), modes of variation play an important role in visualizing and describing the variation in the data contributed by each eigencomponent. In real-world applications, the eigencomponents and associated modes of variation aid to interpret complex data, especially in exploratory data analysis (EDA).

References

  1. Berkooz, G.; Holmes, P.; Lumley, J. L. (January 1993). "The Proper Orthogonal Decomposition in the Analysis of Turbulent Flows". Annual Review of Fluid Mechanics. 25 (1): 539–575. Bibcode:1993AnRFM..25..539B. doi:10.1146/annurev.fl.25.010193.002543. ISSN 0066-4189.
  2. Karhunen, Kari (1946). Zur Spektraltheorie stochastischer Prozesse.
  3. David, F. N.; Loève, M. (December 1955). "Probability Theory". Biometrika. 42 (3/4): 540. doi:10.2307/2333409. ISSN 0006-3444. JSTOR 2333409.
  4. Sirovich, Lawrence (1987-10-01). "Turbulence and the dynamics of coherent structures. I. Coherent structures". Quarterly of Applied Mathematics. 45 (3): 561–571. doi:10.1090/qam/910462. ISSN 0033-569X.