Dynamic mode decomposition

Dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter J. Schmid and Joern Sesterhenn in 2008.[1][2] Given a time series of data, DMD computes a set of modes, each of which is associated with a fixed oscillation frequency and decay/growth rate. For linear systems in particular, these modes and frequencies are analogous to the normal modes of the system; more generally, they are approximations of the modes and eigenvalues of the composition operator (also called the Koopman operator). Because each mode carries an intrinsic temporal behavior, DMD differs from dimensionality reduction methods such as principal component analysis (PCA), which computes orthogonal modes that lack predetermined temporal behaviors. Because its modes are not orthogonal, DMD-based representations can be less parsimonious than those generated by PCA. However, they can also be more physically meaningful, because each mode is associated with a damped (or driven) sinusoidal behavior in time.


Overview

Dynamic mode decomposition was first introduced by Schmid as a numerical procedure for extracting dynamical features from flow data.[3]

The data takes the form of a snapshot sequence

$$V_1^N = \{v_1, v_2, \dots, v_N\},$$

where $v_i \in \mathbb{R}^M$ is the $i$-th snapshot of the flow field, and $V_1^N \in \mathbb{R}^{M \times N}$ is a data matrix whose columns are the individual snapshots. These snapshots are assumed to be related via a linear mapping $A$ that defines a linear dynamical system

$$v_{i+1} = A v_i,$$

that remains approximately the same over the duration of the sampling period. Written in matrix form, this implies that

$$V_2^N = A V_1^{N-1} + r e_{N-1}^T,$$

where $r$ is the vector of residuals that accounts for behaviors that cannot be described completely by $A$, $e_{N-1} = \{0, 0, \dots, 1\} \in \mathbb{R}^{N-1}$, $V_1^{N-1} = \{v_1, v_2, \dots, v_{N-1}\}$, and $V_2^N = \{v_2, v_3, \dots, v_N\}$. Regardless of the approach, the output of DMD is the eigenvalues and eigenvectors of $A$, which are referred to as the DMD eigenvalues and DMD modes respectively.
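As a concrete illustration of this setup, the shifted data matrices can be assembled from a snapshot array as in the following minimal NumPy sketch; the array sizes and the random placeholder data are hypothetical stand-ins for real flow-field snapshots.

```python
import numpy as np

# Hypothetical snapshot data: M spatial points, N snapshots, with V[:, i]
# holding the i-th snapshot v_i as a column.
M, N = 1000, 50
rng = np.random.default_rng(0)
V = rng.standard_normal((M, N))  # placeholder for real flow-field data

# Shifted data matrices used throughout this article.
V1 = V[:, :-1]   # V_1^{N-1} = [v_1, ..., v_{N-1}]
V2 = V[:, 1:]    # V_2^{N}   = [v_2, ..., v_N]

# DMD seeks a linear operator A such that V2 ≈ A @ V1.
```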

Algorithm

There are two methods for obtaining these eigenvalues and modes. The first is Arnoldi-like, which is useful for theoretical analysis due to its connection with Krylov methods. The second is a singular value decomposition (SVD) based approach that is more robust to noise in the data and to numerical errors.

The Arnoldi approach

In fluids applications, the size of a snapshot, $M$, is assumed to be much larger than the number of snapshots $N$, so there are many equally valid choices of $A$. The original DMD algorithm picks $A$ so that each of the snapshots in $V_2^N$ can be expressed as a linear combination of the snapshots in $V_1^{N-1}$. Because most of the snapshots appear in both data sets, this representation is error free for all snapshots except $v_N$, which is written as

$$v_N = a_1 v_1 + a_2 v_2 + \dots + a_{N-1} v_{N-1} + r = V_1^{N-1} a + r,$$

where $a$ is a set of coefficients DMD must identify and $r$ is the residual. In total,

$$V_2^N = A V_1^{N-1} = V_1^{N-1} S + r e_{N-1}^T,$$

where $S$ is the companion matrix

$$S = \begin{pmatrix} 0 & 0 & \cdots & 0 & a_1 \\ 1 & 0 & \cdots & 0 & a_2 \\ 0 & 1 & \cdots & 0 & a_3 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & a_{N-1} \end{pmatrix}.$$

The vector $a$ can be computed by solving a least squares problem, which minimizes the overall residual. In particular, if we take the QR decomposition of $V_1^{N-1} = QR$, then $a = R^{-1} Q^T v_N$.

In this form, DMD is a type of Arnoldi method, and therefore the eigenvalues of $S$ are approximations of the eigenvalues of $A$. Furthermore, if $y$ is an eigenvector of $S$, then $V_1^{N-1} y$ is an approximate eigenvector of $A$. The reason an eigendecomposition is performed on $S$ rather than $A$ is that $S$ is much smaller than $A$, so the computational cost of DMD is determined by the number of snapshots rather than the size of a snapshot.
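A minimal NumPy sketch of this Arnoldi-like variant is given below; it assumes the column-wise snapshot layout used above and is meant only to illustrate the companion-matrix construction, not to serve as a reference implementation.

```python
import numpy as np

def dmd_arnoldi(V):
    """Arnoldi-like DMD via the companion matrix S (illustrative sketch)."""
    V1, V2 = V[:, :-1], V[:, 1:]
    v_last = V[:, -1]

    # Least-squares coefficients a from the QR decomposition of V1:
    # v_N ≈ V1 a, so a = R^{-1} Q^T v_N.
    Q, R = np.linalg.qr(V1)
    a = np.linalg.solve(R, Q.conj().T @ v_last)

    # Companion matrix: ones on the subdiagonal, coefficients a in the last column.
    n = V1.shape[1]
    S = np.zeros((n, n), dtype=complex)
    S[1:, :-1] = np.eye(n - 1)
    S[:, -1] = a

    # Eigenvalues of S approximate those of A; the modes are V1 @ y.
    eigvals, eigvecs = np.linalg.eig(S)
    modes = V1 @ eigvecs
    return eigvals, modes
```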

The SVD-based approach

Instead of computing the companion matrix $S$, the SVD-based approach yields the matrix $\tilde S$ that is related to $A$ via a similarity transform. To do this, assume we have the SVD of $V_1^{N-1} = U \Sigma W^T$. Then

$$V_2^N = A V_1^{N-1} + r e_{N-1}^T = A U \Sigma W^T + r e_{N-1}^T.$$

Equivalent to the assumption made by the Arnoldi-based approach, we choose $A$ such that the snapshots in $V_2^N$ can be written as linear superpositions of the columns of $U$, which is equivalent to requiring that they can be written as superpositions of POD modes. With this restriction, minimizing the residual requires that it is orthogonal to the POD basis (i.e., $U^T r = 0$). Then multiplying both sides of the equation above by $U^T$ yields $U^T V_2^N = U^T A U \Sigma W^T$, which can be manipulated to obtain

$$\tilde S = U^T A U = U^T V_2^N W \Sigma^{-1}.$$

Because $A$ and $\tilde S$ are related via a similarity transform, the eigenvalues of $\tilde S$ are eigenvalues of $A$, and if $y$ is an eigenvector of $\tilde S$, then $U y$ is an eigenvector of $A$.

In summary, the SVD-based approach is as follows:

  1. Split the time series of data in $V_1^N$ into the two matrices $V_1^{N-1}$ and $V_2^N$.
  2. Compute the SVD of $V_1^{N-1} = U \Sigma W^T$.
  3. Form the matrix $\tilde S = U^T V_2^N W \Sigma^{-1}$, and compute its eigenvalues $\lambda_i$ and eigenvectors $y_i$.
  4. The $i$-th DMD eigenvalue is $\lambda_i$ and the $i$-th DMD mode is $U y_i$.

The advantage of the SVD-based approach over the Arnoldi-like approach is that noise in the data and numerical truncation issues can be compensated for by truncating the SVD of $V_1^{N-1}$. As noted in [3], accurately computing more than the first couple of modes and eigenvalues can be difficult on experimental data sets without this truncation step; a minimal code sketch of the full procedure, including this optional truncation, is given below.
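The four steps above translate almost directly into NumPy. The following sketch is only illustrative; the function name dmd_svd and the rank parameter are hypothetical choices, not part of any standard library.

```python
import numpy as np

def dmd_svd(V, rank=None):
    """SVD-based DMD (illustrative sketch of steps 1-4 above)."""
    # 1. Split the snapshot matrix into the shifted data matrices.
    V1, V2 = V[:, :-1], V[:, 1:]

    # 2. Reduced SVD of V1, optionally truncated to suppress noise.
    U, s, Wh = np.linalg.svd(V1, full_matrices=False)
    if rank is not None:
        U, s, Wh = U[:, :rank], s[:rank], Wh[:rank, :]

    # 3. S_tilde = U* V2 W Sigma^{-1}, followed by its eigendecomposition.
    S_tilde = U.conj().T @ V2 @ Wh.conj().T @ np.diag(1.0 / s)
    eigvals, eigvecs = np.linalg.eig(S_tilde)

    # 4. The i-th DMD eigenvalue is eigvals[i]; the i-th DMD mode is U @ eigvecs[:, i].
    modes = U @ eigvecs
    return eigvals, modes
```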

Theoretical and algorithmic advancements

Since its inception in 2010, a considerable amount of work has focused on understanding and improving DMD. One of the first analyses of DMD by Rowley et al. [4] established the connection between DMD and the Koopman operator, and helped to explain the output of DMD when applied to nonlinear systems. Since then, a number of modifications have been developed that either strengthen this connection further or enhance the robustness and applicability of the approach.

In addition to these developments, similar application-specific techniques have been devised. For example, like DMD, Prony's method represents a signal as the superposition of damped sinusoids. In climate science, linear inverse modeling is also strongly connected with DMD.[20] For a more comprehensive list, see Tu et al.[7]

Examples

Trailing edge of a profile

Fig. 1: Trailing-edge vortices (entropy)

The wake of an obstacle in the flow may develop a Kármán vortex street. Fig. 1 shows the shedding of a vortex behind the trailing edge of a profile. The DMD analysis was applied to 90 sequential entropy fields and yielded the approximated eigenvalue spectrum depicted below. The analysis was applied to the numerical results, without referring to the governing equations. The profile is seen in white. The white arcs are processor boundaries, since the computation was performed on a parallel computer using different computational blocks.

Fig. 2: DMD spectrum

Roughly a third of the spectrum was highly damped (large, negative real part) and is not shown. The dominant shedding mode is shown in the following pictures. The image on the left is the real part of the eigenvector, the image on the right the imaginary part.

[Figure: real part (left) and imaginary part (right) of the dominant shedding mode]

Again, the entropy eigenvector is shown in this picture. The acoustic content of the same mode is seen in the bottom half of the next plot; the top half corresponds to the entropy mode as above.

[Figure: entropy (top) and acoustic (bottom) content of the dominant shedding mode]

Synthetic example of a traveling pattern

The DMD analysis assumes a pattern of the form

$$q(x_1, x_2, x_3, \ldots) = e^{\omega x_1}\, \hat q(x_2, x_3, \ldots),$$

where $x_1$ is any of the independent variables of the problem, but has to be selected in advance. Take for example the traveling pattern

$$q(x, y, t) = e^{-(y/b)^2}\, \mathrm{Re}\!\left\{ e^{i(kx - \omega t)} \right\},$$

with the time $t$ as the preselected exponential factor.

A sample is given in the following figure for a fixed choice of the angular frequency $\omega$, the wavenumber $k$, and the width $b$. The left picture shows the pattern without noise, the right with noise added. The amplitude of the random noise is the same as that of the pattern.

[Figure: synthetic pattern without (left) and with (right) added noise]

A DMD analysis is performed with 21 synthetically generated fields sampled at a fixed time interval $\Delta t$, which limits the analysis to frequencies below the Nyquist limit $1/(2\Delta t)$.
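A sketch of how such a synthetic test might be set up is shown below. The parameter values, grid sizes, and noise level are illustrative only (the exact values used for the original figures are not reproduced here), and dmd_svd refers to the SVD-based sketch given earlier.

```python
import numpy as np

# Illustrative (not original) parameters of the traveling pattern.
k, omega, b = 2 * np.pi / 0.1, 2 * np.pi * 10.0, 0.2
nx, ny, n_snapshots = 120, 80, 21
dt = 0.01                                   # assumed sampling interval

x = np.linspace(0.0, 1.0, nx)
y = np.linspace(-0.5, 0.5, ny)
X, Y = np.meshgrid(x, y)

rng = np.random.default_rng(1)
fields = []
for i in range(n_snapshots):
    clean = np.exp(-(Y / b) ** 2) * np.cos(k * X - omega * i * dt)
    noise = rng.uniform(-1.0, 1.0, size=clean.shape)  # noise amplitude ~ pattern amplitude
    fields.append((clean + noise).ravel())

V = np.stack(fields, axis=1)   # snapshot matrix, one flattened field per column
# eigvals, modes = dmd_svd(V, rank=10)   # using the SVD-based sketch above
```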

[Figure: DMD spectrum of the synthetic data]

The spectrum is symmetric and shows three almost undamped modes (small negative real part), whereas the other modes are heavily damped. The real eigenvalue corresponds to the mean of the field, whereas the complex-conjugate pair corresponds to the imposed traveling pattern, whose frequency is recovered with a relative error of about −1/1000. Increasing the noise to 10 times the signal value yields about the same error. The real and imaginary parts of one of the latter two eigenmodes are depicted in the following figure.

[Figure: real and imaginary parts of the detected eigenmode]
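The damping rates and frequencies discussed above follow from mapping the discrete-time DMD eigenvalues $\lambda_i$ to continuous-time exponents $\omega_i = \ln(\lambda_i)/\Delta t$, whose real parts are growth or decay rates and whose imaginary parts are angular frequencies. A minimal sketch of this conversion (the function name is an illustrative choice, not a standard API) is:

```python
import numpy as np

def continuous_spectrum(eigvals, dt):
    """Convert discrete-time DMD eigenvalues to growth rates and frequencies."""
    omegas = np.log(eigvals.astype(complex)) / dt
    growth_rates = omegas.real                # negative values indicate damping
    frequencies = omegas.imag / (2 * np.pi)   # in Hz when dt is in seconds
    return growth_rates, frequencies

# Example: growth, freq = continuous_spectrum(eigvals, dt=0.01)
```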

See also

Several other decompositions of experimental data exist. If the governing equations are available, an eigenvalue decomposition might be feasible.


References

  1. Schmid, Peter J; Sesterhenn, Joern (28 July 2008). "Dynamic mode decomposition of numerical and experimental data". Bulletin of the American Physical Society, Sixty-First Annual Meeting of the APS Division of Fluid Dynamics. 53 (15). Retrieved 1 August 2023.
  2. Schmid, Peter J (10 August 2010). "Dynamic mode decomposition of numerical and experimental data". Journal of Fluid Mechanics. 656: 5–28. doi:10.1017/S0022112010001217. S2CID 11334986. Retrieved 1 August 2023.
  3. P.J. Schmid. "Dynamic mode decomposition of numerical and experimental data." Journal of Fluid Mechanics 656.1 (2010): 5–28.
  4. C.W. Rowley, I Mezic, S. Bagheri, P. Schlatter, and D.S. Henningson, "Spectral analysis of nonlinear flows." Journal of Fluid Mechanics 641 (2009): 85-113
  5. K.K. Chen, J.H. Tu, and C.W. Rowley, "Variants of dynamic mode decomposition: boundary condition, Koopman, and Fourier analyses." Journal of Nonlinear Science 22 (2012): 887-915.
  6. A. Wynn, D. S. Pearson, B. Ganapathisubramani and P. J. Goulart, "Optimal mode decomposition for unsteady flows." Journal of Fluid Mechanics 733 (2013): 473-503
  7. Tu, Rowley, Luchtenburg, Brunton, and Kutz (December 2014). "On Dynamic Mode Decomposition: Theory and Applications". Journal of Computational Dynamics. 1 (2): 391–421. arXiv:1312.0041. doi:10.3934/jcd.2014.1.391. S2CID 46419148.
  8. M.R. Jovanovic, P.J. Schmid, and J.W. Nichols, "Sparsity-promoting dynamic mode decomposition." Physics of Fluids 26 (2014)
  9. J.N. Kutz, X. Fu, and S.L. Brunton, "Multi-resolution dynamic mode decomposition." arXiv preprint arXiv:1506.00564 (2015).
  10. M.O. Williams , I.G. Kevrekidis, C.W. Rowley, "A Data–Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition." Journal of Nonlinear Science 25 (2015): 1307-1346.
  11. Li, Qianxiao; Dietrich, Felix; Bollt, Erik M.; Kevrekidis, Ioannis G (2018). "Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator". Chaos: An Interdisciplinary Journal of Nonlinear Science. 27 (10): 103111. arXiv: 1707.00225 . doi:10.1063/1.4993854. PMID   29092410. S2CID   41957686.
  12. Gulina, Marvyn; Mauroy, Alexandre (2021). "Two methods to approximate the Koopman operator with a reservoir computer". Chaos: An Interdisciplinary Journal of Nonlinear Science. 31 (2): 023116. arXiv: 2008.10263 . Bibcode:2021Chaos..31b3116G. doi:10.1063/5.0026380. PMID   33653036. S2CID   221266743.
  13. Colbrook, Matthew J.; Townsend, Alex (2023-07-27). "Rigorous data‐driven computation of spectral properties of Koopman operators for dynamical systems". Communications on Pure and Applied Mathematics. 77: 221–283. arXiv: 2111.14889 . doi: 10.1002/cpa.22125 . ISSN   0010-3640.
  14. Colbrook, Matthew J.; Ayton, Lorna J.; Szőke, Máté (2023-01-17). "Residual dynamic mode decomposition: robust and verified Koopmanism". Journal of Fluid Mechanics. 955: A21. arXiv: 2205.09779 . doi: 10.1017/jfm.2022.1052 . ISSN   0022-1120.
  15. Baddoo, Peter J.; Herrmann, Benjamin; McKeon, Beverley J.; Nathan Kutz, J.; Brunton, Steven L. (2023-03-01). "Physics-informed dynamic mode decomposition". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 479 (2271). doi: 10.1098/rspa.2022.0576 . ISSN   1364-5021.
  16. Colbrook, Matthew J. (2023-06-30). "The mpEDMD Algorithm for Data-Driven Computations of Measure-Preserving Dynamical Systems". SIAM Journal on Numerical Analysis. 61 (3): 1585–1608. arXiv: 2209.02244 . doi:10.1137/22M1521407. ISSN   0036-1429.
  17. J.L. Proctor, S.L. Brunton, and J.N. Kutz, "Dynamic mode decomposition with control." arXiv preprint arXiv:1409.6358 (2014).
  18. M.S. Hemati, C.W. Rowley, E.A. Deem, and L.N. Cattafesta, "De-Biasing the Dynamic Mode Decomposition for Applied Koopman Spectral Analysis of Noisy Datasets." arXiv preprint arXiv:1502.03854 (2015).
  19. Taylor-King, Jake P.; Riseth, Asbjørn N.; Macnair, Will; Claassen, Manfred (2020-01-10). "Dynamic distribution decomposition for single-cell snapshot time series identifies subpopulations and trajectories during iPSC reprogramming". PLOS Computational Biology. 16 (1): e1007491. Bibcode:2020PLSCB..16E7491T. doi: 10.1371/journal.pcbi.1007491 . ISSN   1553-7358. PMC   6953770 . PMID   31923173.
  20. Penland, Cécile; Magorian, Theresa (1993). "Prediction of Niño 3 Sea Surface Temperatures Using Linear Inverse Modeling". J. Climate. 6 (6): 1067. Bibcode:1993JCli....6.1067P. doi:10.1175/1520-0442(1993)006<1067:PONSST>2.0.CO;2.