Rybicki Press algorithm

[Figure: extended sparse matrix arising from a 10 × 10 semi-separable matrix whose semi-separable rank is 4.]

The Rybicki–Press algorithm is a fast algorithm for inverting a matrix whose entries are given by $A_{ij} = \exp(-a\,|t_i - t_j|)$, where $a \in \mathbb{R}$ [1] and where the $t_i$ are sorted in order. [2] The key observation behind the Rybicki–Press algorithm is that the inverse of such a matrix is always tridiagonal (a matrix with nonzero entries only on the main diagonal and the two adjoining ones), and tridiagonal systems of equations can be solved efficiently, in fact in linear time. [1] It is a computational optimization of a general set of statistical methods developed to determine whether two noisy, irregularly sampled data sets are, in fact, dimensionally shifted representations of the same underlying function. [3] [4] The most common use of the algorithm is in the detection of periodicity in astronomical observations[verification needed], such as for detecting quasars. [4]
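To make the key observation concrete, the following minimal NumPy sketch (not taken from the references; the sample times and the decay rate $a$ are made up) builds such a matrix and checks numerically that its inverse is tridiagonal:

```python
import numpy as np

# Sorted, irregularly sampled times t_i and a decay rate a (illustrative values)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, size=8))
a = 1.3

# Matrix with entries A_ij = exp(-a * |t_i - t_j|)
A = np.exp(-a * np.abs(t[:, None] - t[None, :]))

# The inverse is tridiagonal: entries more than one step off the main
# diagonal vanish up to floating-point round-off.
Ainv = np.linalg.inv(A)
off_tridiagonal = np.abs(np.subtract.outer(np.arange(t.size), np.arange(t.size))) > 1
print(np.max(np.abs(Ainv[off_tridiagonal])))  # numerically zero (round-off level)
```

A production implementation would never form the dense inverse as above; it would instead write down the tridiagonal inverse analytically and solve the resulting system in linear time.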

The method has been extended to the generalized Rybicki–Press algorithm for inverting matrices with entries of the form $A_{ij} = \sum_{k=1}^{p} a_k \exp(-\beta_k |t_i - t_j|)$. [2] The key observation in the generalized Rybicki–Press (GRP) algorithm is that the matrix $A$ is a semi-separable matrix of rank $p$ (that is, a matrix whose upper half, not including the main diagonal, is that of some matrix of rank $p$, and whose lower half is also that of some possibly different rank-$p$ matrix [2]), and so it can be embedded into a larger band matrix (see the figure above), whose sparsity structure can be leveraged to reduce the computational complexity. Because the matrix $A$ has semi-separable rank $p$, the computational complexity of solving the linear system $Ax = b$ or of calculating the determinant of the $n \times n$ matrix scales as $\mathcal{O}(p^2 n)$, thereby making it attractive for large matrices. [2]
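As an illustration of the semi-separable structure the GRP algorithm exploits, the sketch below (with arbitrary, made-up amplitudes $a_k$ and decay rates $\beta_k$) verifies that the strict upper triangle of a sum-of-exponentials matrix is a rank-$p$ outer product:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 5.0, size=6))
# Illustrative kernel parameters (assumed, not taken from the references): p = 2 terms
amps = np.array([0.7, 0.3])
betas = np.array([1.0, 2.5])

# A_ij = sum_k a_k exp(-beta_k * |t_i - t_j|)
A = sum(a * np.exp(-b * np.abs(t[:, None] - t[None, :]))
        for a, b in zip(amps, betas))

# Above the diagonal (t_i < t_j) each term factors as
#   a_k exp(-beta_k (t_j - t_i)) = [a_k exp(beta_k t_i)] * [exp(-beta_k t_j)],
# so the strict upper triangle of A agrees with a rank-p matrix U @ V.T.
U = amps[None, :] * np.exp(betas[None, :] * t[:, None])   # n x p
V = np.exp(-betas[None, :] * t[:, None])                  # n x p
print(np.allclose(np.triu(A, k=1), np.triu(U @ V.T, k=1)))  # True
```

(In practical implementations the factors are built from time differences rather than absolute times, to avoid overflow; the direct factorization above is only for illustration.)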

The fact that the matrix $A$ is semi-separable also forms the basis of the celerite library, [5] which is a library for fast and scalable Gaussian process regression in one dimension [6] with implementations in C++, Python, and Julia. The celerite method [6] also provides an algorithm for generating samples from a high-dimensional distribution. The method has found attractive applications in a wide range of fields,[which?] especially in astronomical data analysis. [7] [8]
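The following sketch is based on the Python interface described in the celerite documentation; [5] the time series and kernel parameters are purely illustrative. It shows the typical usage pattern: the kernel matrix is factorized and the Gaussian-process log-likelihood evaluated with cost that grows linearly in the number of data points for a fixed number of kernel terms.

```python
import numpy as np
import celerite
from celerite import terms

# Simulated irregularly sampled time series (illustrative values only)
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 10.0, size=200))
yerr = 0.1 * np.ones_like(t)
y = np.sin(t) + yerr * rng.standard_normal(t.size)

# A single "real" (exponential) kernel term, i.e. one exponential component
kernel = terms.RealTerm(log_a=0.0, log_c=0.0)

gp = celerite.GP(kernel)
gp.compute(t, yerr)          # factorize the kernel matrix (scales linearly in len(t))
print(gp.log_likelihood(y))  # marginal log-likelihood, also at linear cost
```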


References

  1. Rybicki, George B.; Press, William H. (1995). "Class of fast methods for processing irregularly sampled or otherwise inhomogeneous one-dimensional data". Physical Review Letters. 74 (7): 1060–1063. arXiv:comp-gas/9405004. Bibcode:1995PhRvL..74.1060R. doi:10.1103/PhysRevLett.74.1060. PMID 10058924. S2CID 17436268.
  2. Ambikasaran, Sivaram (2015-12-01). "Generalized Rybicki Press algorithm". Numerical Linear Algebra with Applications. 22 (6): 1102–1114. arXiv:1409.7852. doi:10.1002/nla.2003. ISSN 1099-1506. S2CID 1627477.
  3. Rybicki, George B.; Press, William H. (October 1992). "Interpolation, realization, and reconstruction of noisy, irregularly sampled data". The Astrophysical Journal. 398: 169. Bibcode:1992ApJ...398..169R. doi:10.1086/171845.
  4. MacLeod, C. L.; Brooks, K.; Ivezic, Z.; Kochanek, C. S.; Gibson, R.; Meisner, A.; Kozlowski, S.; Sesar, B.; Becker, A. C. (2011-02-10). "Quasar Selection Based on Photometric Variability". The Astrophysical Journal. 728 (1): 26. arXiv:1009.2081. Bibcode:2011ApJ...728...26M. doi:10.1088/0004-637X/728/1/26. ISSN 0004-637X. S2CID 28219978.
  5. "celerite — celerite 0.3.0 documentation". celerite.readthedocs.io. Retrieved 2018-04-05.
  6. Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth (2017). "Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series". The Astronomical Journal. 154 (6): 220. arXiv:1703.09710. Bibcode:2017AJ....154..220F. doi:10.3847/1538-3881/aa9332. ISSN 1538-3881. S2CID 88521913.
  7. Foreman-Mackey, Daniel (2018). "Scalable Backpropagation for Gaussian Processes using Celerite". Research Notes of the AAS. 2 (1): 31. arXiv:1801.10156. Bibcode:2018RNAAS...2...31F. doi:10.3847/2515-5172/aaaf6c. ISSN 2515-5172. S2CID 102481482.
  8. Parviainen, Hannu (2018). "Bayesian Methods for Exoplanet Science". Handbook of Exoplanets. Springer, Cham. pp. 1–24. arXiv:1711.03329. doi:10.1007/978-3-319-30648-3_149-1. ISBN 9783319306483.