In biochemistry, elementary modes [1] may be considered minimal realizable flow patterns through a biochemical network that can sustain a steady state. This means that elementary modes cannot be decomposed further into simpler pathways. All possible flows through a network can be constructed from linear combinations of the elementary modes.
The set of elementary modes for a given network is unique (up to an arbitrary scaling factor). Given the fundamental nature of elementary modes in relation to uniqueness and non-decomposability, the term 'pathway' can be defined as an elementary mode. Note that the set of elementary modes will change as the set of expressed enzymes changes during transitions from one cell state to another. Mathematically, the set of elementary modes is defined as the set of flux vectors that satisfy the steady state condition
N v(s, p) = 0

where N is the stoichiometry matrix, v is the vector of reaction rates, s is the vector of steady-state floating (or internal) species, and p is the vector of system parameters.
An important condition is that the rate of each irreversible reaction must be non-negative, v_i ≥ 0.
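These two conditions can be checked numerically. The following sketch assumes NumPy and an illustrative one-species branched network (reaction v1 produces an internal species, reactions v2 and v3 consume it); the network itself is an assumption for illustration, not part of the formal definition:

```python
import numpy as np

# Illustrative stoichiometry matrix: one internal species,
# three reactions (v1 produces it; v2 and v3 each consume it).
# Rows are species, columns are reactions.
N = np.array([[1, -1, -1]])

# Candidate flux vector: 2 units in, split equally between branches.
v = np.array([2.0, 1.0, 1.0])

# Steady-state condition: N v = 0.
print(np.allclose(N @ v, 0))   # True

# Irreversibility: all three steps are irreversible, so each rate
# must be non-negative.
print(bool(np.all(v >= 0)))    # True
```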
A more formal definition is given by: [2]
An elementary mode, e_i, is defined as a vector of fluxes such that the following three conditions are satisfied:
1. Steady state: N e_i = 0.
2. Thermodynamic feasibility: every entry of e_i corresponding to an irreversible reaction is non-negative.
3. Non-decomposability: e_i cannot be written as a combination of two other flux vectors that satisfy the first two conditions while using only reactions already active in e_i.
Consider a simple branched pathway with all three steps irreversible. Such a pathway admits two elementary modes, which are indicated by the thickened (or red) reaction lines.
Because both v2 and v3 are irreversible, an elementary mode lying on only these two reactions is not possible, since it would mean one reaction going against its thermodynamic direction. Each mode in this system satisfies the three conditions described above. The first condition is steady state: for each mode e_i, it must be true that N e_i = 0.
Algebraically, the two modes are given by:

e1 = (1, 1, 0)^T and e2 = (1, 0, 1)^T

where the entries correspond to the rates through reactions v1, v2, and v3, respectively.
By substituting each of these vectors into N e_i = 0, it is easy to show that condition one is satisfied. For condition two, we must ensure that all irreversible reactions have non-negative entries in the corresponding elements of the elementary modes. Since all three reactions in the branch are irreversible and no entry in either elementary mode is negative, condition two is satisfied.
Finally, to satisfy condition three, we must ask whether we can decompose the two elementary modes into other paths that can sustain a steady state while using the same non-zero entries in the elementary mode. In this example, it is impossible to decompose the elementary modes any further without disrupting the ability to sustain a steady state. Therefore, with all three conditions satisfied, we can conclude that the two vectors shown above are elementary modes.
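The first two conditions can be reproduced numerically. A minimal sketch, assuming NumPy and the stoichiometry matrix N = [[1, -1, -1]] for the branched pathway (columns ordered v1, v2, v3, an assumed labelling):

```python
import numpy as np

# Branched pathway: v1 produces the internal species, v2 and v3 consume it.
N = np.array([[1, -1, -1]])

# The two elementary modes (entries ordered v1, v2, v3).
e1 = np.array([1, 1, 0])
e2 = np.array([1, 0, 1])

for e in (e1, e2):
    # Condition one: steady state, N e = 0.
    assert np.allclose(N @ e, 0)
    # Condition two: irreversible steps carry non-negative flux.
    assert np.all(e >= 0)

print("both modes satisfy conditions one and two")
```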
All possible flows through a network can be constructed from linear combinations of the elementary modes, that is:

v = Σ_i λ_i e_i
such that the entire space of flows through a network can be described. Each λ_i must be greater than or equal to zero to ensure that irreversible steps aren't inadvertently made to go in the reverse direction. For example, v = 2 e1 + e2 = (3, 2, 1)^T is a possible steady-state flow in the branched pathway.
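Continuing the NumPy sketch of the branched pathway (the mode vectors and λ values are illustrative assumptions), any non-negative combination of the two modes is itself a steady-state flow:

```python
import numpy as np

N = np.array([[1, -1, -1]])   # branched pathway stoichiometry
e1 = np.array([1, 1, 0])      # mode through the first outflow branch
e2 = np.array([1, 0, 1])      # mode through the second outflow branch

# Illustrative non-negative weights.
lam1, lam2 = 2.0, 3.0
v = lam1 * e1 + lam2 * e2

print(v)                       # [5. 2. 3.]
print(np.allclose(N @ v, 0))   # True: still a steady state
```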
If one of the outflow steps in the simple branched pathway is made reversible, an additional elementary mode becomes available, representing the flow between the two outflow branches. An additional mode emerges because, with only the first two modes, it is impossible to represent a flow between the two branches: the scaling factors, λ_i, cannot be negative (which would be required to reverse the flow).
The Wikipedia page Metabolic pathway defines a pathway as "a linked series of chemical reactions occurring within a cell". This means that any sequence of reactions can be labeled a metabolic pathway. However, as metabolism was being uncovered, groups of reactions were assigned specific labels, such as glycolysis, the Krebs cycle, or serine biosynthesis. Often the categorization was based on common chemistry or on the identification of an input and an output. For example, serine biosynthesis starts at 3-phosphoglycerate and ends at serine. This is a somewhat ad hoc means for defining pathways, particularly when pathways are dynamic structures that change as environmental conditions result in changes in gene expression. For example, the Krebs cycle is often not cyclic as depicted in textbooks. In E. coli and other bacteria, it is only cyclic during aerobic growth on acetate or fatty acids. [3] Instead, under anaerobiosis, its enzymes function as two distinct biosynthetic pathways producing succinyl-CoA and α-ketoglutarate.
It has therefore been proposed [4] to define a pathway as either a single elementary mode or some combination of elementary modes. The added advantage is that the set of elementary modes is unique and non-decomposable into simpler pathways. A single elementary mode can therefore be thought of as an elementary pathway.
Elementary modes, therefore, provide an unambiguous definition of a pathway.
Condition three relates to the non-decomposability of an elementary mode and is partly what makes elementary modes interesting. The two other important features, as indicated before, are pathway uniqueness and thermodynamic plausibility. Decomposition implies that it is possible to represent a mode as a combination of two or more other modes. For example, a mode e_k might be composed from two other modes, e_i and e_j:

e_k = λ_i e_i + λ_j e_j
If a mode can be decomposed, does it mean that the mode is not an elementary mode? Condition three provides a rule to determine whether a decomposition means that a given mode is an elementary mode or not. If it is only possible to decompose a given mode by introducing enzymes that are not used in the mode, then the mode is elementary. That is, is there more than one way to generate a pathway (i.e., something that can sustain a steady state) with only the enzymes currently used in the mode? If so, then the mode is not elementary. To illustrate this subtle condition, consider the pathway shown below.
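The support test in condition three can be sketched in code. The mode vectors below are hypothetical, chosen only to illustrate the rule: a decomposition rules out elementarity only if both component modes use reactions already active in the mode being tested.

```python
import numpy as np

def support(e):
    """Indices of reactions carrying non-zero flux in mode e."""
    return set(np.flatnonzero(e))

def decomposition_disqualifies(m, a, b):
    """True if m = a + b and both a and b use only reactions
    already active in m; such a decomposition would mean m is
    not an elementary mode."""
    return (np.allclose(m, a + b)
            and support(a) <= support(m)
            and support(b) <= support(m))

# Hypothetical mode vectors, for illustration only.
m = np.array([1, 1, 0, 1])
a = np.array([1, 0, 1, 0])    # activates reaction index 2, absent from m
b = np.array([0, 1, -1, 1])

print(np.allclose(m, a + b))                # True: m does decompose as a + b
print(decomposition_disqualifies(m, a, b))  # False: a new reaction was introduced
```

Because the decomposition introduces a reaction that m itself does not use, it does not disqualify m from being elementary, mirroring the argument made for the glycolysis example below.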
This pathway represents a stylized rendition of glycolysis. Steps three and six are reversible and correspond to triose phosphate isomerase and glycerol 3-phosphate dehydrogenase, respectively.
The network has four elementary flux modes, which are shown in the figure below.
The elementary flux mode vectors are shown below:
Note that it is possible to have negative entries in the set of elementary modes because they correspond to the reversible steps. Of interest is the observation that the fourth vector, e4, can be formed from the sum of the first and second vectors, e4 = e1 + e2. This suggests that the fourth vector is not an elementary mode.
However, this decomposition only works because it introduces a new enzyme, triose phosphate isomerase (step three), which is not used in e4. It is, in fact, impossible to decompose e4 into pathways that can sustain a steady state using only the five steps employed in the elementary mode. We conclude, therefore, that e4 is an elementary mode.