Biochemical systems theory

Biochemical systems theory is a mathematical modelling framework for biochemical systems, based on ordinary differential equations (ODEs), in which biochemical processes are represented using power-law expansions in the variables of the system.

This framework, which became known as Biochemical Systems Theory, has been developed since the 1960s by Michael Savageau, Eberhard Voit and others for the systems analysis of biochemical processes.[1] According to Cornish-Bowden (2007), they "regarded this as a general theory of metabolic control, which includes both metabolic control analysis and flux-oriented theory as special cases".[2]

Representation

The dynamics of a species is represented by a differential equation with the structure:

$$\frac{dX_i}{dt} = \sum_{j=1}^{n_f} \mu_{ij}\,\gamma_j \prod_{k=1}^{n_d} X_k^{f_{jk}}$$

where $X_i$ represents one of the $n_d$ variables of the model (metabolite concentrations, protein concentrations or levels of gene expression) and $j$ indexes the $n_f$ biochemical processes affecting the dynamics of the species. The remaining symbols are two kinds of parameters: $\mu_{ij}$ (stoichiometric coefficients) encode the network structure, while $\gamma_j$ (rate constants) and $f_{jk}$ (kinetic orders) define the kinetics of the system.
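To make the representation concrete, here is a minimal numerical sketch of a model in this form. The two-species pathway, its stoichiometry and all parameter values are invented for illustration; the arrays `gamma`, `f` and `mu` play the roles of $\gamma_j$, $f_{jk}$ and $\mu_{ij}$ above.

```python
# Minimal power-law model sketch: constant input -> X0 -> X1 -> degradation.
# All parameter values are illustrative, not taken from any real pathway.
import numpy as np
from scipy.integrate import solve_ivp

gamma = np.array([2.0, 1.5, 1.0])  # rate constants gamma_j, one per process
# Kinetic orders f[j, k]: exponent of variable X_k in the rate of process j.
f = np.array([[0.0, 0.0],   # process 0: constant input flux
              [0.8, 0.0],   # process 1: converts X0 to X1 (order 0.8 in X0)
              [0.0, 0.5]])  # process 2: degrades X1 (order 0.5 in X1)
# Stoichiometric coefficients mu[i, j]: net effect of process j on species i.
mu = np.array([[1.0, -1.0,  0.0],
               [0.0,  1.0, -1.0]])

def rhs(t, X):
    # Power-law rate of each process: v_j = gamma_j * prod_k X_k**f_jk
    v = gamma * np.prod(X ** f, axis=1)
    # dX_i/dt = sum_j mu_ij * v_j
    return mu @ v

sol = solve_ivp(rhs, (0.0, 20.0), y0=[0.1, 0.1])
print(sol.y[:, -1])  # concentrations at t = 20, approaching steady state
```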

The principal difference between power-law models and other ODE models of biochemical systems is that the kinetic orders can be non-integer numbers. A kinetic order can even be negative when inhibition is modelled. Power-law models therefore have greater flexibility to reproduce the non-linearity of biochemical systems.
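For example, an inhibitor $I$ can enter a rate law with a negative kinetic order, so that the rate falls as $I$ rises, while a fractional order such as 0.8 captures a non-linear dependence on a substrate $S$. The snippet below uses invented parameter values purely to illustrate the effect.

```python
# Illustrative power-law rate with a fractional and a negative kinetic order:
# v = gamma * S**0.8 * I**(-0.4), so increasing the inhibitor I lowers v.
gamma = 1.2  # rate constant (illustrative value)

def rate(S, I):
    return gamma * S**0.8 * I**(-0.4)

print(rate(1.0, 1.0))  # 1.2
print(rate(1.0, 4.0))  # ~0.69: fourfold more inhibitor markedly lowers the rate
```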

Models based on power-law expansions have been used for more than three decades to model and analyse several kinds of biochemical systems, including metabolic networks, gene regulatory networks and, more recently, cell signalling pathways.

See also

Metabolic control analysis
Elasticity coefficient
Rate-limiting step
Eberhard Voit
Stefan Schuster

References
