Vector field reconstruction [1] is a method of creating a vector field from experimental or computer-generated data, usually with the goal of finding a differential equation model of the system.
A differential equation model is one that describes the values of dependent variables as they evolve in time or space by giving equations involving those variables and their derivatives with respect to some independent variables, usually time and/or space. An ordinary differential equation is one in which the system's dependent variables are functions of only one independent variable. Many physical, chemical, biological, and electrical systems are well described by ordinary differential equations.

Frequently, one may assume that a system is governed by differential equations without having exact knowledge of the influence of various factors on the state of the system. For instance, one may have an electrical circuit that in theory is described by a system of ordinary differential equations, but, due to the tolerance of resistors, variations of the supply voltage, or interference from outside influences, one does not know the exact parameters of the system. For some systems, especially those that support chaos, a small change in parameter values can cause a large change in the behaviour of the system, so an accurate model is extremely important. Therefore, it may be necessary to construct more exact differential equations by building them up based on the actual system performance rather than a theoretical model. Ideally, one would measure all the dynamical variables involved over an extended period of time, using many different initial conditions, then build or fine-tune a differential equation model based on these measurements.
In some cases, one may not know enough about the processes involved in a system to even formulate a model. In other cases, one may have access to only one dynamical variable for measurement, i.e., a scalar time series. In that case, the method of time-delay embedding or derivative coordinates must be used to obtain a large enough set of dynamical variables to describe the system.
In a nutshell, once a set of measurements of the system state over some period of time has been acquired, one finds the derivatives of these measurements, which together form a local vector field. One can then determine a global vector field consistent with this local field, usually by a least squares fit to the derivative data.
In the best possible case, one has data streams of measurements of all the system variables, equally spaced in time, say

$x_1(t_0),\ x_1(t_1),\ \ldots,\ x_1(t_n)$
$x_2(t_0),\ x_2(t_1),\ \ldots,\ x_2(t_n)$
$\ \vdots$
$x_k(t_0),\ x_k(t_1),\ \ldots,\ x_k(t_n)$

for $t_0 < t_1 < \cdots < t_n$, beginning at several different initial conditions. The task of finding a vector field, and thus a differential equation model, then consists of fitting functions, for instance a cubic spline, to the data to obtain a set of continuous-time functions $x_1(t), x_2(t), \ldots, x_k(t)$; computing the time derivatives $dx_1/dt, dx_2/dt, \ldots, dx_k/dt$ of these functions; and then making a least squares fit, using some sort of orthogonal basis functions (orthogonal polynomials, radial basis functions, etc.), to each component of the tangent vectors to find a global vector field. A differential equation can then be read off the global vector field.
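This pipeline can be sketched in a few lines of Python. The following is a minimal illustration, not part of the method as published: the sample data, the NumPy/SciPy routines, and the choice of a plain quadratic monomial basis (where an orthogonal basis would be preferred in practice) are all assumptions made for brevity.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative measured data: k = 2 variables sampled at equally spaced times.
t = np.linspace(0.0, 10.0, 200)
x1 = np.cos(t)            # stand-in for a measured time series
x2 = -np.sin(t)           # stand-in for a second measured series

# Step 1: fit continuous-time functions (cubic splines) to the samples.
splines = [CubicSpline(t, x) for x in (x1, x2)]

# Step 2: differentiate the splines to obtain the local vector field
# (tangent vectors) at the sample times.
derivs = np.column_stack([s.derivative()(t) for s in splines])

# Step 3: least squares fit of each tangent-vector component against basis
# functions of the state; here a quadratic monomial basis in (x1, x2).
A = np.column_stack([np.ones_like(t), x1, x2, x1**2, x1 * x2, x2**2])
coeffs, *_ = np.linalg.lstsq(A, derivs, rcond=None)

# Each column of `coeffs` gives one component of the global vector field,
# i.e. one right-hand side of the reconstructed ODE system. For this
# harmonic-oscillator data the fit should recover approximately
# dx1/dt = x2 and dx2/dt = -x1.
print(np.round(coeffs, 3))
```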
There are various methods of creating the basis functions for the least squares fit. The most common is the Gram–Schmidt process, which creates a set of orthogonal basis vectors that can then easily be normalized. This method begins by selecting any standard basis $\beta = \{v_1, v_2, \ldots, v_n\}$ and setting the first vector $u_1 = v_1$. Next, one sets $u_2 = v_2 - \operatorname{proj}_{u_1} v_2$. This process is repeated for all $n$ vectors, the general step being

$$u_k = v_k - \sum_{j=1}^{k-1} \operatorname{proj}_{u_j} v_k.$$

This creates a set of orthogonal basis vectors.
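A minimal NumPy sketch of the classical Gram–Schmidt process described above (the function name and test vectors are illustrative):

```python
import numpy as np

def gram_schmidt(V):
    """Orthogonalize the rows of V via classical Gram-Schmidt.

    Each output row u_k equals v_k minus its projections onto the
    previously computed u_1, ..., u_{k-1}.
    """
    U = []
    for v in V:
        u = v.astype(float).copy()
        for w in U:
            u -= (w @ v) / (w @ w) * w   # subtract proj_w(v)
        U.append(u)
    return np.array(U)

# Example: three linearly independent vectors in R^3.
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U = gram_schmidt(V)
print(np.round(U @ U.T, 10))  # off-diagonal zeros confirm orthogonality
```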
The reason for using an orthogonal basis rather than the standard basis arises from the least squares fitting done next. A least squares fit begins by assuming some functional form, in the case of reconstruction an $n$th-degree polynomial, and fitting the curve to the data by determining constant coefficients. The accuracy of the fit can be increased by increasing the degree of the polynomial being used. If a set of non-orthogonal basis functions is used, increasing the degree makes it necessary to recalculate all of the constant coefficients describing the fit. With an orthogonal set of basis functions, however, the coefficients already computed remain valid, and it is not necessary to recalculate them.
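This property can be demonstrated numerically. In the sketch below (the data, degrees, and names are illustrative assumptions), the monomial basis is orthogonalized against the sample points via a QR decomposition of the Vandermonde matrix; each coefficient then becomes an independent inner product with the data, so enlarging the basis leaves the earlier coefficients unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * t - t**3 + 0.01 * rng.standard_normal(t.size)  # noisy samples

def ls_coeffs(degree, orthogonal):
    """Least squares coefficients of a polynomial fit of the given degree."""
    A = np.vander(t, degree + 1, increasing=True)  # monomial basis 1, t, t^2, ...
    if orthogonal:
        # Orthonormalize the basis columns against the sample points.
        Q, _ = np.linalg.qr(A)
        return Q.T @ y         # each coefficient is an independent inner product
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Monomial basis: the low-order coefficients shift when the degree grows.
print(ls_coeffs(2, orthogonal=False)[:3])
print(ls_coeffs(5, orthogonal=False)[:3])

# Orthogonalized basis: the first three coefficients agree in both fits.
print(ls_coeffs(2, orthogonal=True)[:3])
print(ls_coeffs(5, orthogonal=True)[:3])
```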
Vector field reconstruction has several applications and many different approaches. Some mathematicians have used not only radial basis functions and polynomials to reconstruct a vector field, but also Lyapunov exponents and singular value decomposition. [2] Gouesbet and Letellier used a multivariate polynomial approximation together with least squares to reconstruct their vector field. This method was applied to the Rössler system and the Lorenz system, as well as to thermal lens oscillations.
The Rössler system, the Lorenz system, and thermal lens oscillations follow the differential equations of the standard system,

$$\frac{dX}{dt} = Y, \qquad \frac{dY}{dt} = Z, \qquad \frac{dZ}{dt} = F(X, Y, Z),$$

where $F(X,Y,Z)$ is known as the standard function. [3]
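As an illustration in the spirit of this approach (not the published procedure itself; the toy jerk system, its coefficients, and the polynomial degree are assumptions), a standard function $F$ can be estimated by a multivariate polynomial least squares fit to derivative coordinates computed from a scalar series:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy data: a linear "jerk" system d3x/dt3 = -a*z - b*y - c*x, whose
# standard function is F(X, Y, Z) = -c*X - b*Y - a*Z.
a, b, c = 0.5, 1.0, 0.3
rhs = lambda t, s: [s[1], s[2], -a * s[2] - b * s[1] - c * s[0]]
t = np.linspace(0.0, 30.0, 3000)
sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0, 0.0], t_eval=t,
                rtol=1e-10, atol=1e-12)
X = sol.y[0]                    # only the scalar series X is "measured"

# Derivative coordinates estimated numerically from X alone.
Y = np.gradient(X, t)
Z = np.gradient(Y, t)
dZ = np.gradient(Z, t)

# Design matrix of all monomials X^i * Y^j * Z^k with i + j + k <= 2.
cols = [X**i * Y**j * Z**k
        for i in range(3) for j in range(3 - i) for k in range(3 - i - j)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, dZ, rcond=None)
print(np.round(coef, 2))  # the weights on X, Y, Z should approach -c, -b, -a
```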
In some situations the model is not very efficient, and difficulties can arise if it has a large number of coefficients or exhibits a divergent solution. Nonautonomous differential equations, for example, produce such results. [4] In such cases, modifying the standard approach offers a better route to the further development of global vector field reconstruction.
Usually, the system being modelled in this way is a chaotic dynamical system, because chaotic systems explore a large part of the phase space and the estimate of the global dynamics based on the local dynamics will be better than with a system exploring only a small part of the space.
Frequently, one has only a single scalar time series measurement from a system known to have more than one degree of freedom. The time series may not even be from a system variable, but may instead be a function of all the variables, such as the temperature in a stirred tank reactor involving several chemical species. In this case, one must use the technique of delay coordinate embedding, [5] where a state vector consisting of the data at time t and several delayed versions of the data is constructed.
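A minimal sketch of such an embedding (the function name and parameter choices are illustrative):

```python
import numpy as np

def delay_embed(s, dim, tau):
    """Build delay-coordinate state vectors [s(t), s(t - tau), ...].

    s:   1-D scalar time series (equally spaced samples)
    dim: embedding dimension (number of delayed copies)
    tau: delay expressed in samples
    """
    n = len(s) - (dim - 1) * tau
    return np.column_stack([s[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
                            for k in range(dim)])

# Example: embed a scalar series in 3 dimensions with a delay of 5 samples.
s = np.sin(np.linspace(0.0, 20.0, 500))
states = delay_embed(s, dim=3, tau=5)
print(states.shape)  # (490, 3): state vectors for vector field reconstruction
```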
A comprehensive review of the topic is available in [6].