In mathematics and particularly in dynamical systems, an **initial condition**, in some contexts called a **seed value**,^{[1]: p. 160} is a value of an evolving variable at some point in time designated as the initial time (typically denoted *t* = 0). For a system of order *k* (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension *n* (that is, with *n* different evolving variables, which together can be denoted by an *n*-dimensional coordinate vector), generally *nk* initial conditions are needed in order to trace the system's variables forward through time.

In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time. In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called the initial value problem. A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons.
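As a minimal sketch of this forward iteration for a discrete-time system, the snippet below steps a second-order linear difference equation ahead from its seed values; the coefficients, seeds, and the `iterate` helper are illustrative, not taken from the text:

```python
# Iterate the second-order difference equation
#   x_t = 0.5*x_{t-1} + 0.3*x_{t-2}
# forward from two initial conditions (order k = 2, dimension n = 1,
# so nk = 2 seed values are needed).  Coefficients are illustrative.

def iterate(x0, x1, steps):
    """Return [x_0, x_1, ..., x_steps] by forward iteration."""
    xs = [x0, x1]
    for _ in range(steps - 1):
        xs.append(0.5 * xs[-1] + 0.3 * xs[-2])
    return xs

path = iterate(1.0, 2.0, 10)   # ten periods from the seeds x_0 = 1, x_1 = 2
```

Each new value depends only on the two most recent ones, which is why exactly two pieces of initial information suffice.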

A linear matrix difference equation of the homogeneous (having no constant term) form *X*_{t+1} = *AX*_{t} has closed form solution *X*_{t} = *A*^{t}*X*_{0} predicated on the vector *X*_{0} of initial conditions on the individual variables that are stacked into the vector; *X*_{0} is called the vector of initial conditions or simply the initial condition, and contains *nk* pieces of information, *n* being the dimension of the vector *X* and *k* = 1 being the number of time lags in the system. The initial conditions in this linear system do not affect the qualitative nature of the future behavior of the state variable *X*; that behavior is stable or unstable based on the eigenvalues of the matrix *A* but not based on the initial conditions.
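The point that stability depends on the eigenvalues of the matrix rather than on the initial condition can be checked numerically; in this sketch the 2×2 matrix and the seed vector are illustrative:

```python
import cmath

# For X_{t+1} = A X_t with a 2x2 matrix A, stability depends on the
# eigenvalues of A, not on the initial condition X_0.  Eigenvalues of a
# 2x2 matrix are the roots of lambda^2 - tr(A)*lambda + det(A) = 0.

def eigenvalues_2x2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def iterate(A, X0, steps):
    (a, b), (c, d) = A
    x, y = X0
    for _ in range(steps):
        x, y = a * x + b * y, c * x + d * y
    return x, y

A = [[0.5, 0.1], [0.2, 0.4]]          # illustrative matrix
lams = eigenvalues_2x2(0.5, 0.1, 0.2, 0.4)
stable = all(abs(l) < 1 for l in lams)  # both |eigenvalues| < 1
tail = iterate(A, (3.0, -2.0), 200)     # any X_0 decays when stable
```

Here both eigenvalues (0.6 and 0.3) lie inside the unit circle, so after many iterations the state is driven toward zero regardless of which seed vector was chosen.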

Alternatively, a dynamic process in a single variable *x* having multiple time lags is

*x*_{t} = *a*_{1}*x*_{t−1} + *a*_{2}*x*_{t−2} + ⋯ + *a*_{k}*x*_{t−k}.

Here the dimension is *n* = 1 and the order is *k*, so the necessary number of initial conditions to trace the system through time, either iteratively or via closed form solution, is *nk* = *k*. Again the initial conditions do not affect the qualitative nature of the variable's long-term evolution. The solution of this equation is found by using its characteristic equation

*λ*^{k} − *a*_{1}*λ*^{k−1} − *a*_{2}*λ*^{k−2} − ⋯ − *a*_{k−1}*λ* − *a*_{k} = 0

to obtain the latter's *k* solutions, which are the characteristic values *λ*_{1}, …, *λ*_{k}, for use in the solution equation

*x*_{t} = *c*_{1}*λ*_{1}^{t} + ⋯ + *c*_{k}*λ*_{k}^{t}.

Here the constants *c*_{1}, …, *c*_{k} are found by solving a system of *k* different equations based on this equation, each using one of *k* different values of *t* for which the specific initial condition *x*_{t} is known.
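That solve can be sketched concretely for a second-order case; the equation, its characteristic values, and the seeds below are illustrative (a 2×2 system solved by Cramer's rule):

```python
# Find the constants in x_t = c1*lam1**t + c2*lam2**t for the
# difference equation x_t = x_{t-1} + 2*x_{t-2}, whose characteristic
# equation lam**2 - lam - 2 = 0 has roots 2 and -1, using the two
# initial conditions x_0 and x_1.  Coefficients and seeds are illustrative.
lam1, lam2 = 2.0, -1.0
x0, x1 = 3.0, 0.0            # initial conditions

# The system:  c1 + c2 = x0   and   c1*lam1 + c2*lam2 = x1
det = lam2 - lam1
c1 = (x0 * lam2 - x1) / det
c2 = (x1 - x0 * lam1) / det

def closed_form(t):
    return c1 * lam1**t + c2 * lam2**t

def iterated(t):
    a, b = x0, x1            # x_{t-2}, x_{t-1}
    for _ in range(t - 1):
        a, b = b, b + 2 * a
    return b if t >= 1 else x0
```

The closed-form values and the directly iterated values agree, confirming that the *k* = 2 initial conditions pin down the constants uniquely.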

A differential equation system of the first order with *n* variables stacked in a vector *X* is

d*X*/d*t* = *AX*.

Its behavior through time can be traced with a closed form solution *X*(*t*) = e^{At}*X*(0) conditional on an initial condition vector *X*(0). The number of required initial pieces of information is the dimension *n* of the system times the order *k* = 1 of the system, or *n*. The initial conditions do not affect the qualitative behavior (stable or unstable) of the system.

A single *k*^{th} order linear equation in a single variable *x* is

d^{k}*x*/d*t*^{k} + *a*_{k−1} d^{k−1}*x*/d*t*^{k−1} + ⋯ + *a*_{1} d*x*/d*t* + *a*_{0}*x* = 0.

Here the number of initial conditions necessary for obtaining a closed form solution is the dimension *n* = 1 times the order *k*, or simply *k*. In this case the *k* initial pieces of information will typically not be different values of the variable *x* at different points in time, but rather the values of *x* and its first *k* − 1 derivatives, all at some point in time such as time zero. The initial conditions do not affect the qualitative nature of the system's behavior. The characteristic equation of this dynamic equation is *λ*^{k} + *a*_{k−1}*λ*^{k−1} + ⋯ + *a*_{1}*λ* + *a*_{0} = 0, whose solutions are the characteristic values *λ*_{1}, …, *λ*_{k}; these are used in the solution equation

*x*(*t*) = *c*_{1}e^{λ_{1}t} + ⋯ + *c*_{k}e^{λ_{k}t}.

This equation and its first *k* – 1 derivatives form a system of *k* equations that can be solved for the *k* parameters given the known initial conditions on *x* and its *k* – 1 derivatives' values at some time *t*.
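For a concrete second-order instance, the constants can be solved from *x*(0) and *x*′(0) and the result cross-checked by direct numerical integration; the particular equation, seeds, and the `rk4` helper below are illustrative:

```python
import math

# For x'' - 3x' + 2x = 0 (characteristic values 1 and 2), find the
# constants in x(t) = c1*e**t + c2*e**(2t) from the initial conditions
# x(0) and x'(0), then cross-check against a simple RK4 integration.
x0, v0 = 1.0, 0.0                      # x(0), x'(0)
# The system:  c1 + c2 = x0   and   1*c1 + 2*c2 = v0
c2 = v0 - x0
c1 = x0 - c2

def closed_form(t):
    return c1 * math.exp(t) + c2 * math.exp(2 * t)

def rk4(t_end, h=1e-3):
    """Integrate (x' = v, v' = 3v - 2x) from (x0, v0) to t_end."""
    def f(x, v):
        return v, 3 * v - 2 * x
    x, v = x0, v0
    for _ in range(int(round(t_end / h))):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + h/2*k1x, v + h/2*k1v)
        k3x, k3v = f(x + h/2*k2x, v + h/2*k2v)
        k4x, k4v = f(x + h*k3x, v + h*k3v)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return x
```

With these seeds the solution is *x*(*t*) = 2e^{t} − e^{2t}, and the integrated trajectory matches it closely.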

Nonlinear systems can exhibit a substantially richer variety of behavior than linear systems can. In particular, the initial conditions can affect whether the system diverges to infinity or whether it converges to one or another attractor of the system. Each attractor, a (possibly disconnected) region of values that some dynamic paths approach but never leave, has a (possibly disconnected) basin of attraction such that state variables with initial conditions in that basin (and nowhere else) will evolve toward that attractor. Even nearby initial conditions could be in basins of attraction of different attractors (see for example Newton's method#Basins of attraction).
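The cited Newton's-method example can be demonstrated in a few lines; the function *f*(*x*) = *x*² − 1 and the seeds are illustrative:

```python
# Newton's method for f(x) = x**2 - 1 has two attracting fixed points,
# x = 1 and x = -1.  Which one an orbit reaches depends only on the
# initial condition: positive seeds lie in the basin of 1, negative
# seeds in the basin of -1.

def newton_limit(x, iters=50):
    for _ in range(iters):
        x = x - (x * x - 1) / (2 * x)   # Newton step for x**2 - 1
    return x

roots = [newton_limit(s) for s in (0.3, 2.0, -0.3, -7.0)]
```

Seeds on opposite sides of zero, however close together, end up at different attractors, which is exactly the basin-of-attraction picture described above.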

Moreover, in those nonlinear systems showing chaotic behavior, the evolution of the variables exhibits sensitive dependence on initial conditions: the iterated values of any two very nearby points on the same strange attractor, while each remaining on the attractor, will diverge from each other over time. Thus even on a single attractor the precise values of the initial conditions make a substantial difference for the future positions of the iterates. This feature makes accurate simulation of future values difficult, and impossible over long horizons, because stating the initial conditions with exact precision is seldom possible and because rounding error is inevitable after even only a few iterations from an exact initial condition.
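Sensitive dependence is easy to exhibit with the chaotic logistic map (used here as an illustrative example; the seed and perturbation size are arbitrary):

```python
# Sensitive dependence on initial conditions for the logistic map
# x -> 4*x*(1 - x): two seeds differing by only 1e-12 separate to
# macroscopic distance within a few dozen iterations.

def orbit(x, steps):
    xs = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        xs.append(x)
    return xs

a = orbit(0.2, 60)
b = orbit(0.2 + 1e-12, 60)
gap = max(abs(u - v) for u, v in zip(a, b))   # largest separation seen
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is why long-horizon prediction fails even for tiny initial-condition errors.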

- Boundary condition
- Initialization vector, in cryptography

In mathematics, a **linear map** is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism.

In mathematics, a **recurrence relation** is an equation that recursively defines a sequence or multidimensional array of values, once one or more initial terms are given; each further term of the sequence or array is defined as a function of the preceding terms.

In mathematics and physics, the **heat equation** is a certain partial differential equation. Solutions of the heat equation are sometimes known as **caloric functions**. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region.

In mathematics, the **Lyapunov exponent** or **Lyapunov characteristic exponent** of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector *δZ*_{0} diverge (provided the divergence can be treated within the linearized approximation) at a rate given by |*δZ*(*t*)| ≈ e^{λt}|*δZ*_{0}|, where *λ* is the Lyapunov exponent.
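For a one-dimensional map the exponent can be estimated as the orbit average of log|*f*′(*x*)|; the sketch below does this for the logistic map, whose exponent is known to be ln 2 (seed and orbit length are illustrative):

```python
import math

# Estimate the Lyapunov exponent of the logistic map x -> 4*x*(1 - x)
# as the average of log|f'(x_t)| along an orbit; the exact value for
# this map is ln 2.

def lyapunov(x0, steps=100_000, burn_in=100):
    x = x0
    for _ in range(burn_in):                # discard the transient
        x = 4 * x * (1 - x)
    total = 0.0
    for _ in range(steps):
        total += math.log(abs(4 - 8 * x))   # |f'(x)| = |4 - 8x|
        x = 4 * x * (1 - x)
    return total / steps

lam = lyapunov(0.2)
```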

In mathematics and its applications, classical **Sturm–Liouville theory** is the theory of real second-order linear ordinary differential equations of the form

d/d*x*[*p*(*x*) d*y*/d*x*] + *q*(*x*)*y* = −*λw*(*x*)*y*

for given functions *p*(*x*), *q*(*x*), and *w*(*x*).

The **Gauss–Newton algorithm** is used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.

The **Rössler attractor** is the attractor for the **Rössler system**, a system of three non-linear ordinary differential equations originally studied by Otto Rössler in the 1970s. These differential equations define a continuous-time dynamical system that exhibits chaotic dynamics associated with the fractal properties of the attractor.

In mathematics, a **stiff equation** is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise definition of stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution.

In applied mathematics, in particular the context of nonlinear system analysis, a **phase plane** is a visual display of certain characteristics of certain kinds of differential equations; a coordinate plane whose axes are the values of the two state variables, say (*x*, *y*) or (*q*, *p*), etc. It is a two-dimensional case of the general *n*-dimensional phase space.

In linear algebra, an **eigenvector** or **characteristic vector** of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding **eigenvalue**, often denoted by *λ*, is the factor by which the eigenvector is scaled.
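A dominant eigenpair can be found by repeatedly applying the matrix and renormalizing (power iteration); the 2×2 matrix below is an illustrative example, not from the text:

```python
# Power iteration: repeatedly applying a matrix to a vector and
# normalizing converges to the eigenvector of the dominant eigenvalue,
# illustrating "changes at most by a scalar factor".
A = [[2.0, 1.0],
     [1.0, 2.0]]      # eigenvalues 3 and 1; dominant eigenvector (1, 1)

def power_iteration(A, iters=100):
    x = [1.0, 0.0]
    for _ in range(iters):
        y = [A[0][0]*x[0] + A[0][1]*x[1],
             A[1][0]*x[0] + A[1][1]*x[1]]
        norm = max(abs(c) for c in y)
        x = [c / norm for c in y]           # keep the vector bounded
    # Eigenvalue estimate: ratio of a component of A x to that of x
    y = [A[0][0]*x[0] + A[0][1]*x[1],
         A[1][0]*x[0] + A[1][1]*x[1]]
    return y[0] / x[0], x

lam, vec = power_iteration(A)
```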

In numerical linear algebra, the method of **successive over-relaxation** (**SOR**) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process.
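A minimal SOR sketch on a small diagonally dominant system, written directly from the component-wise update rule (the matrix, right-hand side, and relaxation factor are illustrative):

```python
# Successive over-relaxation for A x = b with the update
#   x_i <- (1 - w)*x_i + (w / a_ii) * (b_i - sum_{j != i} a_ij * x_j),
# sweeping the components in order (Gauss-Seidel style).
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]      # exact solution is x = (1, 1, 1)

def sor(A, b, w=1.1, iters=100):
    n = len(b)
    x = [0.0] * n                     # initial guess
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - w) * x[i] + w * (b[i] - s) / A[i][i]
    return x

x = sor(A, b)
```

Taking *w* = 1 recovers plain Gauss–Seidel; values of *w* between 1 and 2 over-relax and can converge faster on suitable systems.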

**Linear dynamical systems** are dynamical systems whose evaluation functions are linear. While dynamical systems, in general, do not have closed-form solutions, linear dynamical systems can be solved exactly, and they have a rich set of mathematical properties. Linear systems can also be used to understand the qualitative behavior of general dynamical systems, by calculating the equilibrium points of the system and approximating it as a linear system around each such point.

In mathematics, **stability theory** addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using L^{p} norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.

**Covariance matrix adaptation evolution strategy (CMA-ES)** is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation and selection: in each generation (iteration), new individuals are generated by variation, usually in a stochastic way, of the current parental individuals. Then, some individuals are selected to become the parents in the next generation based on their fitness or objective function value. In this way, over the generation sequence, individuals with better and better objective values are generated.

**Numerical continuation** is a method of computing approximate solutions of a system of parameterized nonlinear equations, *F*(*u*, *λ*) = 0.

In linear algebra, **eigendecomposition** or sometimes **spectral decomposition** is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way.

A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and of its derivatives of various orders. A **matrix differential equation** contains more than one function stacked into vector form with a matrix relating the functions to their derivatives.

A **matrix difference equation** is a difference equation in which the value of a vector of variables at one point in time is related to its own value at one or more previous points in time, using matrices. The **order** of the equation is the maximum time gap between any two indicated values of the variable vector. For example, *x*_{t} = *Ax*_{t−1} + *Bx*_{t−2} is a second-order matrix difference equation.

In mathematics, **inertial manifolds** are concerned with the long term behavior of the solutions of dissipative dynamical systems. Inertial manifolds are finite-dimensional, smooth, invariant manifolds that contain the global attractor and attract all solutions exponentially quickly. Since an inertial manifold is finite-dimensional even if the original system is infinite-dimensional, and because most of the dynamics for the system takes place on the inertial manifold, studying the dynamics on an inertial manifold produces a considerable simplification in the study of the dynamics of the original system.

In mathematics and in particular dynamical systems, a **linear difference equation** or **linear recurrence relation** sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence. The polynomial's linearity means that each of its terms has degree 0 or 1. Usually the context is the evolution of some variable over time, with the current time period or discrete moment in time denoted as t, one period earlier denoted as *t* − 1, one period later as *t* + 1, etc.

- ↑ Baumol, William J. (1970). *Economic Dynamics: An Introduction* (3rd ed.). London: Collier-Macmillan. ISBN 0-02-306660-1.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.