In mathematics, Pfaffian functions are a class of functions whose derivatives can be written in terms of the original functions. They were originally introduced by Askold Khovanskii in the 1970s, but are named after the German mathematician Johann Pfaff.
Some functions, when differentiated, give a result which can be written in terms of the original function. Perhaps the simplest example is the exponential function, f(x) = e^x. If we differentiate this function we get e^x again, that is

f′(x) = e^x = f(x).
Another example of a function like this is the reciprocal function, g(x) = 1/x. If we differentiate this function we will see that

g′(x) = −1/x^2 = −g(x)^2.
Other functions may not have the above property, but their derivative may be written in terms of functions like those above. For example, if we take the function h(x) = e^x log x then we see

h′(x) = e^x log x + e^x/x = h(x) + f(x)g(x).
Functions like these form the links in a so-called Pfaffian chain. Such a chain is a sequence of functions, say f_1, f_2, f_3, etc., with the property that if we differentiate any of the functions in the chain then the result can be written in terms of the function itself and all the functions preceding it in the chain (specifically as a polynomial in those functions and the variables involved). So with the functions above, f, g, h is a Pfaffian chain.
A Pfaffian function is then just a polynomial in the functions appearing in a Pfaffian chain and the function argument. So with the Pfaffian chain just mentioned, functions such as F(x) = x^3 f(x)^2 − 2g(x)h(x) are Pfaffian.
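The chain conditions above are easy to check symbolically. The following sketch (illustrative only, using the Python library sympy; the variable names are chosen here for convenience) verifies that the derivatives of f, g, and h from this example can indeed be written in terms of the preceding chain functions:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# The example chain: f(x) = e^x, g(x) = 1/x, h(x) = e^x * log(x)
f = sp.exp(x)
g = 1 / x
h = sp.exp(x) * sp.log(x)

# Chain conditions: each derivative is a polynomial in x and the
# functions up to that point in the chain.
assert sp.simplify(sp.diff(f, x) - f) == 0           # f' = f
assert sp.simplify(sp.diff(g, x) - (-g**2)) == 0     # g' = -g^2
assert sp.simplify(sp.diff(h, x) - (h + f*g)) == 0   # h' = h + f*g

# A Pfaffian function built from the chain: F(x) = x^3 f^2 - 2gh
F = x**3 * f**2 - 2*g*h
print(sp.simplify(sp.diff(F, x)))
```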
Let U be an open domain in R^n. A Pfaffian chain of order r ≥ 0 and degree α ≥ 1 in U is a sequence of real analytic functions f_1, ..., f_r on U satisfying the differential equations

∂f_i/∂x_j (x) = P_{i,j}(x, f_1(x), ..., f_i(x))

for i = 1, ..., r and j = 1, ..., n, where P_{i,j} ∈ R[x_1, ..., x_n, y_1, ..., y_i] are polynomials of degree ≤ α. A function f on U is called a Pfaffian function of order r and degree (α, β) if

f(x) = P(x, f_1(x), ..., f_r(x)),

where P ∈ R[x_1, ..., x_n, y_1, ..., y_r] is a polynomial of degree at most β ≥ 1. The numbers r, α, and β are collectively known as the format of the Pfaffian function, and give a useful measure of its complexity.
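As a concrete illustration (a worked example under the conventions just given, with n = 1): the chain f, g, h from the introduction has order r = 3 and degree α = 2, since f′ = f, g′ = −g^2, and h′ = h + fg are given by polynomials of degree at most 2 in the chain functions; and the Pfaffian function F(x) = x^3 f(x)^2 − 2g(x)h(x) corresponds to the polynomial P = x^3 y_1^2 − 2y_2 y_3 of degree β = 5, so F has format (3, 2, 5).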
Consider the structure R = (R, +, −, ·, <, 0, 1), the ordered field of real numbers. In the 1960s Andrei Gabrielov proved that the structure obtained from R by adding a function symbol for every analytic function restricted to the unit box [0, 1]^m is model complete.[2] That is, any set definable in this structure R_an is just the projection of some higher-dimensional set defined by identities and inequalities involving these restricted analytic functions.
In the 1990s, Alex Wilkie showed that the same result holds if, instead of adding every restricted analytic function, one adds just the unrestricted exponential function to R, obtaining the ordered real field with exponentiation, R_exp; this result is known as Wilkie's theorem.[3] Wilkie then tackled the question of which finite sets of analytic functions could be added to R to obtain a model-completeness result. It turned out that adding any Pfaffian chain restricted to the box [0, 1]^m gives the same result. In particular, one may add all Pfaffian functions to R to get the structure R_Pfaff, a variant of Gabrielov's result. The result on exponentiation is not a special case of this one (even though the exponential function is a Pfaffian chain by itself), because it concerns the unrestricted exponential function rather than exp restricted to a box.
This result of Wilkie's established that the structure R_Pfaff is an o-minimal structure.
The equations above that define a Pfaffian chain are said to satisfy a triangularity condition, since the derivative of each successive function in the chain is a polynomial in one extra variable. Thus if they are written out in turn a triangular shape appears:

f_1′ = P_1(x, f_1)
f_2′ = P_2(x, f_1, f_2)
f_3′ = P_3(x, f_1, f_2, f_3)
and so on. If this triangularity condition is relaxed so that the derivative of each function in the chain is a polynomial in all the other functions in the chain, then the chain of functions is known as a Noetherian chain, and a function constructed as a polynomial in this chain is called a Noetherian function.[4] So, for example, a Noetherian chain of order three is composed of three functions f_1, f_2, f_3 satisfying the equations

f_1′ = P_1(x, f_1, f_2, f_3)
f_2′ = P_2(x, f_1, f_2, f_3)
f_3′ = P_3(x, f_1, f_2, f_3)
The name stems from the fact that the ring generated by the functions in such a chain is Noetherian.[5]
Any Pfaffian chain is also a Noetherian chain (the extra variables in each polynomial are simply redundant in this case), but not every Noetherian chain is Pfaffian; for example, if we take f_1(x) = sin x and f_2(x) = cos x then we have the equations

f_1′(x) = cos x = f_2(x)
f_2′(x) = −sin x = −f_1(x)
and these hold for all real numbers x, so f_1, f_2 is a Noetherian chain on all of R. But there is no polynomial P(x, y) such that the derivative of sin x can be written as P(x, sin x), and so this chain is not Pfaffian.
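These two identities can likewise be checked symbolically; the snippet below is a small illustrative sketch using sympy, not part of the theory itself:

```python
import sympy as sp

x = sp.symbols('x')
f1, f2 = sp.sin(x), sp.cos(x)

# Noetherian chain conditions: each derivative is a polynomial in x and
# *all* chain functions; here P_1(x, y_1, y_2) = y_2 and P_2 = -y_1.
assert sp.diff(f1, x) == f2    # f1' = cos x = f2
assert sp.diff(f2, x) == -f1   # f2' = -sin x = -f1
print("f1, f2 satisfy the Noetherian chain relations, as claimed.")
```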