Functional equation

In mathematics, a functional equation [1] [2] is, in the broadest meaning, an equation in which one or several functions appear as unknowns. So, differential equations and integral equations are functional equations. However, a more restricted meaning is often used, where a functional equation is an equation that relates several values of the same function. For example, the logarithm functions are essentially characterized by the logarithmic functional equation $\log(xy) = \log(x) + \log(y)$.

If the domain of the unknown function is supposed to be the natural numbers, the function is generally viewed as a sequence, and, in this case, a functional equation (in the narrower meaning) is called a recurrence relation. Thus the term functional equation is used mainly for real functions and complex functions. Moreover, a smoothness condition is often assumed for the solutions, since without such a condition, most functional equations have very irregular solutions. For example, the gamma function is a function that satisfies the functional equation $f(x+1) = x\,f(x)$ and the initial value $f(1) = 1$. There are many functions that satisfy these conditions, but the gamma function is the unique one that is meromorphic in the whole complex plane, and logarithmically convex for $x$ real and positive (Bohr–Mollerup theorem).
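
Iterating this functional equation from the initial value shows how the two conditions pin down the solution on the positive integers:

\[
  f(n+1) = n\,f(n) = n(n-1)\,f(n-1) = \cdots = n!\,f(1) = n! \qquad (n \in \mathbb{N}),
\]

so every solution interpolates the factorial, and the extra conditions of the Bohr–Mollerup theorem single out the gamma function among them.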

Examples

A classical example is Cauchy's additive functional equation, $f(x+y) = f(x) + f(y)$. One feature that such examples share is that, in each case, two or more known functions (sometimes multiplication by a constant, sometimes addition of two variables, sometimes the identity function) appear inside the argument of the unknown functions to be solved for.

When all solutions are sought, conditions from mathematical analysis may need to be imposed; for example, in the case of the Cauchy equation mentioned above, the continuous solutions are the 'reasonable' ones, while other solutions that are unlikely to have practical application can be constructed (by using a Hamel basis for the real numbers as a vector space over the rational numbers). The Bohr–Mollerup theorem is another well-known example.
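
To see why continuity singles out the 'reasonable' solutions, note that additivity alone already forces linearity on the rationals:

\[
  f(nx) = n\,f(x) \quad (n \in \mathbb{N}, \text{ by induction}),
  \qquad
  f\!\left(\tfrac{p}{q}\right) = \tfrac{p}{q}\,f(1) \quad (p, q \in \mathbb{Z},\ q \neq 0),
\]

so a continuous additive $f$ satisfies $f(x) = x\,f(1)$ for all real $x$, by density of the rationals; the irregular solutions arise only once continuity is dropped.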

Involutions

The involutions are characterized by the functional equation $f(f(x)) = x$. These appear in Babbage's functional equation (1820), [3]

\[
  f(f(y)) = y.
\]

Other involutions, and solutions of the equation, include

\[
  f(x) = a - x, \qquad
  f(x) = \frac{a}{x}, \qquad \text{and} \qquad
  f(x) = \frac{b - x}{1 + cx},
\]

the last of which includes the previous two as special cases or limits.
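
One can check directly that the last family is an involution (assuming $1 + bc \neq 0$, so the composition is defined):

\[
  f(f(x))
  = \frac{b - \dfrac{b - x}{1 + cx}}{1 + c\,\dfrac{b - x}{1 + cx}}
  = \frac{b(1 + cx) - (b - x)}{(1 + cx) + c(b - x)}
  = \frac{(1 + bc)\,x}{1 + bc}
  = x.
\]

Setting $c = 0$ recovers $f(x) = b - x$, and taking $b = ac$ and letting $c \to \infty$ recovers $f(x) = a/x$.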

Solution

One method of solving elementary functional equations is substitution, as the following worked example illustrates.
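
Consider the equation $f(x) + 2f(1/x) = 3x$ for $x \neq 0$. Substituting $x \mapsto 1/x$ produces a second relation, and the two can be solved as a linear system in $f(x)$ and $f(1/x)$:

\[
  f(x) + 2f\!\left(\tfrac{1}{x}\right) = 3x,
  \qquad
  f\!\left(\tfrac{1}{x}\right) + 2f(x) = \tfrac{3}{x}.
\]

Doubling the second equation and subtracting the first gives $3f(x) = \tfrac{6}{x} - 3x$, hence $f(x) = \tfrac{2}{x} - x$, which satisfies the original equation.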

Some solutions to functional equations have exploited surjectivity, injectivity, oddness, and evenness; Babbage's equation above already forces structural properties of this kind.
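
Indeed, any solution of $f(f(x)) = x$ is injective, since $f(a) = f(b)$ implies $a = f(f(a)) = f(f(b)) = b$, and surjective, since every $x$ is the image $f(f(x))$; every solution is therefore a bijection, which narrows the search considerably.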

Some functional equations have been solved with the use of ansatzes or mathematical induction.
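
For instance, for d'Alembert's equation $f(x+y) + f(x-y) = 2f(x)f(y)$, the ansatz $f(x) = \cos(ax)$ can be verified directly from the product-to-sum identity

\[
  \cos(a(x+y)) + \cos(a(x-y)) = 2\cos(ax)\cos(ay),
\]

so $f(x) = \cos(ax)$ is a solution for every constant $a$.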

Some classes of functional equations, such as linear functional equations in two variables, can be solved by computer-assisted techniques. [4]
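
As a small illustration in this spirit (a minimal sketch using the SymPy library, not the specific method of [4]), one can fix a polynomial ansatz and let the computer solve for its coefficients, here for the quadratic functional equation $f(x+y) + f(x-y) = 2f(x) + 2f(y)$:

# Computer-assisted solving via a polynomial ansatz (illustrative sketch).
import sympy as sp

x, y, a, b, c = sp.symbols("x y a b c")
f = lambda t: a * t**2 + b * t + c  # ansatz: a generic quadratic

# Residual of the functional equation; it must vanish identically in x, y.
residual = sp.expand(f(x + y) + f(x - y) - 2 * f(x) - 2 * f(y))

# Force every coefficient of the residual (as a polynomial in x, y) to zero.
conditions = sp.Poly(residual, x, y).coeffs()
print(sp.solve(conditions, [a, b, c], dict=True))  # -> [{b: 0, c: 0}]

The output leaves $a$ free and forces $b = c = 0$, so among quadratic polynomials the solutions are exactly $f(x) = ax^2$.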

In dynamic programming, a variety of successive approximation methods [5] [6] are used to solve Bellman's functional equation, including methods based on fixed-point iteration.
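
A minimal sketch of this idea uses value iteration on a small made-up Markov decision process; the transition matrices and rewards below are illustrative assumptions, not taken from [5] or [6]:

# Fixed-point (value) iteration on Bellman's functional equation
#   V(s) = max_a [ R(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ]
# for a hypothetical two-state, two-action MDP.
import numpy as np

gamma = 0.9  # discount factor; the Bellman operator is a contraction for gamma < 1
P = {0: np.array([[0.8, 0.2], [0.3, 0.7]]),   # P[a][s, s'] = transition probabilities
     1: np.array([[0.1, 0.9], [0.6, 0.4]])}
R = {0: np.array([1.0, 0.0]),                 # R[a][s] = expected reward
     1: np.array([0.0, 2.0])}

V = np.zeros(2)
for _ in range(1000):
    V_new = np.max([R[a] + gamma * P[a] @ V for a in P], axis=0)  # apply the Bellman operator
    if np.max(np.abs(V_new - V)) < 1e-10:  # contraction guarantees convergence
        break
    V = V_new
print(V)  # approximate fixed point, i.e. the optimal value function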

Notes

  1. Rassias, Themistocles M. (2000). Functional Equations and Inequalities. Dordrecht: Kluwer Academic Publishers. p. 335. ISBN 0-7923-6484-8.
  2. Czerwik, Stephan (2002). Functional Equations and Inequalities in Several Variables. Singapore: World Scientific Publishing Co. p. 410. ISBN 981-02-4837-7.
  3. Ritt, J. F. (1916). "On Certain Real Solutions of Babbage's Functional Equation". The Annals of Mathematics. 17 (3): 113–122. doi:10.2307/2007270. JSTOR 2007270.
  4. Házy, Attila (2004). "Solving linear two variable functional equations with computer". Aequationes Mathematicae. 67 (1): 47–62. doi:10.1007/s00010-003-2703-9. S2CID 118563768.
  5. Bellman, R. (1957). Dynamic Programming. Princeton University Press.
  6. Sniedovich, M. (2010). Dynamic Programming: Foundations and Principles. Taylor & Francis.
