Operational calculus, also known as operational analysis, is a technique by which problems in analysis, in particular differential equations, are transformed into algebraic problems, usually the problem of solving a polynomial equation.
The idea of representing the processes of calculus, differentiation and integration, as operators has a long history that goes back to Gottfried Wilhelm Leibniz. The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied. [1]
This approach was further developed by François-Joseph Servois, who introduced convenient notations. [2] Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave, George Boole, Bronwin, Carmichael, Donkin, Graves, Murphy, William Spottiswoode and Sylvester.
Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855 [3] and by Boole in 1859. [4]
This technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy.
At the time, Heaviside's methods were not rigorous, and his work was not developed further by mathematicians. Operational calculus first found applications in electrical engineering problems, for the calculation of transients in linear circuits after 1910, driven by the work of Ernst Julius Berg, John Renshaw Carson and Vannevar Bush.
A rigorous mathematical justification of Heaviside's operational methods came only after the work of Bromwich, which related operational calculus to Laplace transform methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition). Other ways of justifying Heaviside's operational methods were introduced in the mid-1920s using integral equation techniques (as done by Carson) or the Fourier transform (as done by Norbert Wiener).
A different approach to operational calculus was developed in the 1930s by Polish mathematician Jan Mikusiński, using algebraic reasoning.
Norbert Wiener laid the foundations for operator theory in his 1926 review of the existential status of the operational calculus. [6]
The key element of the operational calculus is to consider differentiation as an operator p = d/dt acting on functions. Linear differential equations can then be recast as a "function" F(p) of the operator p, acting on the unknown function, equaling the known function. Here, F is a function that takes an operator p and returns another operator F(p).
Solutions are then obtained by making the inverse operator of F act on the known function. The operational calculus generally is typified by two symbols: the operator p and the unit function 1. In its use, the operator is probably more mathematical than physical, and the unit function more physical than mathematical. The operator p in the Heaviside calculus initially represents the time differentiator d/dt. Further, this operator is desired to bear the reciprocal relation such that $p^{-1}$ denotes the operation of integration. [5]
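As an informal illustration (not part of Heaviside's presentation), the operators $p$ and $p^{-1}$ can be modelled as callables acting on symbolic functions of $t$, with $p^{-1}$ taken as integration from 0 to $t$; the names `p` and `p_inv` below are hypothetical, chosen only for this sketch.

```python
# A minimal sketch of p = d/dt and p^-1 = integration from 0 to t, using SymPy.
import sympy as sp

t, u = sp.symbols('t u', positive=True)

def p(f):
    """Apply the differentiation operator p = d/dt."""
    return sp.diff(f, t)

def p_inv(f):
    """Apply the integration operator p^-1 = integral from 0 to t."""
    return sp.integrate(f.subs(t, u), (u, 0, t))

f = sp.exp(2*t)
print(p_inv(f))                        # exp(2*t)/2 - 1/2
print(sp.simplify(p(p_inv(f)) - f))    # 0: p and p^-1 are reciprocal on f
```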
In electrical circuit theory, one is trying to determine the response of an electrical circuit to an impulse. Due to linearity, it is enough to consider a unit step function H(t), such that H(t) = 0 if t < 0, and H(t) = 1 if t > 0.
The simplest example of application of the operational calculus is to solve $p\,y = H(t)$, which gives
$$ y = p^{-1} H(t) = \int_0^t H(u)\,\mathrm{d}u = t\,H(t). $$
From this example, one sees that $p^{-1}$ represents integration. Furthermore, $n$ iterated integrations are represented by $p^{-n}$, so that
$$ p^{-n} H(t) = \frac{t^n}{n!}\,H(t). $$
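A quick sanity check of this formula, assuming $t > 0$ so that $H(t) = 1$ and $p^{-1}$ reduces to integration from 0 to $t$ (a sketch, not a general implementation):

```python
# Verify p^{-n} H(t) = t^n / n! for t > 0 (where H(t) = 1),
# by integrating the constant 1 from 0 to t, n times.
import sympy as sp

t, u = sp.symbols('t u', positive=True)

y = sp.Integer(1)          # H(t) restricted to t > 0
for n in range(1, 5):
    y = sp.integrate(y.subs(t, u), (u, 0, t))   # one application of p^-1
    assert sp.simplify(y - t**n / sp.factorial(n)) == 0
    print(n, y)            # t, t**2/2, t**3/6, t**4/24
```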
Continuing to treat $p$ as if it were a variable,
$$ \frac{p}{p-a}\,H(t) = \frac{1}{1 - a p^{-1}}\,H(t), $$
which can be rewritten by using a geometric series expansion:
$$ \frac{p}{p-a}\,H(t) = \sum_{n=0}^{\infty} a^n p^{-n} H(t) = \sum_{n=0}^{\infty} \frac{a^n t^n}{n!}\,H(t) = e^{at}\,H(t). $$
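As a numerical illustration (a sketch with arbitrarily chosen values $a = 0.7$ and $t = 2.0$), the partial sums of $\sum_n a^n t^n / n!$ indeed approach $e^{at}$:

```python
# Partial sums of sum_n a^n t^n / n! converge to exp(a*t).
import math

a, t = 0.7, 2.0            # arbitrary test values
exact = math.exp(a * t)
partial = 0.0
for n in range(15):
    partial += (a * t) ** n / math.factorial(n)
print(partial, exact)      # the two values agree to high precision
```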
Using partial fraction decomposition, one can define any rational function of the operator $p$ and compute its action on $H(t)$.
Moreover, if the function $1/F(p)$ has a series expansion of the form
$$ \frac{1}{F(p)} = \sum_{n=0}^{\infty} a_n p^{-n}, $$
it is straightforward to find
$$ \frac{1}{F(p)}\,H(t) = \sum_{n=0}^{\infty} a_n \frac{t^n}{n!}\,H(t). $$
With this rule, solving any linear differential equation is reduced to a purely algebraic problem.
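For instance, to solve $y'' + 3y' + 2y = H(t)$ with zero initial conditions, write $F(p) = p^2 + 3p + 2 = (p+1)(p+2)$, expand $1/F(p)$ in partial fractions, and use $\frac{1}{p+a} H(t) = \frac{1 - e^{-at}}{a} H(t)$, which follows from the geometric series above. The sketch below, assuming SymPy, compares this operational answer with a direct `dsolve` solution; the example equation is chosen here purely for illustration and is not from the original text.

```python
# Operational solution of y'' + 3y' + 2y = H(t), y(0) = y'(0) = 0, for t > 0.
# 1/F(p) = 1/((p+1)(p+2)) = 1/(p+1) - 1/(p+2), and 1/(p+a) H = (1 - exp(-a*t))/a.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

op_solution = (1 - sp.exp(-t)) - (1 - sp.exp(-2*t)) / 2

ode = sp.Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), 1)   # H(t) = 1 for t > 0
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})

print(sp.simplify(sol.rhs - op_solution))   # 0: both methods agree
```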
Heaviside went further and defined fractional powers of p, thus establishing a connection between operational calculus and fractional calculus.
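For example, Heaviside's celebrated result $p^{1/2} H(t) = 1/\sqrt{\pi t}$ can be checked through the Laplace-transform correspondence $p \leftrightarrow s$: the transform of $1/\sqrt{\pi t}$ is $s^{-1/2}$, i.e. $\sqrt{s}$ times the transform $1/s$ of $H(t)$. A sketch with SymPy (assuming its `laplace_transform` evaluates this standard gamma-function integral):

```python
# Check that L{1/sqrt(pi*t)} = 1/sqrt(s), i.e. sqrt(s) * L{H(t)},
# the transform-side statement of p^(1/2) H(t) = 1/sqrt(pi*t).
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)

F = sp.laplace_transform(1 / sp.sqrt(sp.pi * t), t, s, noconds=True)
print(sp.simplify(F - 1 / sp.sqrt(s)))    # 0
print(sp.simplify(F / (1 / s)))           # sqrt(s): the extra factor is p^(1/2)
```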
Using the Taylor expansion, one can also verify the Lagrange–Boole translation formula $e^{ap} f(t) = f(t + a)$, so the operational calculus is also applicable to finite-difference equations and to electrical engineering problems with delayed signals.
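A small numeric check of this formula (a sketch; the choices $f(t) = \sin t$, $a = 1/2$, and the truncation order are arbitrary): truncating $e^{ap} = \sum_k (ap)^k / k!$ and applying it to $f$ approximates $f(t + a)$.

```python
# Truncated exp(a*p) f(t) = sum_k a^k f^(k)(t) / k!  approximates  f(t + a).
import sympy as sp

t = sp.symbols('t')
a = sp.Rational(1, 2)                 # shift amount (arbitrary)
f = sp.sin(t)

shifted = sum(a**k * sp.diff(f, t, k) / sp.factorial(k) for k in range(20))

t0 = 1.3                              # arbitrary evaluation point
print(float(shifted.subs(t, t0)))     # ~ sin(1.8)
print(float(sp.sin(t0 + a)))          # sin(1.8) = 0.9738...
```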
In mathematics, the Laplace transform, named after Pierre-Simon Laplace, is an integral transform that converts a function of a real variable $t$ (often time) to a function of a complex variable $s$ (complex frequency).
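Under this correspondence the operational symbol $p$ plays the role of the Laplace variable $s$: the transform of $H(t)$ is $1/s$ and that of $e^{at} H(t)$ is $1/(s-a)$, mirroring the operational identity $\frac{p}{p-a} H(t) = e^{at} H(t)$, with the Heaviside expression $F(p) H(t)$ corresponding to $F(s)\cdot\frac{1}{s}$. A sketch using SymPy (an illustration, not part of the original text):

```python
# Laplace transforms of the step function and of exp(a*t)*H(t),
# illustrating the correspondence F(p) H(t)  <->  F(s) * (1/s).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', real=True)

print(sp.laplace_transform(sp.Heaviside(t), t, s, noconds=True))   # 1/s
print(sp.laplace_transform(sp.exp(a*t), t, s, noconds=True))       # 1/(s - a)
```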
In mathematics, an operator is generally a mapping or function that acts on elements of a space to produce elements of another space. There is no general definition of an operator, but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly, and may be extended so as to act on related objects.
Oliver Heaviside was an English mathematician and physicist who invented a new technique for solving differential equations, independently developed vector calculus, and rewrote Maxwell's equations in the form commonly used today. He significantly shaped the way Maxwell's equations are understood and applied in the decades following Maxwell's death. His formulation of the telegrapher's equations became commercially important during his own lifetime, after their significance went unremarked for a long while, as few others were versed at the time in his novel methodology. Although at odds with the scientific establishment for most of his life, Heaviside changed the face of telecommunications, mathematics, and science.
In engineering, a transfer function of a system, sub-system, or component is a mathematical function that models the system's output for each possible input. It is widely used in electronic engineering tools like circuit simulators and control systems. In simple cases, this function can be represented as a two-dimensional graph of an independent scalar input versus the dependent scalar output. Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory.
A finite difference is a mathematical expression of the form f (x + b) − f (x + a). If a finite difference is divided by b − a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.
The Heaviside step function, or the unit step function, usually denoted by H or θ, is a step function named after Oliver Heaviside, the value of which is zero for negative arguments and one for positive arguments. Different conventions concerning the value H(0) are in use. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one.
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols $\nabla\cdot\nabla$, $\nabla^{2}$ (where $\nabla$ is the nabla operator), or $\Delta$. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian $\Delta f(p)$ of a function $f$ at a point $p$ measures by how much the average value of $f$ over small spheres or balls centered at $p$ deviates from $f(p)$.
In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence.
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator $D$ and of the integration operator $J$, and of developing a calculus for such operators that generalizes the classical one.
In fractional calculus, an area of mathematical analysis, the differintegral is a combined differentiation/integration operator. Applied to a function $f$, the q-differintegral of $f$, here denoted by $\mathbb{D}^q f$, is the fractional derivative of $f$ (if $q > 0$) or the fractional integral of $f$ (if $q < 0$).
In mathematics, and in particular functional analysis, the shift operator, also known as the translation operator, is an operator that takes a function x ↦ f(x) to its translation x ↦ f(x + a). In time series analysis, the shift operator is called the lag operator.
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior.
In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the mathematical field of calculus of variations from deterministic functions to stochastic processes. In particular, it allows the computation of derivatives of random variables. Malliavin calculus is also called the stochastic calculus of variations. Paul Malliavin first initiated the calculus on infinite-dimensional space; significant contributors such as S. Kusuoka, D. Stroock, J.-M. Bismut, Shinzo Watanabe and I. Shigekawa then completed the foundations.
The rectangular function is defined as
$$ \operatorname{rect}(t) = \begin{cases} 0, & |t| > \tfrac{1}{2}, \\ \tfrac{1}{2}, & |t| = \tfrac{1}{2}, \\ 1, & |t| < \tfrac{1}{2}. \end{cases} $$
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
In mathematics, the derivative is a fundamental construction of differential calculus and admits many possible generalizations within the fields of mathematical analysis, combinatorics, algebra, geometry, etc.
In mathematics, the inverse scattering transform is a method that solves the initial value problem for a nonlinear partial differential equation using mathematical methods related to wave scattering. The direct scattering transform describes how a function scatters waves or generates bound-states. The inverse scattering transform uses wave scattering data to construct the function responsible for wave scattering. The direct and inverse scattering transforms are analogous to the direct and inverse Fourier transforms which are used to solve linear partial differential equations.
The ramp function is a unary real function, whose graph is shaped like a ramp. It can be expressed by numerous definitions, for example "0 for negative inputs, output equals input for non-negative inputs". The term "ramp" can also be used for other functions obtained by scaling and shifting, and the function in this article is the unit ramp function.
In differential calculus, there is no single uniform notation for differentiation. Instead, various notations for the derivative of a function or variable have been proposed by various mathematicians. The usefulness of each notation varies with the context, and it is sometimes advantageous to use more than one notation in a given context. The most common notations for differentiation are listed below.
In mathematics, Katugampola fractional operators are integral operators that generalize the Riemann–Liouville and the Hadamard fractional operators into a unique form. The Katugampola fractional integral generalizes both the Riemann–Liouville fractional integral and the Hadamard fractional integral into a single form, and it is also closely related to the Erdélyi–Kober operator that generalizes the Riemann–Liouville fractional integral. The Katugampola fractional derivative has been defined using the Katugampola fractional integral and, as with any other fractional differential operator, it also extends the possibility of taking real number powers or complex number powers of the integral and differential operators.