# Linear response function

A linear response function describes the input-output relationship of a signal transducer, such as a radio turning electromagnetic waves into music or a neuron turning synaptic input into a response. Because of its many applications in information theory, physics, and engineering, there are alternative names for specific linear response functions, such as susceptibility, impulse response, or impedance; see also transfer function. The concept of a Green's function, or fundamental solution of an ordinary differential equation, is closely related.

## Mathematical definition

Denote the input of a system by ${\displaystyle h(t)}$ (e.g. a force), and the response of the system by ${\displaystyle x(t)}$ (e.g. a position). Generally, the value of ${\displaystyle x(t)}$ will depend not only on the present value of ${\displaystyle h(t)}$, but also on past values. To a first approximation, ${\displaystyle x(t)}$ is a weighted sum of the previous values of ${\displaystyle h(t')}$, with the weights given by the linear response function ${\displaystyle \chi (t-t')}$:

${\displaystyle x(t)=\int _{-\infty }^{t}dt'\,\chi (t-t')h(t')+\dots \,.}$

The explicit term on the right-hand side is the leading order term of a Volterra expansion for the full nonlinear response. If the system in question is highly non-linear, higher order terms in the expansion, denoted by the dots, become important and the signal transducer cannot adequately be described just by its linear response function.
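
As a numerical sketch (not from the text), the leading-order term can be evaluated as a discrete causal convolution. A hypothetical response function ${\displaystyle \chi (t)=e^{-t}}$ and a unit step input are assumed here purely for illustration:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
chi = np.exp(-t)          # hypothetical linear response function chi(t) = e^{-t}, t >= 0
h = np.ones_like(t)       # unit step input switched on at t = 0

# x(t) = \int chi(t - t') h(t') dt', discretised as a causal convolution
x = np.convolve(chi, h)[:len(t)] * dt

# exact step response for this chi: x(t) = 1 - e^{-t}
assert np.allclose(x, 1.0 - np.exp(-t), atol=2e-3)
```

Only the first `len(t)` samples of the full convolution are kept; they correspond to the causal sums over past inputs.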

The complex-valued Fourier transform ${\displaystyle {\tilde {\chi }}(\omega )}$ of the linear response function is very useful as it describes the output of the system if the input is a sine wave ${\displaystyle h(t)=h_{0}\cdot \sin(\omega t)}$ with frequency ${\displaystyle \omega }$. The output reads

${\displaystyle x(t)=|{\tilde {\chi }}(\omega )|\cdot h_{0}\cdot \sin(\omega t+\arg {\tilde {\chi }}(\omega ))\,,}$

with amplitude gain ${\displaystyle |{\tilde {\chi }}(\omega )|}$ and phase shift ${\displaystyle \arg {\tilde {\chi }}(\omega )}$.
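
To make the gain and phase statement concrete, here is a short numerical check, again assuming (for illustration only) the hypothetical response function ${\displaystyle \chi (t)=e^{-t}}$, whose transform in this convention is ${\displaystyle 1/(1+i\omega )}$:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 40.0, dt)
chi = np.exp(-t)                            # same hypothetical response function

omega = 2.0
f = chi * np.exp(-1j * omega * t)
# trapezoidal rule for chi~(omega) = \int_0^infty chi(t) e^{-i omega t} dt
chi_ft = (np.sum(f) - 0.5 * (f[0] + f[-1])) * dt

gain = np.abs(chi_ft)                       # amplitude gain |chi~(omega)|
phase = np.angle(chi_ft)                    # phase shift arg chi~(omega)

# closed form for chi(t) = e^{-t}: chi~(omega) = 1/(1 + i*omega)
assert np.isclose(gain, 1.0 / np.sqrt(1.0 + omega**2), atol=1e-4)
assert np.isclose(phase, -np.arctan(omega), atol=1e-4)
```

The negative phase shift reflects the fact that a causal system's output lags its sinusoidal input.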

## Example

Consider a damped harmonic oscillator with input given by an external driving force ${\displaystyle h(t)}$,

${\displaystyle {\ddot {x}}(t)+\gamma {\dot {x}}(t)+\omega _{0}^{2}x(t)=h(t).\,}$

The complex-valued Fourier transform of the linear response function is given by

${\displaystyle {\tilde {\chi }}(\omega )={\frac {{\tilde {x}}(\omega )}{{\tilde {h}}(\omega )}}={\frac {1}{\omega _{0}^{2}-\omega ^{2}+i\gamma \omega }}.\,}$

The amplitude gain is given by the magnitude of the complex number ${\displaystyle {\tilde {\chi }}(\omega )}$, and the phase shift by its argument: the arctangent of its imaginary part divided by its real part, taken in the appropriate quadrant.

From this representation, we see that for small ${\displaystyle \gamma }$ the Fourier transform ${\displaystyle {\tilde {\chi }}(\omega )}$ of the linear response function exhibits a pronounced maximum (a resonance) at the frequency ${\displaystyle \omega \approx \omega _{0}}$. The linear response function of a harmonic oscillator is mathematically identical to that of an RLC circuit. The width of the maximum, ${\displaystyle \Delta \omega }$, is typically much smaller than ${\displaystyle \omega _{0}}$, so that the quality factor ${\displaystyle S:=\omega _{0}/\Delta \omega }$ can be extremely large.
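
A minimal numerical sketch of the resonance, evaluating ${\displaystyle |{\tilde {\chi }}(\omega )|}$ from the formula above for a small ${\displaystyle \gamma }$ (the parameter values are illustrative assumptions): the peak sits near ${\displaystyle \omega _{0}}$, the width of ${\displaystyle |{\tilde {\chi }}|^{2}}$ is approximately ${\displaystyle \gamma }$, and hence ${\displaystyle S\approx \omega _{0}/\gamma }$.

```python
import numpy as np

omega0, gamma = 10.0, 0.1                       # natural frequency and (small) damping
omega = np.linspace(5.0, 15.0, 200001)          # grid spacing 5e-5

chi = 1.0 / (omega0**2 - omega**2 + 1j * gamma * omega)
gain = np.abs(chi)

# the gain peaks at the resonance omega ≈ omega0 (exactly at sqrt(omega0^2 - gamma^2/2))
omega_peak = omega[np.argmax(gain)]
assert abs(omega_peak - omega0) < 0.01

# the full width at half maximum of gain^2 is ≈ gamma, so S = omega0/Δomega ≈ 100 here
half = 0.5 * gain.max() ** 2
band = omega[gain ** 2 >= half]
delta_omega = band[-1] - band[0]
assert abs(delta_omega - gamma) / gamma < 0.05
```

Near resonance the gain squared is well approximated by a Lorentzian of width ${\displaystyle \gamma }$, which is what the width check exploits.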

## Kubo formula

An exposition of linear response theory in the context of quantum statistics can be found in a paper by Ryogo Kubo. [1] It defines, in particular, the Kubo formula, which treats the general case in which the "force" h(t) is a perturbation of the basic operator of the system, the Hamiltonian: ${\displaystyle {\hat {H}}_{0}\to {\hat {H}}_{0}-h(t'){\hat {B}}(t')}$, where ${\displaystyle {\hat {B}}}$ corresponds to a measurable quantity serving as the input, while the output x(t) is the perturbation of the thermal expectation value of another measurable quantity ${\displaystyle {\hat {A}}(t)}$. The Kubo formula then expresses the quantum-statistical susceptibility ${\displaystyle \chi (t-t')}$ by a general formula involving only these operators.
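
The standard form of the Kubo formula is ${\displaystyle \chi (t)=(i/\hbar )\,\theta (t)\,\langle [{\hat {A}}(t),{\hat {B}}]\rangle }$, with ${\displaystyle {\hat {A}}(t)}$ the Heisenberg-picture operator and the average taken in the thermal state of ${\displaystyle {\hat {H}}_{0}}$. As an illustrative sketch (the two-level system, the choice ${\displaystyle {\hat {A}}={\hat {B}}=\sigma _{x}}$, and all parameter values are assumptions, with ${\displaystyle \hbar =1}$), it can be evaluated by exact diagonalization:

```python
import numpy as np

# chi(t) = i * theta(t) * < [A(t), B] >_beta  for a hypothetical two-level system
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega0, beta = 1.5, 2.0
H = 0.5 * omega0 * sz                  # unperturbed Hamiltonian H_0

evals, U = np.linalg.eigh(H)

def heisenberg(A, t):
    # A(t) = e^{iHt} A e^{-iHt}, computed in the eigenbasis of H
    phase = np.exp(1j * evals * t)
    return U @ (np.diag(phase) @ (U.conj().T @ A @ U) @ np.diag(phase.conj())) @ U.conj().T

rho = U @ np.diag(np.exp(-beta * evals)) @ U.conj().T   # thermal state e^{-beta H}/Z
rho /= np.trace(rho).real

def chi(t):
    # susceptibility of <sx> to the perturbation -h(t) sx
    comm = heisenberg(sx, t) @ sx - sx @ heisenberg(sx, t)
    return (1j * np.trace(rho @ comm)).real if t >= 0 else 0.0

# closed form for this system: chi(t) = 2 sin(omega0 t) tanh(beta omega0 / 2)
for t in (0.3, 1.0, 2.5):
    assert np.isclose(chi(t), 2 * np.sin(omega0 * t) * np.tanh(0.5 * beta * omega0), atol=1e-10)
```

The step function ${\displaystyle \theta (t)}$ enforces causality: the susceptibility vanishes for ${\displaystyle t<0}$.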

As a consequence of the principle of causality, the complex-valued function ${\displaystyle {\tilde {\chi }}(\omega )}$ is analytic in one half of the complex frequency plane and has its poles confined to the other half; which half depends on the sign convention of the Fourier transform (for the convention implied by the oscillator example above, the poles of ${\displaystyle {\tilde {\chi }}(\omega )}$ lie in the upper half-plane). This leads to the Kramers–Kronig relations, which relate the real and imaginary parts of ${\displaystyle {\tilde {\chi }}(\omega )}$ to each other by principal-value integrals. The simplest example is once more the damped harmonic oscillator. [2]
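
A Kramers–Kronig relation can be checked numerically. Here the hypothetical transform ${\displaystyle {\tilde {\chi }}(\omega )=1/(1+i\omega )}$ of a causal response ${\displaystyle \chi (t)=e^{-t}}$ is assumed, with the ${\displaystyle e^{-i\omega t}}$ sign convention used in the example above:

```python
import numpy as np

def chi_ft(w):
    # hypothetical causal response chi(t) = e^{-t}: chi~(w) = 1/(1 + i w)
    return 1.0 / (1.0 + 1j * w)

w0 = 0.7                  # frequency at which the relation is tested
dw = 1e-3
# midpoint grid that straddles the singular point w' = w0 symmetrically,
# so the sum approximates the principal value
wp = np.arange(-1000.0, 1000.0, dw) + dw / 2.0

# Kramers-Kronig (e^{-i w t} convention):
#   Re chi~(w0) = -(1/pi) P∫ Im chi~(w') / (w' - w0) dw'
re_kk = -np.sum(chi_ft(wp).imag / (wp - w0)) * dw / np.pi
assert abs(re_kk - chi_ft(w0).real) < 2e-3
```

The residual error comes from the finite integration range and grid; the symmetric straddling of the pole makes the divergent contributions cancel pairwise.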

## References

1. Kubo, R., Statistical Mechanical Theory of Irreversible Processes I, Journal of the Physical Society of Japan, vol. 12, pp. 570–586 (1957).
2. De Clozeaux, Linear Response Theory, in: E. Antončik et al., Theory of Condensed Matter, IAEA Vienna, 1968.