# Mellin transform


In mathematics, the Mellin transform is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. It is closely connected to the theory of Dirichlet series and is often used in number theory, mathematical statistics, and the theory of asymptotic expansions; it is also closely related to the Laplace transform, the Fourier transform, and the theory of the gamma function and allied special functions.

## Definition

The Mellin transform of a function f is

${\displaystyle \left\{{\mathcal {M}}f\right\}(s)=\varphi (s)=\int _{0}^{\infty }x^{s-1}f(x)\,dx.}$

The inverse transform is

${\displaystyle \left\{{\mathcal {M}}^{-1}\varphi \right\}(x)=f(x)={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }x^{-s}\varphi (s)\,ds.}$

The notation implies this is a line integral taken over a vertical line in the complex plane, whose real part c is arbitrary provided that it meets certain conditions. Conditions under which this inversion is valid are given in the Mellin inversion theorem.

The transform is named after the Finnish mathematician Hjalmar Mellin.
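The definition lends itself to a direct numerical check. The following Python sketch (the function and evaluation point are illustrative choices) approximates the Mellin transform of ${\displaystyle f(x)=1/(1+x)}$ at ${\displaystyle s=1/2}$, where the known closed form ${\displaystyle \pi /\sin(\pi s)}$ gives exactly ${\displaystyle \pi }$:

```python
import math

# Mellin transform of f(x) = 1/(1+x) at s = 1/2; the exact value is
# pi / sin(pi s) = pi.  Substituting x = e^u maps the integral to the
# whole real line:  M f(s) = integral of e^{s u} / (1 + e^u) du.
s = 0.5
du = 0.01
total = 0.0
u = -60.0
while u <= 60.0:
    total += math.exp(s * u) / (1.0 + math.exp(u)) * du
    u += du
print(total)  # close to pi ~ 3.14159
```

The substitution ${\displaystyle x=e^{u}}$ used here is the same change of variables that relates the Mellin transform to the two-sided Laplace transform below.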

## Relationship to other transforms

The two-sided Laplace transform may be defined in terms of the Mellin transform by

${\displaystyle \left\{{\mathcal {B}}f\right\}(s)=\left\{{\mathcal {M}}f(-\ln x)\right\}(s)}$

and conversely we can get the Mellin transform from the two-sided Laplace transform by

${\displaystyle \left\{{\mathcal {M}}f\right\}(s)=\left\{{\mathcal {B}}f(e^{-x})\right\}(s).}$
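Both identities amount to the substitution ${\displaystyle x=e^{-u}}$: since ${\displaystyle dx=-e^{-u}\,du}$ and the orientation of the interval reverses,

${\displaystyle \left\{{\mathcal {M}}f\right\}(s)=\int _{0}^{\infty }x^{s-1}f(x)\,dx=\int _{-\infty }^{\infty }e^{-su}f(e^{-u})\,du=\left\{{\mathcal {B}}f(e^{-u})\right\}(s),}$

where the two-sided Laplace transform is taken in the convention ${\textstyle \left\{{\mathcal {B}}g\right\}(s)=\int _{-\infty }^{\infty }e^{-su}g(u)\,du}$.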

The Mellin transform may be thought of as integrating using a kernel ${\displaystyle x^{s}}$ with respect to the multiplicative Haar measure, ${\textstyle {\frac {dx}{x}}}$, which is invariant under dilation ${\displaystyle x\mapsto ax}$, so that ${\textstyle {\frac {d(ax)}{ax}}={\frac {dx}{x}};}$ the two-sided Laplace transform integrates with respect to the additive Haar measure ${\displaystyle dx}$, which is translation invariant, so that ${\displaystyle d(x+a)=dx}$.

We may also define the Fourier transform in terms of the Mellin transform and vice versa. In terms of the Mellin transform and of the two-sided Laplace transform defined above,

${\displaystyle \left\{{\mathcal {F}}f\right\}(-s)=\left\{{\mathcal {B}}f\right\}(-is)=\left\{{\mathcal {M}}f(-\ln x)\right\}(-is)\ .}$

We may also reverse the process and obtain

${\displaystyle \left\{{\mathcal {M}}f\right\}(s)=\left\{{\mathcal {B}}f(e^{-x})\right\}(s)=\left\{{\mathcal {F}}f(e^{-x})\right\}(-is)\ .}$

The Mellin transform also connects the Newton series or binomial transform together with the Poisson generating function, by means of the Poisson–Mellin–Newton cycle.

The Mellin transform may also be viewed as the Gelfand transform for the convolution algebra of the locally compact abelian group of positive real numbers with multiplication.

## Examples

### Cahen–Mellin integral

The Mellin transform of the function ${\displaystyle f(x)=e^{-x}}$ is

${\displaystyle \Gamma (s)=\int _{0}^{\infty }x^{s-1}e^{-x}dx}$

where ${\displaystyle \Gamma (s)}$ is the gamma function, a meromorphic function with simple poles at ${\displaystyle s=0,-1,-2,\dots }$. [1] In particular, ${\displaystyle \Gamma (s)}$ is analytic for ${\displaystyle \Re (s)>0}$. Thus, taking ${\displaystyle c>0}$ and ${\displaystyle z^{-s}}$ on the principal branch, the inverse transform gives

${\displaystyle e^{-z}={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }\Gamma (s)z^{-s}\;ds}$.

This integral is known as the Cahen–Mellin integral. [2]
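The inversion can be checked numerically. The Python sketch below evaluates the contour integral along ${\displaystyle \Re (s)=c}$ using a Lanczos approximation for the gamma function at complex arguments; the values ${\displaystyle z=2}$ and ${\displaystyle c=1.5}$, and the truncation of the contour, are illustrative choices:

```python
import cmath
import math

# Lanczos approximation (g = 7, n = 9) to the gamma function, valid for
# complex arguments with Re(z) > 0.5; the coefficients are the standard
# published values.
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = z - 1
    x = _LANCZOS[0]
    for i in range(1, 9):
        x += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

# Cahen-Mellin integral: with s = c + i*t we have ds = i dt, so
#   e^{-z} = (1 / (2 pi)) * integral over t of Gamma(c + i t) z^{-(c + i t)} dt.
z, c = 2.0, 1.5
dt, T = 0.01, 40.0
acc = 0j
t = -T
while t <= T:
    s = complex(c, t)
    acc += cgamma(s) * cmath.exp(-s * cmath.log(z)) * dt
    t += dt
approx = acc / (2 * math.pi)
print(approx.real)  # close to exp(-2) ~ 0.135335
```

The truncation at ${\displaystyle |t|=40}$ is harmless because ${\displaystyle \Gamma (c+it)}$ decays like ${\displaystyle e^{-\pi |t|/2}}$ along vertical lines.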

### Polynomial functions

Since ${\textstyle \int _{0}^{\infty }x^{a}dx}$ is not convergent for any value of ${\displaystyle a\in \mathbb {R} }$, the Mellin transform is not defined for polynomial functions defined on the whole positive real axis. However, by defining it to be zero on different sections of the real axis, it is possible to take the Mellin transform. For example, if

${\displaystyle f(x)={\begin{cases}x^{a}&x<1,\\0&x>1,\end{cases}}}$

then

${\displaystyle {\mathcal {M}}f(s)=\int _{0}^{1}x^{s-1}x^{a}dx=\int _{0}^{1}x^{s+a-1}dx={\frac {1}{s+a}}.}$

Thus ${\displaystyle {\mathcal {M}}f(s)}$ has a simple pole at ${\displaystyle s=-a}$ and is defined for ${\displaystyle \Re (s)>-a}$. Similarly, if

${\displaystyle f(x)={\begin{cases}0&x<1,\\x^{b}&x>1,\end{cases}}}$

then

${\displaystyle {\mathcal {M}}f(s)=\int _{1}^{\infty }x^{s-1}x^{b}dx=\int _{1}^{\infty }x^{s+b-1}dx=-{\frac {1}{s+b}}.}$

Thus ${\displaystyle {\mathcal {M}}f(s)}$ has a simple pole at ${\displaystyle s=-b}$ and is defined for ${\displaystyle \Re (s)<-b}$.
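A quick numerical check of the first case (a Python sketch; the values ${\displaystyle a=2}$ and ${\displaystyle s=1.5}$ are illustrative):

```python
# f(x) = x^a on (0, 1) and zero beyond; M f(s) = 1/(s + a).
# Midpoint rule on [0, 1] with illustrative values a = 2, s = 1.5.
a, s = 2.0, 1.5
n = 100000
dx = 1.0 / n
total = sum(((k + 0.5) * dx) ** (s + a - 1) * dx for k in range(n))
print(total)  # close to 1/(s + a) = 1/3.5 ~ 0.285714
```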

### Exponential functions

For ${\displaystyle p>0}$, let ${\displaystyle f(x)=e^{-px}}$. Then

${\displaystyle {\mathcal {M}}f(s)=\int _{0}^{\infty }x^{s}e^{-px}{\frac {dx}{x}}=\int _{0}^{\infty }\left({\frac {u}{p}}\right)^{s}e^{-u}{\frac {du}{u}}={\frac {1}{p^{s}}}\int _{0}^{\infty }u^{s}e^{-u}{\frac {du}{u}}={\frac {1}{p^{s}}}\Gamma (s).}$
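A numerical sketch (Python, with the illustrative values ${\displaystyle p=3}$ and ${\displaystyle s=2.5}$) confirms the formula; the substitution ${\displaystyle x=e^{u}}$ turns the integrand into ${\displaystyle e^{su}e^{-pe^{u}}}$:

```python
import math

# M{e^{-p x}}(s) = Gamma(s) / p^s; illustrative check at p = 3, s = 2.5.
p, s = 3.0, 2.5
du = 0.001
total = 0.0
u = -30.0
while u <= 10.0:
    total += math.exp(s * u - p * math.exp(u)) * du
    u += du
print(total, math.gamma(s) / p ** s)  # the two values agree
```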

### Zeta function

It is possible to use the Mellin transform to produce one of the fundamental formulas for the Riemann zeta function, ${\displaystyle \zeta (s)}$. Let ${\textstyle f(x)={\frac {1}{e^{x}-1}}}$. Then

${\displaystyle {\mathcal {M}}f(s)=\int _{0}^{\infty }x^{s-1}{\frac {1}{e^{x}-1}}dx=\int _{0}^{\infty }x^{s-1}{\frac {e^{-x}}{1-e^{-x}}}dx=\int _{0}^{\infty }x^{s-1}\sum _{n=1}^{\infty }e^{-nx}dx=\sum _{n=1}^{\infty }\int _{0}^{\infty }x^{s}e^{-nx}{\frac {dx}{x}}=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}\Gamma (s)=\Gamma (s)\zeta (s).}$

Thus,

${\displaystyle \zeta (s)={\frac {1}{\Gamma (s)}}\int _{0}^{\infty }x^{s-1}{\frac {1}{e^{x}-1}}dx.}$
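The identity can be spot-checked numerically at ${\displaystyle s=2}$, where ${\displaystyle \Gamma (2)\zeta (2)=\pi ^{2}/6}$ (a Python sketch; the grid is illustrative):

```python
import math

# Gamma(s) zeta(s) = integral of x^{s-1} / (e^x - 1); at s = 2 this is
# Gamma(2) zeta(2) = pi^2 / 6.  Substituting x = e^u as before.
s = 2.0
du = 0.001
total = 0.0
u = -30.0
while u <= 5.0:
    x = math.exp(u)
    total += math.exp(s * u) / math.expm1(x) * du
    u += du
print(total)  # close to pi^2 / 6 ~ 1.644934
```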

### Generalized Gaussian

For ${\displaystyle p>0}$, let ${\displaystyle f(x)=e^{-x^{p}}}$ (i.e., ${\displaystyle f}$ is a generalized Gaussian distribution without the scaling factor). Then

${\displaystyle {\mathcal {M}}f(s)=\int _{0}^{\infty }x^{s-1}e^{-x^{p}}dx=\int _{0}^{\infty }x^{p-1}x^{s-p}e^{-x^{p}}dx=\int _{0}^{\infty }x^{p-1}(x^{p})^{s/p-1}e^{-x^{p}}dx={\frac {1}{p}}\int _{0}^{\infty }u^{s/p-1}e^{-u}du={\frac {\Gamma (s/p)}{p}}.}$

In particular, setting ${\displaystyle s=1}$ recovers the following form of the gamma function

${\displaystyle \Gamma \left(1+{\frac {1}{p}}\right)=\int _{0}^{\infty }e^{-x^{p}}dx.}$
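At ${\displaystyle p=2}$ this is the half-line Gaussian integral, ${\displaystyle \Gamma (3/2)={\sqrt {\pi }}/2}$, which a short numerical sketch (Python) confirms:

```python
import math

# Gamma(1 + 1/p) = integral of e^{-x^p} over (0, infinity); at p = 2 this
# is the half-line Gaussian integral, Gamma(3/2) = sqrt(pi)/2.
p = 2.0
dx = 1e-4
total = sum(math.exp(-(((k + 0.5) * dx) ** p)) * dx for k in range(200000))
print(total)  # close to sqrt(pi)/2 ~ 0.886227
```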

## Fundamental strip

For ${\displaystyle \alpha ,\beta \in \mathbb {R} }$, let the open strip ${\displaystyle \langle \alpha ,\beta \rangle }$ be defined to be all ${\displaystyle s\in \mathbb {C} }$ such that ${\displaystyle s=\sigma +it}$ with ${\displaystyle \alpha <\sigma <\beta .}$ The fundamental strip of ${\displaystyle {\mathcal {M}}f(s)}$ is defined to be the largest open strip on which it is defined. For example, for ${\displaystyle a>b}$ the fundamental strip of

${\displaystyle f(x)={\begin{cases}x^{a}&x<1,\\x^{b}&x>1,\end{cases}}}$

is ${\displaystyle \langle -a,-b\rangle .}$ As seen by this example, the asymptotics of the function as ${\displaystyle x\to 0^{+}}$ define the left endpoint of its fundamental strip, and the asymptotics of the function as ${\displaystyle x\to +\infty }$ define its right endpoint. To summarize using Big O notation, if ${\displaystyle f}$ is ${\displaystyle O(x^{a})}$ as ${\displaystyle x\to 0^{+}}$ and ${\displaystyle O(x^{b})}$ as ${\displaystyle x\to +\infty ,}$ then ${\displaystyle {\mathcal {M}}f(s)}$ is defined in the strip ${\displaystyle \langle -a,-b\rangle .}$ [3]

An application of this can be seen in the gamma function, ${\displaystyle \Gamma (s).}$ Since ${\displaystyle f(x)=e^{-x}}$ is ${\displaystyle O(x^{0})}$ as ${\displaystyle x\to 0^{+}}$ and ${\displaystyle O(x^{-k})}$ for every ${\displaystyle k>0}$ as ${\displaystyle x\to +\infty ,}$ ${\displaystyle \Gamma (s)={\mathcal {M}}f(s)}$ should be defined in the strip ${\displaystyle \langle 0,+\infty \rangle ,}$ which confirms that ${\displaystyle \Gamma (s)}$ is analytic for ${\displaystyle \Re (s)>0.}$

## As an isometry on L2 spaces

In the study of Hilbert spaces, the Mellin transform is often posed in a slightly different way. For functions in ${\displaystyle L^{2}(0,\infty )}$ (see Lp space) the fundamental strip always includes ${\displaystyle {\tfrac {1}{2}}+i\mathbb {R} }$, so we may define a linear operator ${\displaystyle {\tilde {\mathcal {M}}}}$ as

${\displaystyle {\tilde {\mathcal {M}}}\colon L^{2}(0,\infty )\to L^{2}(-\infty ,\infty ),}$
${\displaystyle \{{\tilde {\mathcal {M}}}f\}(s):={\frac {1}{\sqrt {2\pi }}}\int _{0}^{\infty }x^{-{\frac {1}{2}}+is}f(x)\,dx.}$

In other words, we have set

${\displaystyle \{{\tilde {\mathcal {M}}}f\}(s):={\tfrac {1}{\sqrt {2\pi }}}\{{\mathcal {M}}f\}({\tfrac {1}{2}}+is).}$

This operator is usually denoted by just plain ${\displaystyle {\mathcal {M}}}$ and called the "Mellin transform", but ${\displaystyle {\tilde {\mathcal {M}}}}$ is used here to distinguish from the definition used elsewhere in this article. The Mellin inversion theorem then shows that ${\displaystyle {\tilde {\mathcal {M}}}}$ is invertible with inverse

${\displaystyle {\tilde {\mathcal {M}}}^{-1}\colon L^{2}(-\infty ,\infty )\to L^{2}(0,\infty ),}$
${\displaystyle \{{\tilde {\mathcal {M}}}^{-1}\varphi \}(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }x^{-{\frac {1}{2}}-is}\varphi (s)\,ds.}$

Furthermore, this operator is an isometry, that is to say ${\displaystyle \|{\tilde {\mathcal {M}}}f\|_{L^{2}(-\infty ,\infty )}=\|f\|_{L^{2}(0,\infty )}}$ for all ${\displaystyle f\in L^{2}(0,\infty )}$ (this explains why the factor of ${\displaystyle 1/{\sqrt {2\pi }}}$ was used).
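The isometry itself can be checked numerically. The sketch below (Python; the grids are illustrative) takes ${\displaystyle f(x)=e^{-x}}$, for which ${\textstyle \|f\|_{L^{2}(0,\infty )}^{2}=\int _{0}^{\infty }e^{-2x}\,dx={\tfrac {1}{2}}}$, and approximates ${\textstyle \int _{-\infty }^{\infty }|\{{\tilde {\mathcal {M}}}f\}(t)|^{2}\,dt}$:

```python
import cmath
import math

# Parseval-type check of the isometry for f(x) = e^{-x}:
#   ||f||^2 = integral of e^{-2x} over (0, inf) = 1/2
# should match the integral of |Mtilde f(t)|^2 over the real line.
# With x = e^u the inner integrand x^{-1/2 + i t} e^{-x} dx becomes
# e^{u/2} e^{i u t} e^{-e^u} du; the grid sizes below are illustrative.
du, dt = 0.01, 0.05
us = [-20.0 + k * du for k in range(2401)]                 # u in [-20, 4]
ws = [math.exp(u / 2.0 - math.exp(u)) * du for u in us]    # real weights

norm_sq = 0.0
t = -10.0
while t <= 10.0:
    F = sum(w * cmath.exp(1j * u * t) for u, w in zip(us, ws))
    F /= math.sqrt(2.0 * math.pi)
    norm_sq += abs(F) ** 2 * dt
    t += dt
print(norm_sq)  # close to 0.5
```

Truncating the ${\displaystyle t}$-integral at ${\displaystyle |t|=10}$ is safe because ${\displaystyle |\Gamma ({\tfrac {1}{2}}+it)|}$ decays exponentially.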

## In probability theory

In probability theory, the Mellin transform is an essential tool in studying the distributions of products of random variables. [4] If X is a random variable, and X^+ = max{X, 0} denotes its positive part, while X^− = max{−X, 0} is its negative part, then the Mellin transform of X is defined as [5]

${\displaystyle {\mathcal {M}}_{X}(s)=\int _{0}^{\infty }x^{s}dF_{X^{+}}(x)+\gamma \int _{0}^{\infty }x^{s}dF_{X^{-}}(x),}$

where γ is a formal indeterminate with γ^2 = 1. This transform exists for all s in some complex strip D = {s : a ≤ Re(s) ≤ b}, where a ≤ 0 ≤ b. [5]

The Mellin transform ${\displaystyle {\mathcal {M}}_{X}(it)}$ of a random variable X uniquely determines its distribution function FX. [5] The importance of the Mellin transform in probability theory lies in the fact that if X and Y are two independent random variables, then the Mellin transform of their products is equal to the product of the Mellin transforms of X and Y: [6]

${\displaystyle {\mathcal {M}}_{XY}(s)={\mathcal {M}}_{X}(s){\mathcal {M}}_{Y}(s).}$
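For positive random variables this product rule reads ${\displaystyle \mathbb {E} [(XY)^{s}]=\mathbb {E} [X^{s}]\,\mathbb {E} [Y^{s}]}$, which can be illustrated by Monte Carlo simulation (a Python sketch; the exponential distribution, seed, and ${\displaystyle s=1.5}$ are illustrative choices):

```python
import math
import random

# For positive independent X and Y the product rule reads
#   E[(X Y)^s] = E[X^s] * E[Y^s].
# Monte Carlo check with X, Y ~ Exp(1), where E[X^s] = Gamma(s + 1);
# the distribution, seed, and s = 1.5 are illustrative choices.
random.seed(0)
s, n = 1.5, 200000
xs = [random.expovariate(1.0) for _ in range(n)]
ys = [random.expovariate(1.0) for _ in range(n)]
lhs = sum((x * y) ** s for x, y in zip(xs, ys)) / n
rhs = (sum(x ** s for x in xs) / n) * (sum(y ** s for y in ys) / n)
print(lhs, rhs, math.gamma(s + 1) ** 2)  # agree to Monte Carlo accuracy
```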

## Problems with the Laplacian in a cylindrical coordinate system

In the Laplacian in cylindrical coordinates in a generic dimension (orthogonal coordinates with one angle, one radius, and the remaining lengths), there is always the term:

${\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)=f_{rr}+{\frac {f_{r}}{r}}}$

For example, in 2-D polar coordinates the Laplacian is:

${\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}}$

and in 3-D cylindrical coordinates the Laplacian is,

${\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.}$

This term can be treated with the Mellin transform, [7] since:

${\displaystyle {\mathcal {M}}\left(r^{2}f_{rr}+rf_{r},r\to s\right)=s^{2}{\mathcal {M}}\left(f,r\to s\right)=s^{2}F}$
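This operational identity follows from two integrations by parts, assuming ${\displaystyle f}$ decays fast enough that the boundary terms ${\displaystyle r^{s}f(r)}$ and ${\displaystyle r^{s+1}f'(r)}$ vanish at ${\displaystyle r=0}$ and ${\displaystyle r=\infty }$:

${\displaystyle {\mathcal {M}}\left(rf_{r}\right)(s)=\int _{0}^{\infty }r^{s}f'(r)\,dr=-sF(s),\qquad {\mathcal {M}}\left(r^{2}f_{rr}\right)(s)=\int _{0}^{\infty }r^{s+1}f''(r)\,dr=s(s+1)F(s),}$

and adding the two gives ${\displaystyle s(s+1)F(s)-sF(s)=s^{2}F(s)}$.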

For example, the 2-D Laplace equation in polar coordinates,

${\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}=0,}$

becomes, after multiplication by ${\displaystyle r^{2}}$, the PDE in two variables

${\displaystyle r^{2}f_{rr}+rf_{r}+f_{\theta \theta }=0,}$

which a Mellin transform in the radius turns into the simple harmonic oscillator equation:

${\displaystyle F_{\theta \theta }+s^{2}F=0}$

with general solution:

${\displaystyle F(s,\theta )=C_{1}(s)\cos(s\theta )+C_{2}(s)\sin(s\theta )}$

Now impose, for example, some simple wedge boundary conditions on the original Laplace equation:

${\displaystyle f(r,-\theta _{0})=a(r),\quad f(r,\theta _{0})=b(r)}$

Under the Mellin transform, these become simply:

${\displaystyle F(s,-\theta _{0})=A(s),\quad F(s,\theta _{0})=B(s)}$

Imposing these conditions on the general solution particularizes it to:

${\displaystyle F(s,\theta )=A(s){\frac {\sin(s(\theta _{0}-\theta ))}{\sin(2\theta _{0}s)}}+B(s){\frac {\sin(s(\theta _{0}+\theta ))}{\sin(2\theta _{0}s)}}}$

Now, by the convolution theorem for the Mellin transform, the solution in the Mellin domain can be inverted:

${\displaystyle f(r,\theta )={\frac {r^{m}\cos(m\theta )}{2\theta _{0}}}\int _{0}^{\infty }\left({\frac {a(x)}{x^{2m}+2r^{m}x^{m}\sin(m\theta )+r^{2m}}}+{\frac {b(x)}{x^{2m}-2r^{m}x^{m}\sin(m\theta )+r^{2m}}}\right)x^{m-1}\,dx}$

where the following inverse transform relation was employed:

${\displaystyle {\mathcal {M}}^{-1}\left({\frac {\sin(s\varphi )}{\sin(2\theta _{0}s)}};s\to r\right)={\frac {1}{2\theta _{0}}}{\frac {r^{m}\sin(m\varphi )}{1+2r^{m}\cos(m\varphi )+r^{2m}}}}$

where ${\displaystyle m={\frac {\pi }{2\theta _{0}}}}$.

## Applications

The Mellin transform is widely used in computer science for the analysis of algorithms because of its scale invariance property. The magnitude of the Mellin transform of a scaled function is identical to the magnitude of the original function for purely imaginary inputs. This scale invariance is analogous to the shift invariance of the Fourier transform: the magnitude of the Fourier transform of a time-shifted function is identical to the magnitude of the Fourier transform of the original function.
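The invariance follows from ${\displaystyle \{{\mathcal {M}}f(ax)\}(s)=a^{-s}\{{\mathcal {M}}f\}(s)}$ together with ${\displaystyle |a^{-it}|=1}$, and is easy to verify numerically. A Python sketch with the illustrative choices ${\displaystyle f(x)=xe^{-x}}$ (whose fundamental strip contains the imaginary axis), ${\displaystyle a=2}$, and ${\displaystyle t=0.7}$:

```python
import cmath
import math

# Scale invariance: M{f(a x)}(s) = a^{-s} M{f}(s), and |a^{-i t}| = 1, so on
# purely imaginary s = i t the magnitudes coincide.  Illustrative choices:
# f(x) = x e^{-x} (its fundamental strip contains the imaginary axis),
# a = 2, t = 0.7.
t, a = 0.7, 2.0
s = complex(0.0, t)
du = 0.001

def mellin_scaled(scale):
    # M of x -> f(scale * x) = (scale * x) e^{-scale * x}, via x = e^u:
    # x^{s-1} f(scale x) dx becomes scale * e^{s u} * x * e^{-scale x} du.
    acc = 0j
    u = -30.0
    while u <= 8.0:
        x = math.exp(u)
        acc += scale * cmath.exp(s * u) * x * math.exp(-scale * x) * du
        u += du
    return acc

g1 = mellin_scaled(1.0)   # M{f}(i t)
g2 = mellin_scaled(a)     # M{f(a x)}(i t)
print(abs(g1), abs(g2))   # equal magnitudes
```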

This property is useful in image recognition. An image of an object is easily scaled when the object is moved towards or away from the camera.

In quantum mechanics and especially quantum field theory, Fourier space is enormously useful and used extensively because momentum and position are Fourier transforms of each other (for instance, Feynman diagrams are much more easily computed in momentum space). In 2011, A. Liam Fitzpatrick, Jared Kaplan, João Penedones, Suvrat Raju, and Balt C. van Rees showed that Mellin space serves an analogous role in the context of the AdS/CFT correspondence. [8] [9] [10]

## Notes

1. Whittaker, E. T.; Watson, G. N. (1996). A Course of Modern Analysis. Cambridge University Press.
2. Hardy, G. H.; Littlewood, J. E. (1916). "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes". Acta Mathematica. 41 (1): 119–196. (See notes therein for further references to Cahen's and Mellin's work, including Cahen's thesis.)
3. Flajolet, P.; Gourdon, X.; Dumas, P. (1995). "Mellin transforms and asymptotics: Harmonic sums" (PDF). Theoretical Computer Science. 144 (1–2): 3–58. doi:10.1016/0304-3975(95)00002-e.
4. Galambos & Simonelli (2004, p. 15)
5. Galambos & Simonelli (2004, p. 16)
6. Galambos & Simonelli (2004, p. 23)
7. Bhimsen, Shivamoggi, Chapter 6: The Mellin Transform, par. 4.3: Distribution of a Potential in a Wedge, pp. 267–8
8. A. Liam Fitzpatrick, Jared Kaplan, Joao Penedones, Suvrat Raju, Balt C. van Rees. "A Natural Language for AdS/CFT Correlators".
9. A. Liam Fitzpatrick, Jared Kaplan. "Unitarity and the Holographic S-Matrix"
10. A. Liam Fitzpatrick. "AdS/CFT and the Holographic S-Matrix", video lecture.
