Constant-function market makers (CFMMs) are a paradigm in the design of trading venues where a trading function and a set of rules determine how liquidity takers (LTs) and liquidity providers (LPs) interact, and how markets are cleared. The trading function is deterministic and known to all market participants.
CFMMs maintain pools of liquidity in two assets. The takers and providers of liquidity interact through these pools: LPs deposit their assets in the pool and LTs exchange assets directly with the pool. CFMMs rely on two rules: the LT trading condition and the LP provision condition. [1] The LT trading condition links the state of the pool before and after a trade is executed, and it determines the relative price of the assets as a function of their quantities in the pool. [2] The LP provision condition links the state of the pool before and after liquidity is deposited or withdrawn by an LP. Thus, the trading function establishes the link between liquidity and prices, so LTs can compute the execution costs of their trades as a function of the trade size, and LPs can compute the exact quantities that they deposit. Together, the two conditions imply that price formation in CFMMs happens only through LT trades (see below).
In decentralized platforms running on peer-to-peer networks, CFMMs are hard-coded and immutable programs implemented as smart contracts, [3] where LPs and LTs invoke the code of the contract to execute their transactions. A particular case of CFMMs are constant product market makers (CPMMs), such as Uniswap v2 and Uniswap v3, where the trading function uses the product of the quantities of each asset in the pool to determine clearing prices. CFMMs are also popular in prediction markets. [4]
Consider a reference asset $Y$ and an asset $X$ which is valued in terms of $Y$. Assume that the liquidity pool of the CFMM initially consists of a quantity $x$ of asset $X$ and a quantity $y$ of asset $Y$. The pair $(x, y)$ is referred to as the reserves of the pool (the following definitions can be extended to a basket of more than two assets [5] ). The CFMM is characterised by a trading function $f \colon \mathbb{R}_{+} \times \mathbb{R}_{+} \to \mathbb{R}$ (also known as the invariant) defined over the pool reserves $x$ and $y$. The trading function is continuously differentiable and increasing in its arguments ($\mathbb{R}_{+}$ denotes the set of positive real numbers).
For instance, the trading function of the constant product market maker (CPMM) is $f(x, y) = x\,y$. Other types of CFMMs are the constant sum market maker with $f(x, y) = x + y$; the constant mean market maker with $f(x, y) = x^{w_x}\, y^{w_y}$, where $w_x + w_y = 1$ and $w_x, w_y > 0$; and the hybrid function market maker, which uses combinations of trading functions.
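As an illustration, the trading functions above can be written as short functions of the two reserves. The following Python sketch is only a numerical illustration; the function and variable names are not taken from any protocol.

```python
# Illustrative sketch (not from any specific protocol): the trading functions
# named above, written as plain Python functions of the two reserves.

def constant_product(x: float, y: float) -> float:
    """CPMM invariant, f(x, y) = x * y (Uniswap v2-style)."""
    return x * y

def constant_sum(x: float, y: float) -> float:
    """Constant sum invariant, f(x, y) = x + y."""
    return x + y

def constant_mean(x: float, y: float, wx: float = 0.5, wy: float = 0.5) -> float:
    """Constant mean (Balancer-style) invariant, f(x, y) = x**wx * y**wy with wx + wy = 1."""
    assert abs(wx + wy - 1.0) < 1e-12 and wx > 0 and wy > 0
    return x ** wx * y ** wy

# Any LT trade must leave the chosen invariant unchanged, e.g. for the CPMM:
x, y = 100.0, 400.0          # reserves of assets X and Y
k2 = constant_product(x, y)  # squared depth, kappa**2 = 40_000
```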
LT transactions involve exchanging a quantity $\Delta x$ of asset $X$ for a quantity $\Delta y$ of asset $Y$, and vice-versa. The quantities to exchange are determined by the LT trading condition:
$$ f(x + \Delta x,\, y - \Delta y) = f(x, y) = \kappa^2\,, \qquad (1) $$
where $\kappa$ is the depth of the pool [2] (see the LP provision condition below) and is a measure of the available liquidity. The value of the depth is constant before and after a trade is executed, so the LT trading condition (1) defines a level curve. For a fixed value of the depth $\kappa$, the level function $\varphi$ (also known as the forward exchange function [5] ) is such that $f\left(x, \varphi(x)\right) = \kappa^2$. For any value of the depth, the level function is twice differentiable. [6]
The LT trading condition (1) links the state of the pool before and after a liquidity taking trade is executed. For LTs, this condition specifies the exchange rate $\tilde{Z}(\Delta x)$, of asset $X$ in terms of the reference asset $Y$, to trade a (possibly negative) quantity $\Delta x$ of asset $X$:
$$ \tilde{Z}(\Delta x) = \frac{\Delta y}{\Delta x} = \frac{\varphi(x) - \varphi(x + \Delta x)}{\Delta x}\,. $$
The marginal exchange rate $Z$ of asset $X$ in terms of asset $Y$, akin to the midprice in a limit order book (LOB), is the price for an infinitesimal trade in a CFMM:
$$ Z = \lim_{\Delta x \to 0} \tilde{Z}(\Delta x) = -\varphi'(x)\,. $$
It can be shown that absence of roundtrip arbitrage in a CFMM implies that the level function $\varphi$ must be convex. [1]
Execution costs in the CFMM are defined as the difference between the marginal exchange rate and the exchange rate at which a trade is executed. It has been shown that LTs can use the convexity of the level function around the pool's reserves to approximate the execution cost of a trade of size $\Delta x$ by $\tfrac{1}{2}\,\varphi''(x)\,\Delta x$. [2]
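The following Python sketch illustrates these quantities for a hypothetical CPMM pool (numerical values are illustrative, not from any specific venue): the level function, the exchange rate of a finite trade, the marginal rate, and the convexity approximation of the execution cost.

```python
# Illustrative CPMM example of the LT trading condition: the level function,
# the exchange rate for a finite trade, the marginal rate, and the convexity
# approximation of the execution cost. Variable names are illustrative.

x, y = 100.0, 400.0          # pool reserves of X and Y
k2 = x * y                   # kappa**2, the squared depth

def phi(q: float) -> float:
    """Level function of the CPMM: phi(x) = kappa**2 / x."""
    return k2 / q

def exec_rate(dx: float) -> float:
    """Exchange rate Z~(dx) = (phi(x) - phi(x + dx)) / dx = kappa**2 / (x * (x + dx))."""
    return k2 / (x * (x + dx))

Z = k2 / x**2                # marginal exchange rate Z = y / x = 4.0
dx = 5.0                     # LT sells 5 units of X to the pool
dy = phi(x) - phi(x + dx)    # quantity of Y received

phi_2nd = 2 * k2 / x**3      # phi''(x), curvature of the level function
cost_exact = Z - exec_rate(dx)       # exact execution cost per unit of X
cost_approx = 0.5 * phi_2nd * dx     # second-order (convexity) approximation

print(dy, cost_exact, cost_approx)   # ~19.05, ~0.1905, 0.2
```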
LP transactions involve depositing or withdrawing quantities $(\Delta x, \Delta y)$ of asset $X$ and asset $Y$. Let $\kappa$ be the initial depth of the pool and let $\tilde{\kappa}$ be the depth of the pool after an LP deposits $(\Delta x, \Delta y)$, i.e., $f(x, y) = \kappa^2$ and $f(x + \Delta x, y + \Delta y) = \tilde{\kappa}^2$. Let $\varphi_{\kappa}$ and $\varphi_{\tilde{\kappa}}$ be the level functions corresponding to the values $\kappa$ and $\tilde{\kappa}$, respectively. Denote by $Z = -\varphi_{\kappa}'(x)$ the initial marginal exchange rate of the pool. The LP provision condition requires that LPs do not change the marginal rate $Z$, so
$$ -\varphi_{\tilde{\kappa}}'(x + \Delta x) = -\varphi_{\kappa}'(x) = Z\,. \qquad (2) $$
The LP provision condition (2) links the state of the pool before and after a liquidity provision operation is executed. The trading function is increasing in the pool reserves $x$ and $y$, so when liquidity provision activity increases (decreases) the size of the pool, the value of the pool's depth $\kappa$ increases (decreases). The value of $\kappa$ can therefore be seen as a measure of the liquidity depth in the pool. Note that the LP provision condition holds for any homothetic trading function.
In CPMMs such as Uniswap v2, the trading function is $f(x, y) = x\,y$, so the level function is $\varphi(x) = \kappa^2 / x$, the marginal exchange rate is $Z = y / x = \kappa^2 / x^2$, and the exchange rate for a quantity $\Delta x$ is
$$ \tilde{Z}(\Delta x) = \frac{\kappa^2}{x\,(x + \Delta x)}\,. $$
In CPMMs, the liquidity provision condition is $\Delta y / \Delta x = y / x$ when the quantities $(\Delta x, \Delta y)$ are deposited in the pool. Thus, liquidity is provided so that the proportion of the reserves $x$ and $y$ in the pool is preserved. [2]
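A minimal numerical check of the LP provision condition in a CPMM, assuming a hypothetical pool with illustrative reserves: a deposit made in the proportion of the current reserves leaves the marginal rate unchanged and increases the depth.

```python
# Illustrative CPMM check of the LP provision condition: a deposit in the
# proportion of the current reserves leaves the marginal rate unchanged and
# increases the depth. Names and values are illustrative.
import math

x, y = 100.0, 400.0
kappa = math.sqrt(x * y)     # initial depth
Z = y / x                    # initial marginal exchange rate

dx = 10.0                    # LP deposits 10 units of X ...
dy = Z * dx                  # ... and Y in the same proportion y/x (LP condition)

x_new, y_new = x + dx, y + dy
kappa_new = math.sqrt(x_new * y_new)

assert abs(y_new / x_new - Z) < 1e-12   # marginal rate is unchanged
assert kappa_new > kappa                # depth (liquidity) has increased
```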
For LPs, the key difference between traditional markets based on LOBs and CFMMs is that in LOBs, market makers post limit orders above and below the midprice to earn the spread on roundtrip trades, while in CFMMs, LPs earn the fees paid by LTs when their liquidity is used.
Without fees paid by LTs, liquidity provision in CFMMs is a loss-leading activity. Loss-Versus-Rebalancing (LVR) is a popular measure of these losses. [7] Assume the marginal exchange rate $S_t$ follows the dynamics $\mathrm{d}S_t = \sigma\, S_t\, \mathrm{d}W_t$, where $W$ is a standard Brownian motion; then the LVR is given by
$$ \mathrm{LVR}_T = \int_0^T \frac{\sigma^2\, S_t^2}{2\, \varphi''(x_t)}\, \mathrm{d}t\,, $$
where $x_t$ denotes the pool's reserves in asset $X$ at time $t$.
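The LVR integral above can be discretised along a simulated path of the marginal rate. The sketch below assumes a CPMM pool and a geometric Brownian motion with constant volatility; all parameter values are illustrative.

```python
# Illustrative discretisation of the LVR integral for a CPMM pool whose
# marginal rate follows a geometric Brownian motion. The closed form of the
# CPMM integrand, sigma**2 * kappa * sqrt(S) / 4, can be used as a cross-check.
import math, random

kappa, sigma, T, n = 200.0, 0.5, 1.0, 100_000
dt = T / n
S = 4.0                      # initial marginal exchange rate
lvr = 0.0
random.seed(0)

for _ in range(n):
    x = kappa / math.sqrt(S)              # CPMM reserves in X at rate S
    phi_2nd = 2 * kappa**2 / x**3         # phi''(x) for the CPMM
    lvr += 0.5 * sigma**2 * S**2 / phi_2nd * dt   # LVR integrand times dt
    S *= math.exp(-0.5 * sigma**2 * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))

print(lvr)                   # accumulated LVR over [0, T] along this path
```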
To thoroughly characterise their losses, LPs can also use Predictable Loss (PL), a comprehensive and model-free measure of the unhedgeable and predictable losses of liquidity provision. [1] One source of PL is the convexity cost (losses due to adverse selection, which can be regarded as a generalised LVR), whose magnitude depends on liquidity taking activity and on the convexity of the level function. The other source is the opportunity cost, incurred by LPs who lock assets in the pool instead of investing them in the risk-free asset. For an LP providing reserves $(x_0, y_0)$ at time $t = 0$ and withdrawing liquidity at time $T$, the convexity cost component of PL is
$$ \frac{1}{2}\int_0^T \varphi''(x_t)\, \mathrm{d}\langle x \rangle_t\,, $$
where $\langle x \rangle$ is an increasing stochastic process with initial value $\langle x \rangle_0 = 0$ (the quadratic variation of the reserves), and $(x_t)_{t \in [0, T]}$ is a process that describes the reserves in asset $X$. In particular, the reserves process satisfies $Z_t = -\varphi'(x_t)$, where $Z_t$ is the marginal exchange rate at time $t$.
PL can be estimated without specifying dynamics for the marginal rate or the trading flow [8] and without specifying a parametric form for the level function. PL shows that liquidity provision generates losses for any type of LT trading activity (informed and uninformed). The level of fee revenue must exceed PL in expectation for liquidity provision to be profitable in CFMMs.
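The convexity cost component of PL can likewise be approximated along a simulated path by summing increments of the quadratic variation of the reserves. The sketch below assumes a CPMM pool with illustrative parameters.

```python
# Illustrative estimate of the convexity cost (1/2) * integral of phi''(x_t) d<x>_t
# for a CPMM, along a simulated path of the marginal rate. Names are illustrative.
import math, random

kappa, sigma, T, n = 200.0, 0.5, 1.0, 100_000
dt = T / n
S = 4.0
random.seed(1)

convexity_cost = 0.0
x_prev = kappa / math.sqrt(S)
for _ in range(n):
    S *= math.exp(-0.5 * sigma**2 * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    x = kappa / math.sqrt(S)              # reserves implied by Z_t = -phi'(x_t)
    dqv = (x - x_prev) ** 2               # increment of the quadratic variation <x>
    phi_2nd = 2 * kappa**2 / x_prev**3    # phi''(x) at the previous reserves
    convexity_cost += 0.5 * phi_2nd * dqv
    x_prev = x

print(convexity_cost)        # realised convexity cost over [0, T]
```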
Impermanent loss, or divergence loss, is sometimes used to characterise the risk of providing liquidity in a CFMM. [9] Impermanent loss compares the evolution of the value of the LP's assets in the pool with the evolution of a self-financing buy-and-hold portfolio invested in an alternative venue. The self-financing portfolio is initiated with the same quantities $(x_0, y_0)$ as those that the LP deposits in the pool. It can be shown that the impermanent loss at time $t$ is
$$ \mathrm{IL}_t = Z_t\,(x_t - x_0) + \left( \varphi(x_t) - \varphi(x_0) \right), $$
where $x_t$ are the reserves in asset $X$ in the pool at time $t$ and $Z_t = -\varphi'(x_t)$ is the marginal exchange rate.
The convexity of the level function shows that $\mathrm{IL}_t \le 0$. In the case of CPMMs, the impermanent loss is given by
$$ \mathrm{IL}_t = -\frac{\kappa}{\sqrt{Z_0}} \left( \sqrt{Z_t} - \sqrt{Z_0} \right)^2 , $$
where $Z_t$ is the marginal exchange rate in the CPMM pool at time $t$.
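The impermanent loss of a CPMM position can be computed directly from the reserves implied by the marginal rates, as in the following sketch (illustrative values; the closed-form expression above is used as a cross-check).

```python
# Illustrative computation of impermanent loss in a CPMM: compare the value of
# the LP's position in the pool with a buy-and-hold portfolio of the initial
# deposit, both marked at the new marginal rate. Names and values are illustrative.
import math

kappa = 200.0
Z0, Zt = 4.0, 6.25           # marginal exchange rates at times 0 and t

x0, y0 = kappa / math.sqrt(Z0), kappa * math.sqrt(Z0)   # initial reserves deposited
xt, yt = kappa / math.sqrt(Zt), kappa * math.sqrt(Zt)   # reserves at time t

pool_value = xt * Zt + yt            # LP's holdings in the pool, in units of Y
hold_value = x0 * Zt + y0            # buy-and-hold portfolio, in units of Y

il = pool_value - hold_value
il_closed_form = -kappa * (math.sqrt(Zt) - math.sqrt(Z0)) ** 2 / math.sqrt(Z0)
assert abs(il - il_closed_form) < 1e-9
print(il)                            # negative: the LP is worse off than holding
```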
Impermanent loss is not an appropriate measure of the losses of LPs because it can underestimate or overestimate the losses that are solely attributable to liquidity provision. More precisely, the alternative buy-and-hold portfolio is not exposed to the same market risk as the holdings of the LP in the pool, and the impermanent loss can be partly hedged. In contrast, PL is the predictable and unhedgeable component in the wealth of LPs. [1]
Concentrated liquidity (CL) is a feature introduced by Uniswap v3 for CPMMs. The key feature of a CPMM pool with CL is that LPs specify a range of exchange rates in which to post their liquidity. The bounds of the liquidity range take values in a discretised finite set of exchange rates called ticks. Concentrating liquidity increases fee revenue, but it also increases PL and concentration risk, i.e., the risk of the exchange rate exiting the range. [10]
An early description of a CFMM was published by economist Robin Hanson in "Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation" (2002). [11] Early literature referred to the broader class of automated market makers, including that of the Hollywood Stock Exchange, founded in 1999; the term "constant-function market maker" was introduced in "Improved Price Oracles: Constant Function Market Makers" (Angeris & Chitra 2020). [12] First seen in production on a Minecraft server in 2012, [13] CFMMs have become a popular architecture for decentralised exchanges.
A crowdfunded CFMM is a CFMM that makes markets using assets deposited by many different users. Users may contribute their assets to the CFMM's inventory and receive in exchange a pro rata share of the inventory, claimable at any point for the assets in the inventory at the time the claim is made. [3]
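A minimal sketch of the pro rata share accounting described above, assuming a two-asset pool and using the square root of the product of the reserves as the initial share supply (a common but not universal convention); real contracts add fees, minimum-liquidity and rounding rules.

```python
# Minimal sketch of pro-rata share accounting in a crowdfunded CFMM
# (illustrative only; real contracts such as Uniswap v2 add fees, minimum
# liquidity, and rounding rules). Names are illustrative.
import math

x, y = 100.0, 400.0          # pooled reserves of X and Y
total_shares = math.sqrt(x * y)

def deposit(dx: float, dy: float) -> float:
    """Mint shares in proportion to the deposited fraction of the reserves."""
    global x, y, total_shares
    minted = total_shares * dx / x       # assumes dy / dx == y / x
    x, y, total_shares = x + dx, y + dy, total_shares + minted
    return minted

def redeem(shares: float) -> tuple[float, float]:
    """Burn shares and withdraw the pro rata slice of the current reserves."""
    global x, y, total_shares
    frac = shares / total_shares
    out_x, out_y = frac * x, frac * y
    x, y, total_shares = x - out_x, y - out_y, total_shares - shares
    return out_x, out_y

s = deposit(10.0, 40.0)      # contribute 10% of the pool
print(redeem(s))             # claims back ~(10.0, 40.0) if reserves are unchanged
```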