Capacity of a set

In mathematics, the capacity of a set in Euclidean space is a measure of the "size" of that set. Unlike, say, Lebesgue measure, which measures a set's volume or physical extent, capacity is a mathematical analogue of a set's ability to hold electrical charge. More precisely, it is the capacitance of the set: the total charge a set can hold while maintaining a given potential energy. The potential energy is computed with respect to an idealized ground at infinity for the harmonic or Newtonian capacity, and with respect to a surface for the condenser capacity.

Historical note

The notion of capacity of a set and of "capacitable" set was introduced by Gustave Choquet in 1950: for a detailed account, see reference (Choquet 1986).

Definitions

Condenser capacity

Let Σ be a closed, smooth, (n − 1)-dimensional hypersurface in n-dimensional Euclidean space ℝⁿ, n ≥ 3; K will denote the n-dimensional compact (i.e., closed and bounded) set of which Σ is the boundary. Let S be another (n − 1)-dimensional hypersurface that encloses Σ: in reference to its origins in electromagnetism, the pair (Σ, S) is known as a condenser. The condenser capacity of Σ relative to S, denoted C(Σ, S) or cap(Σ, S), is given by the surface integral

C(\Sigma, S) = -\frac{1}{(n-2)\sigma_{n-1}} \int_S \frac{\partial u}{\partial \nu}\, \mathrm{d}\sigma

where:

u is the unique harmonic function on the region D between Σ and S with the boundary conditions u(x) = 1 on Σ and u(x) = 0 on S;
ν is the outward unit normal to S and ∂u/∂ν = ∇u · ν is the normal derivative of u across S; and
σ_{n−1} = 2π^{n/2}/Γ(n/2) is the surface area of the unit sphere in ℝⁿ.

C(Σ, S) can be equivalently defined by the volume integral

C(\Sigma, S) = \frac{1}{(n-2)\sigma_{n-1}} \int_D |\nabla u|^2\, \mathrm{d}x.

The condenser capacity also has a variational characterization: C(Σ, S) is the infimum of the Dirichlet energy functional

I[v] = \frac{1}{(n-2)\sigma_{n-1}} \int_D |\nabla v|^2\, \mathrm{d}x

over all continuously differentiable functions v on D with v(x) = 1 on Σ and v(x) = 0 on S.
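
For example, take n = 3 (so that (n − 2)σ_{n−1} = 4π) and let Σ and S be concentric spheres of radii a and b with 0 < a < b. The unique harmonic function on the spherical shell D with u = 1 on Σ and u = 0 on S is

u(x) = \frac{a\,(b - |x|)}{|x|\,(b - a)},

and evaluating either the flux integral or the Dirichlet energy gives

C(\Sigma, S) = \frac{1}{4\pi} \int_D |\nabla u|^2\, \mathrm{d}x = \frac{ab}{b - a},

the classical capacitance of a spherical condenser in Gaussian units.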

Harmonic capacity

Heuristically, the harmonic capacity of K, the region bounded by Σ, can be found by taking the condenser capacity of Σ with respect to infinity. More precisely, let u be the harmonic function in the complement of K satisfying u = 1 on Σ and u(x) → 0 as |x| → ∞. Thus u is the Newtonian potential of the simple layer Σ. The harmonic capacity or Newtonian capacity of K, denoted C(K) or cap(K), is then defined by

C(K) = \frac{1}{(n-2)\sigma_{n-1}} \int_{\mathbb{R}^n \setminus K} |\nabla u|^2\, \mathrm{d}x.

If S is a rectifiable hypersurface completely enclosing K, then the harmonic capacity can be equivalently rewritten as the integral over S of the outward normal derivative of u:

C(K) = -\frac{1}{(n-2)\sigma_{n-1}} \int_S \frac{\partial u}{\partial \nu}\, \mathrm{d}\sigma.

The harmonic capacity can also be understood as a limit of the condenser capacity. To wit, let S_r denote the sphere of radius r about the origin in ℝⁿ. Since K is bounded, for sufficiently large r, S_r will enclose K and (Σ, S_r) will form a condenser pair. The harmonic capacity is then the limit as r tends to infinity:

C(K) = \lim_{r \to \infty} C(\Sigma, S_r).

The harmonic capacity is a mathematically abstract version of the electrostatic capacity of the conductor K and is always non-negative and finite: 0 ≤ C(K) < +∞.
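
For example, for the closed ball K = {x : |x| ≤ R} in ℝⁿ the capacity potential is u(x) = (R/|x|)^{n−2}, and the definition above gives

C(K) = \frac{1}{(n-2)\sigma_{n-1}} \int_{|x| > R} |\nabla u|^2\, \mathrm{d}x = R^{\,n-2},

so in ℝ³ the harmonic capacity of a ball is simply its radius. This agrees with the limit formula: for the spherical condenser of radii a < b computed above, C(Σ, S_b) = ab/(b − a) → a as b → ∞.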

The Wiener capacity or Robin constant W(K) of K is given by

W(K) = C(K)^{-1}.

Logarithmic capacity

In two dimensions, the capacity is defined as above, but dropping the factor of (n − 2) in the definition:

C(K) = -\frac{1}{2\pi} \int_S \frac{\partial u}{\partial \nu}\, \mathrm{d}\sigma.

This is often called the logarithmic capacity; the term "logarithmic" arises because the potential function goes from being an inverse power to a logarithm in the two-dimensional setting, as made explicit below. It may also be called the conformal capacity, in reference to its relation to the conformal radius.

Properties

The harmonic function u is called the capacity potential: the Newtonian potential when n ≥ 3 and the logarithmic potential when n = 2. It can be obtained via a Green's function as

u(x) = \int G(x - y)\, \mathrm{d}\mu(y)

with x a point exterior to S, and

G(x - y) = |x - y|^{2-n}

when n ≥ 3, and

G(x - y) = \log \frac{1}{|x - y|}

for n = 2.

The measure μ is called the capacitary measure or equilibrium measure. It is generally taken to be a Borel measure. It is related to the capacity as

C(K) = \int \mathrm{d}\mu(y).
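
For the ball of radius R in ℝ³, for instance, the capacitary measure is the uniform measure of total mass R on the boundary sphere: its Newtonian potential

u(x) = \int \frac{\mathrm{d}\mu(y)}{|x - y|} = \frac{R}{|x|}, \qquad |x| \ge R,

equals 1 on the sphere and vanishes at infinity, and its total mass reproduces the harmonic capacity C(K) = R computed above.
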
The variational definition of capacity over the Dirichlet energy can be re-expressed as

C(K) = \left[ \inf_{\mu} E[\mu] \right]^{-1}

with the infimum taken over all positive Borel measures μ concentrated on K, normalized so that μ(K) = 1, and with E[μ] the energy integral

E[\mu] = \iint_{K \times K} G(x - y)\, \mathrm{d}\mu(x)\, \mathrm{d}\mu(y).
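
Continuing the example of the ball of radius R in ℝ³: among probability measures on K, the energy is minimized by the uniform measure μ on the boundary sphere, whose potential equals 1/R everywhere on the sphere, so that

E[\mu] = \iint \frac{\mathrm{d}\mu(x)\, \mathrm{d}\mu(y)}{|x - y|} = \frac{1}{R}
\qquad \text{and} \qquad
C(K) = \left[ \inf_{\mu} E[\mu] \right]^{-1} = R,

in agreement with the direct computation from the Dirichlet energy.
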
Generalizations

The characterization of the capacity of a set as the minimum of an energy functional achieving particular boundary values, given above, can be extended to other energy functionals in the calculus of variations.

Divergence form elliptic operators

Solutions to a uniformly elliptic partial differential equation with divergence form

\nabla \cdot (A(x)\, \nabla u(x)) = 0

are minimizers of the associated energy functional

I[u] = \int_D \nabla u(x)^{\mathrm{T}} A(x)\, \nabla u(x)\, \mathrm{d}x

subject to appropriate boundary conditions.

The capacity of a set E with respect to a domain D containing E is defined as the infimum of the energy over all continuously differentiable functions v on D with v(x) = 1 on E and v(x) = 0 on the boundary of D.

The minimum energy is achieved by a function known as the capacitary potential of E with respect to D, and it solves the obstacle problem on D with the obstacle function provided by the indicator function of E. The capacitary potential is alternately characterized as the unique solution of the divergence-form equation above, on D ∖ E, with the appropriate boundary conditions.
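
This variational definition also lends itself to direct numerical approximation. The sketch below is only an illustration of the idea, not part of the theory above: it takes A to be the identity matrix (so the functional reduces to the Dirichlet energy), chooses D = [−1, 1]² and E = [−0.25, 0.25]² arbitrarily, relaxes Laplace's equation on a grid to obtain the capacitary potential, and then evaluates its energy; the grid resolution and sweep count are likewise arbitrary.

import numpy as np

# Grid over D = [-1, 1] x [-1, 1]; E is the concentric square [-0.25, 0.25]^2.
n = 201
h = 2.0 / (n - 1)
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
inner = (np.abs(X) <= 0.25) & (np.abs(Y) <= 0.25)

# Capacitary potential: v = 1 on E, v = 0 on the boundary of D, harmonic in between.
v = np.zeros((n, n))
v[inner] = 1.0
for _ in range(5000):                 # Jacobi relaxation of Laplace's equation
    w = v.copy()
    w[1:-1, 1:-1] = 0.25 * (v[2:, 1:-1] + v[:-2, 1:-1] + v[1:-1, 2:] + v[1:-1, :-2])
    w[inner] = 1.0                    # re-impose v = 1 on E
    w[0, :] = w[-1, :] = w[:, 0] = w[:, -1] = 0.0   # v = 0 on the boundary of D
    v = w

# Dirichlet energy of the potential, approximating the (unnormalized) capacity of E in D.
vx = np.diff(v, axis=0) / h
vy = np.diff(v, axis=1) / h
print("estimated capacity of E with respect to D:", h * h * ((vx**2).sum() + (vy**2).sum()))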


References