In mathematics, the capacity of a set in Euclidean space is a measure of the "size" of that set. Unlike, say, Lebesgue measure, which measures a set's volume or physical extent, capacity is a mathematical analogue of a set's ability to hold electrical charge. More precisely, it is the capacitance of the set: the total charge a set can hold while maintaining a given potential energy. The potential energy is computed with respect to an idealized ground at infinity for the harmonic or Newtonian capacity, and with respect to a surface for the condenser capacity.
The notion of capacity of a set and of "capacitable" set was introduced by Gustave Choquet in 1950: for a detailed account, see reference (Choquet 1986).
Let Σ be a closed, smooth, (n − 1)-dimensional hypersurface in n-dimensional Euclidean space ℝⁿ, n ≥ 3; K will denote the n-dimensional compact (i.e., closed and bounded) set of which Σ is the boundary. Let S be another (n − 1)-dimensional hypersurface that encloses Σ: in reference to its origins in electromagnetism, the pair (Σ, S) is known as a condenser. The condenser capacity of Σ relative to S, denoted C(Σ, S) or cap(Σ, S), is given by the surface integral

$$C(\Sigma, S) = -\frac{1}{(n-2)\sigma_{n-1}} \oint_{S'} \frac{\partial u}{\partial \nu'}\, \mathrm{d}\sigma',$$

where u is the unique harmonic function defined on the region D between Σ and S with the boundary conditions u(x) = 1 on Σ and u(x) = 0 on S; S′ is any intermediate hypersurface between Σ and S; ν′ is the outward unit normal to S′; and σ_{n−1} is the surface area of the unit sphere in ℝⁿ.
C(Σ, S) can be equivalently defined by the volume integral

$$C(\Sigma, S) = \frac{1}{(n-2)\sigma_{n-1}} \int_D |\nabla u|^2\, \mathrm{d}V.$$
The condenser capacity also has a variational characterization: C(Σ, S) is the infimum of the Dirichlet energy functional

$$I[v] = \frac{1}{(n-2)\sigma_{n-1}} \int_D |\nabla v|^2\, \mathrm{d}V$$
over all continuously differentiable functions v on D with v(x) = 1 on Σ and v(x) = 0 on S.
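As a concrete illustration, the variational characterization can be checked numerically. The sketch below (plain NumPy; the function name and discretization are our own) minimizes the discretized radial Dirichlet energy for a condenser formed by two concentric spheres in ℝ³, where the classical answer is ab/(b − a):

```python
import numpy as np

def condenser_capacity_3d(a, b, n=400):
    """Condenser capacity of concentric spheres (radii a < b) in R^3.

    Minimizes the discretized Dirichlet energy
        C = (1/(4*pi)) * int |grad v|^2 dV = int_a^b v'(r)^2 r^2 dr
    over radial profiles v with v(a) = 1, v(b) = 0.
    """
    r = np.linspace(a, b, n + 1)
    h = r[1] - r[0]
    rm = 0.5 * (r[:-1] + r[1:])          # midpoint radii
    w = rm ** 2 / h                      # edge weights of the quadratic form
    # Stationarity of E[u] = sum_i w_i (u_{i+1} - u_i)^2 gives a tridiagonal
    # linear system for the interior values u_1 .. u_{n-1}.
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = w[i] + w[i + 1]
        if i > 0:
            A[i, i - 1] = -w[i]
        if i < n - 2:
            A[i, i + 1] = -w[i + 1]
    rhs[0] = w[0] * 1.0                  # boundary value u(a) = 1
    u = np.concatenate(([1.0], np.linalg.solve(A, rhs), [0.0]))
    return np.sum(w * np.diff(u) ** 2)   # minimized energy = the capacity

# Exact value for concentric spheres is a*b/(b - a): here 1*2/(2-1) = 2.
print(condenser_capacity_3d(1.0, 2.0))
```

The discrete minimizer reproduces the classical value to a few parts in 10⁶ at this resolution, since the one-dimensional chain of "resistors" converges quadratically in the grid spacing.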
Heuristically, the harmonic capacity of K, the region bounded by Σ, can be found by taking the condenser capacity of Σ with respect to infinity. More precisely, let u be the harmonic function in the complement of K satisfying u = 1 on Σ and u(x) → 0 as x → ∞. Thus u is the Newtonian potential of the simple layer Σ. The harmonic capacity or Newtonian capacity of K, denoted C(K) or cap(K), is then defined by

$$C(K) = \frac{1}{(n-2)\sigma_{n-1}} \int_{\mathbb{R}^n \setminus K} |\nabla u|^2\, \mathrm{d}V.$$
If S is a rectifiable hypersurface completely enclosing K, then the harmonic capacity can be equivalently rewritten as the integral over S of the outward normal derivative of u:

$$C(K) = -\frac{1}{(n-2)\sigma_{n-1}} \oint_{S} \frac{\partial u}{\partial \nu}\, \mathrm{d}\sigma.$$
The harmonic capacity can also be understood as a limit of the condenser capacity. To wit, let Sr denote the sphere of radius r about the origin in ℝⁿ. Since K is bounded, for sufficiently large r, Sr will enclose K and (Σ, Sr) will form a condenser pair. The harmonic capacity is then the limit as r tends to infinity:

$$C(K) = \lim_{r \to \infty} C(\Sigma, S_r).$$
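For two concentric spheres of radii a < b in ℝ³, the condenser capacity has the classical closed form ab/(b − a) (with the 1/((n − 2)σ_{n−1}) = 1/(4π) normalization above), so this limit can be watched directly; a tiny illustrative script:

```python
# Condenser capacity of two concentric spheres of radii a < b in R^3, with
# the 1/((n-2) sigma_{n-1}) = 1/(4 pi) normalization: C = a*b/(b - a).
def concentric_condenser_capacity(a, b):
    return a * b / (b - a)

a = 1.0
for b in (2.0, 10.0, 100.0, 1e6):
    print(b, concentric_condenser_capacity(a, b))
# As b -> infinity the capacity decreases toward a = 1.0, the harmonic
# (Newtonian) capacity of the unit ball.
```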
The harmonic capacity is a mathematically abstract version of the electrostatic capacity of the conductor K and is always non-negative and finite: 0 ≤ C(K) < +∞.
The Wiener capacity or Robin constant W(K) of K is given by

$$C(K) = e^{-W(K)}.$$
In two dimensions, the capacity is defined as above, but dropping the factor of (n − 2) in the definition:

$$C(\Sigma, S) = \frac{1}{2\pi} \int_D |\nabla u|^2\, \mathrm{d}V = -\frac{1}{2\pi} \oint_{S'} \frac{\partial u}{\partial \nu'}\, \mathrm{d}\sigma'.$$
This is often called the logarithmic capacity; the term "logarithmic" arises because the potential function goes from being an inverse power to a logarithm in the two-dimensional case, as articulated below. It may also be called the conformal capacity, in reference to its relation to the conformal radius.
The harmonic function u is called the capacity potential: the Newtonian potential when n ≥ 3 and the logarithmic potential when n = 2. It can be obtained via a Green's function as

$$u(x) = \int_{S} G(x - y)\, \mathrm{d}\mu(y),$$

with x a point exterior to S, and

$$G(x) = \frac{1}{|x|^{n-2}}$$

when n ≥ 3, and

$$G(x) = \log\frac{1}{|x|}$$

for n = 2.

The measure μ is called the capacitary measure or equilibrium measure. It is generally taken to be a Borel measure. It is related to the capacity as

$$C(K) = \int_{S} \mathrm{d}\mu(y) = \mu(S).$$
The variational definition of capacity over the Dirichlet energy can be re-expressed as

$$C(K) = \left[\inf_{\mu} I(\mu)\right]^{-1},$$

with the infimum taken over all positive Borel measures μ concentrated on K, normalized so that μ(K) = 1, and with I(μ) the energy integral

$$I(\mu) = \iint_{K \times K} G(x - y)\, \mathrm{d}\mu(x)\, \mathrm{d}\mu(y).$$
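A crude numerical sketch of this measure-energy formula, assuming two classical facts: the equilibrium measure of the unit sphere in ℝ³ is the uniform one, and the Newtonian capacity of the unit ball equals its radius in this normalization. The point placement and function names below are our own:

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform points on the unit sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i           # golden-angle longitudes
    z = 1.0 - (2.0 * i + 1.0) / n                    # evenly spaced heights
    rho = np.sqrt(1.0 - z * z)
    return np.stack([rho * np.cos(phi), rho * np.sin(phi), z], axis=1)

def discrete_energy(points):
    """(1/N^2) * sum_{i != j} 1/|x_i - x_j|: the energy integral I(mu)
    for equal point charges approximating the uniform measure."""
    gram = points @ points.T                         # x_i . x_j
    dist = np.sqrt(np.clip(2.0 - 2.0 * gram, 1e-15, None))  # |x - y| on sphere
    inv = 1.0 / dist
    np.fill_diagonal(inv, 0.0)                       # drop the singular diagonal
    return inv.sum() / len(points) ** 2

I = discrete_energy(fibonacci_sphere(2000))
print(I, 1.0 / I)   # I(mu) ≈ 1, so the capacity estimate 1/I ≈ 1 (unit ball)
```

The discrete sum undershoots the true energy by O(N^{-1/2}) because the singular self-interaction is excluded, so a few thousand points already give the capacity to within a few percent.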
The characterization of the capacity of a set as the minimum of an energy functional achieving particular boundary values, given above, can be extended to other energy functionals in the calculus of variations.
Solutions to a uniformly elliptic partial differential equation with divergence form

$$\nabla \cdot (A \nabla u) = 0$$

are minimizers of the associated energy functional

$$I[u] = \int_D (\nabla u)^{T} A (\nabla u)\, \mathrm{d}x$$
subject to appropriate boundary conditions.
The capacity of a set E with respect to a domain D containing E is defined as the infimum of the energy over all continuously differentiable functions v on D with v(x) = 1 on E and v(x) = 0 on the boundary of D.
The minimum energy is achieved by a function known as the capacitary potential of E with respect to D, and it solves the obstacle problem on D with the obstacle function provided by the indicator function of E. The capacitary potential is alternatively characterized as the unique solution of the equation ∇ · (A∇u) = 0 on D \ E with the appropriate boundary conditions.
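In the model case where the elliptic operator is just the Laplacian, the capacitary potential is the harmonic function with the stated boundary values, and it can be approximated by simple relaxation. A minimal sketch (grid size, iteration count, and the choice of E are illustrative):

```python
import numpy as np

# Capacitary potential of a small square E inside the unit square D for the
# Laplacian: solve  Δu = 0 on D \ E,  u = 1 on E,  u = 0 on the boundary of D,
# by Jacobi relaxation on a uniform grid.
n = 41
u = np.zeros((n, n))
E = np.zeros((n, n), dtype=bool)
E[17:24, 17:24] = True                    # the compact set E
u[E] = 1.0

for _ in range(5000):
    unew = u.copy()
    # discrete harmonicity: each interior value is the average of its neighbours
    unew[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                               + u[1:-1, :-2] + u[1:-1, 2:])
    unew[E] = 1.0                         # keep the constraint u = 1 on E
    unew[0, :] = unew[-1, :] = unew[:, 0] = unew[:, -1] = 0.0   # u = 0 on dD
    u = unew

# Discrete Dirichlet energy of the minimizer: the capacity of E relative to D,
# up to the dimensional normalization constant.
energy = 0.5 * (np.sum(np.diff(u, axis=0) ** 2) + np.sum(np.diff(u, axis=1) ** 2))
print(energy)
```

By the discrete maximum principle the iterates stay between 0 and 1, mirroring the bounds satisfied by the continuum capacitary potential.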
In physics, specifically in electromagnetism, the Lorentz force is the combination of electric and magnetic force on a point charge due to electromagnetic fields. A particle of charge q moving with a velocity v in an electric field E and a magnetic field B experiences a force of

$$\mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B}.$$
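In code the force law is a one-liner; a minimal NumPy sketch (the function name is our own):

```python
import numpy as np

def lorentz_force(q, E, v, B):
    """F = q (E + v x B), SI units; E, v, B are 3-vectors."""
    return q * (np.asarray(E, dtype=float) + np.cross(v, B))

# A positive unit charge moving along +x through a magnetic field along +z
# is deflected along -y:
print(lorentz_force(1.0, [0, 0, 0], [1, 0, 0], [0, 0, 1]))
```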
In vector calculus and differential geometry the generalized Stokes theorem, also called the Stokes–Cartan theorem, is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. In particular, the fundamental theorem of calculus is the special case where the manifold is a line segment, Green’s theorem and Stokes' theorem are the cases of a surface in or and the divergence theorem is the case of a volume in Hence, the theorem is sometimes referred to as the Fundamental Theorem of Multivariate Calculus.
The Navier–Stokes equations are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
In physics, a Langevin equation is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.
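A minimal Euler–Maruyama simulation of the underdamped Langevin equation m dv/dt = −γv + √(2γkT) ξ(t) illustrates the balance between deterministic drag and random forcing: the stationary velocity variance should approach kT/m. Parameter values are illustrative:

```python
import numpy as np

# Euler-Maruyama integration of the underdamped Langevin equation
#     m dv/dt = -gamma v + sqrt(2 gamma kT) xi(t),
# for an ensemble of independent particles; parameter values are illustrative.
rng = np.random.default_rng(0)
m, gamma, kT = 1.0, 1.0, 1.0
dt, nsteps, nparticles = 0.01, 4000, 20000

v = np.zeros(nparticles)
for _ in range(nsteps):
    xi = rng.standard_normal(nparticles)
    v += (-gamma * v / m) * dt + np.sqrt(2.0 * gamma * kT * dt) / m * xi

print(v.var())   # equipartition: <v^2> should settle near kT/m = 1
```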
In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well. The Fokker–Planck equation has multiple applications in information theory, graph theory, data science, finance, economics, etc.
The calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.
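The Euler–Lagrange equation for the simplest quadratic functional I[y] = ∫ (y′)²/2 dx is y″ = 0, so straight lines are the minimizers; a short finite-difference check (discretization and names our own):

```python
import numpy as np

# The Euler-Lagrange equation of I[y] = int_0^1 (y')^2 / 2 dx is y'' = 0, so
# straight lines minimize.  Compare the discrete energy of the straight line
# against perturbations that respect the boundary values.
x = np.linspace(0.0, 1.0, 201)

def energy(y):
    return np.sum(np.diff(y) ** 2 / np.diff(x)) / 2.0

straight = 2.0 * x                      # candidate minimizer, y(0)=0, y(1)=2
bump = np.sin(np.pi * x)                # perturbation vanishing at endpoints

for eps in (0.5, 0.1, 0.01):
    print(eps, energy(straight + eps * bump) - energy(straight))
# The excess energy is positive and scales like eps^2, as the vanishing first
# variation and positive second variation predict.
```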
In physics, Hooke's law is an empirical law which states that the force needed to extend or compress a spring by some distance scales linearly with respect to that distance—that is, Fs = kx, where k is a constant factor characteristic of the spring, and x is small compared to the total possible deformation of the spring. The law is named after 17th-century British physicist Robert Hooke. He first stated the law in 1676 as a Latin anagram. He published the solution of his anagram in 1678 as: ut tensio, sic vis. Hooke states in the 1678 work that he was aware of the law since 1660.
A Newtonian fluid is a fluid in which the viscous stresses arising from its flow are at every point linearly correlated to the local strain rate — the rate of change of its deformation over time. Stresses are proportional to the rate of change of the fluid's velocity vector.
Stellar dynamics is the branch of astrophysics which describes in a statistical way the collective motions of stars subject to their mutual gravity. The essential difference from celestial mechanics is that the number of bodies is far larger, so the system is described statistically rather than by following each orbit individually.
In physics and mathematics, the Helmholtz decomposition theorem or the fundamental theorem of vector calculus states that any sufficiently smooth, rapidly decaying vector field in three dimensions can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field. This is named after Hermann von Helmholtz.
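For periodic fields the decomposition is a projection in Fourier space: the component of F̂(k) along k is the irrotational part and the remainder is solenoidal. A sketch on a small grid, using a test field whose decomposition is known in closed form (grid size and field choice are our own):

```python
import numpy as np

# Helmholtz decomposition of a periodic vector field by Fourier projection.
# Test field on [0, 2*pi)^3:
#     F = (-sin x, cos x, 0) = grad(cos x) + (0, cos x, 0),
# i.e. irrotational part (-sin x, 0, 0) and solenoidal part (0, cos x, 0).
N = 32
x = 2.0 * np.pi * np.arange(N) / N
X = x[:, None, None] * np.ones((N, N, N))
F = np.stack([-np.sin(X), np.cos(X), np.zeros_like(X)])

k1 = np.fft.fftfreq(N, 1.0 / N)              # integer wavenumbers
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
K = np.stack([kx, ky, kz])
K2 = (K ** 2).sum(0)
K2[0, 0, 0] = 1.0                            # avoid 0/0; the k = 0 mode is constant

Fh = np.fft.fftn(F, axes=(1, 2, 3))
proj = (K * Fh).sum(0) / K2                  # (k . F_hat)/|k|^2 at each mode
irr_h = K * proj                             # longitudinal (curl-free) part
irr = np.real(np.fft.ifftn(irr_h, axes=(1, 2, 3)))
sol = F - irr                                # transverse (divergence-free) part

# The recovered parts should match (-sin x, 0, 0) and (0, cos x, 0).
print(np.abs(irr[0] + np.sin(X)).max(), np.abs(sol[1] - np.cos(X)).max())
```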
In electromagnetism, charge density is the amount of electric charge per unit length, surface area, or volume. Volume charge density is the quantity of charge per unit volume, measured in the SI system in coulombs per cubic meter (C⋅m−3), at any point in a volume. Surface charge density (σ) is the quantity of charge per unit area, measured in coulombs per square meter (C⋅m−2), at any point on a surface charge distribution on a two dimensional surface. Linear charge density (λ) is the quantity of charge per unit length, measured in coulombs per meter (C⋅m−1), at any point on a line charge distribution. Charge density can be either positive or negative, since electric charge can be either positive or negative.
Toroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that separates its two foci. Thus, the two foci F₁ and F₂ in bipolar coordinates become a ring of radius a in the xy-plane of the toroidal coordinate system; the z-axis is the axis of rotation. The focal ring is also known as the reference circle.
There are various mathematical descriptions of the electromagnetic field that are used in the study of electromagnetism, one of the four fundamental interactions of nature. In this article, several approaches are discussed, all expressed in terms of electric and magnetic fields, potentials, and charges with currents.
The derivation of the Navier–Stokes equations, as well as their application and formulation for different families of fluids, is an important exercise in fluid dynamics with applications in mechanical engineering, physics, chemistry, heat transfer, and electrical engineering. A proof establishing the properties and bounds of their solutions, such as Navier–Stokes existence and smoothness, is one of the important unsolved problems in mathematics.
In mathematics – specifically, in stochastic analysis – an Itô diffusion is a solution to a specific type of stochastic differential equation. That equation is similar to the Langevin equation used in physics to describe the Brownian motion of a particle subjected to a potential in a viscous fluid. Itô diffusions are named after the Japanese mathematician Kiyosi Itô.
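A minimal Euler–Maruyama discretization of an Itô diffusion, here geometric Brownian motion dX = μX dt + σX dW, for which the mean E[X_t] = X₀e^{μt} is known exactly (parameter values are illustrative):

```python
import numpy as np

# Euler-Maruyama simulation of the Ito diffusion
#     dX_t = mu * X_t dt + sigma * X_t dW_t   (geometric Brownian motion),
# a standard worked example; parameter values are illustrative.
rng = np.random.default_rng(1)
mu, sigma, x0 = 0.05, 0.2, 1.0
dt, nsteps, npaths = 0.001, 1000, 20000      # simulate up to t = 1

x = np.full(npaths, x0)
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal(npaths)
    x += mu * x * dt + sigma * x * dW

print(x.mean())   # E[X_1] = x0 * exp(mu) ≈ 1.051 for these parameters
```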
The obstacle problem is a classic motivating example in the mathematical study of variational inequalities and free boundary problems. The problem is to find the equilibrium position of an elastic membrane whose boundary is held fixed, and which is constrained to lie above a given obstacle. It is deeply related to the study of minimal surfaces and the capacity of a set in potential theory as well. Applications include the study of fluid filtration in porous media, constrained heating, elasto-plasticity, optimal control, and financial mathematics.
Chapman–Enskog theory provides a framework in which equations of hydrodynamics for a gas can be derived from the Boltzmann equation. The technique justifies the otherwise phenomenological constitutive relations appearing in hydrodynamical descriptions such as the Navier–Stokes equations. In doing so, expressions for various transport coefficients such as thermal conductivity and viscosity are obtained in terms of molecular parameters. Thus, Chapman–Enskog theory constitutes an important step in the passage from a microscopic, particle-based description to a continuum hydrodynamical one.
Stokes' theorem, also known as the Kelvin–Stokes theorem after Lord Kelvin and George Stokes, the fundamental theorem for curls, or simply the curl theorem, is a theorem in vector calculus on ℝ³. Given a vector field, the theorem relates the integral of the curl of the vector field over some surface to the line integral of the vector field around the boundary of the surface. The classical theorem of Stokes can be stated in one sentence: the line integral of a vector field over a loop is equal to the surface integral of its curl over the enclosed surface. It is illustrated in the figure, where the direction of positive circulation of the bounding contour ∂Σ and the direction n of positive flux through the surface Σ are related by a right-hand rule: for the right hand, the fingers circulate along ∂Σ and the thumb is directed along n.
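The identity is easy to verify numerically for a concrete field; below, F = (−y, x, 0) over the unit disk, where curl F = (0, 0, 2) makes both sides equal 2π:

```python
import numpy as np

# Check Stokes' theorem for F = (-y, x, 0) on the unit disk in the plane
# z = 0: curl F = (0, 0, 2), so the flux through the disk is 2 * area = 2*pi,
# and the circulation around the positively oriented unit circle must agree.
n = 200000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = 2.0 * np.pi / n
x, y = np.cos(t), np.sin(t)
dx, dy = -np.sin(t), np.cos(t)            # derivative of the parametrization

# Line integral  oint F . dr = oint (-y dx + x dy)
circulation = np.sum(-y * dx + x * dy) * dt

# Surface integral of (curl F) . n over the disk, with n = (0, 0, 1):
# the integrand is the constant 2, so the flux is 2 * pi * 1^2.
flux = 2.0 * np.pi

print(circulation, flux)   # both equal 2*pi ≈ 6.2832
```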
Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons, electrons, fluid particles, and photons. Heat is thermal energy stored in temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. Energy is also transformed (converted) among the various carriers. The heat transfer processes are governed by the rates at which various related physical phenomena occur, such as the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level to macroscale are the laws of thermodynamics, including conservation of energy.
Lagrangian field theory is a formalism in classical field theory. It is the field-theoretic analogue of Lagrangian mechanics. Lagrangian mechanics is used to analyze the motion of a system of discrete particles each with a finite number of degrees of freedom. Lagrangian field theory applies to continua and fields, which have an infinite number of degrees of freedom.