Nondimensionalization is the partial or full removal of physical dimensions from an equation involving physical quantities by a suitable substitution of variables. This technique can simplify and parameterize problems where measured units are involved. It is closely related to dimensional analysis. In some physical systems, the term scaling is used interchangeably with nondimensionalization, in order to suggest that certain quantities are better measured relative to some appropriate unit. These units refer to quantities intrinsic to the system, rather than units such as SI units. Nondimensionalization is not the same as converting extensive quantities in an equation to intensive quantities, since the latter procedure results in variables that still carry units. [1]
Nondimensionalization can also recover characteristic properties of a system. For example, if a system has an intrinsic resonance frequency, length, or time constant, nondimensionalization can recover these values. The technique is especially useful for systems that can be described by differential equations. One important use is in the analysis of control systems. One of the simplest characteristic units is the doubling time of a system experiencing exponential growth, or conversely the half-life of a system experiencing exponential decay; a more natural pair of characteristic units is mean age/mean lifetime, which correspond to base e rather than base 2.
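For example (a standard relation, added here for concreteness), for exponential decay $N(t) = N_0 e^{-t/\tau}$ with mean lifetime $\tau$, the half-life is

$$t_{1/2} = \tau \ln 2,$$

and likewise the doubling time of exponential growth is $\ln 2$ times the corresponding e-folding time.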
Many illustrative examples of nondimensionalization originate from simplifying differential equations, because a large body of physical problems can be formulated in terms of differential equations.
Although nondimensionalization is well suited to such problems, it is not restricted to them. An example of a non-differential-equation application is dimensional analysis; another example is normalization in statistics.
Measuring devices are practical examples of nondimensionalization occurring in everyday life: they are calibrated relative to some known unit, and subsequent measurements are made relative to that standard. The absolute value of a measurement is then recovered by scaling with respect to the standard.
Suppose a pendulum is swinging with a particular period T. For such a system, it is advantageous to perform calculations relating to the swinging relative to T. In some sense, this is normalizing the measurement with respect to the period.
Measurements made relative to an intrinsic property of a system will apply to other systems which also have the same intrinsic property. It also allows one to compare a common property of different implementations of the same system. Nondimensionalization determines in a systematic manner the characteristic units of a system to use, without relying heavily on prior knowledge of the system's intrinsic properties (one should not confuse characteristic units of a system with natural units of nature). In fact, nondimensionalization can suggest the parameters which should be used for analyzing a system. However, it is necessary to start with an equation that describes the system appropriately.
To nondimensionalize a system of equations, one must do the following:
1. Identify all the independent and dependent variables.
2. Replace each of them with a quantity scaled relative to a characteristic unit of measure to be determined.
3. Divide through by the coefficient of the highest-order polynomial or derivative term.
4. Choose the definition of the characteristic unit for each variable so that the coefficients of as many terms as possible become 1.
5. Rewrite the system of equations in terms of the new dimensionless quantities.
The last three steps are usually specific to the problem where nondimensionalization is applied. However, almost all systems require the first two steps to be performed.
There are no restrictions on the variable names used to replace "x" and "t". However, they are generally chosen to be convenient and intuitive for the problem at hand. For example, if "x" represented mass, the letter "m" might be an appropriate symbol to represent the dimensionless mass quantity.
In this article, the following conventions are used:
- t represents the independent variable, usually a time quantity. Its nondimensionalized counterpart is τ.
- x represents the dependent variable, which can be mass, voltage, or any measurable quantity. Its nondimensionalized counterpart is χ.
A subscript 'c' added to a quantity's variable name is used to denote the characteristic unit used to scale that quantity. For example, if $x$ is a quantity, then $x_c$ is the characteristic unit used to scale it.
As an illustrative example, consider a first order differential equation with constant coefficients:
$$a\,\frac{dx}{dt} + b\,x = A\,f(t) \qquad (1)$$

Nondimensionalizing this equation (as carried out below) removes every remaining parameter and leaves the form

$$\frac{d\chi}{d\tau} + \chi = F(\tau), \qquad (2)$$

where $\chi$ and $\tau$ are the dimensionless counterparts of $x$ and $t$, and $F(\tau) = f(\tau t_c)$.
Suppose for simplicity that a certain system is characterized by two variables – a dependent variable x and an independent variable t, where x is a function of t. Both x and t represent quantities with units. To scale these two variables, assume there are two intrinsic units of measurement $x_c$ and $t_c$ with the same units as x and t respectively, such that these conditions hold:

$$x = \chi\,x_c, \qquad t = \tau\,t_c,$$

where the scaled variables $\chi$ and $\tau$ are dimensionless.
These equations are used to replace x and t when nondimensionalizing. If differential operators are needed to describe the original system, their scaled counterparts become dimensionless differential operators.
Consider the relationship

$$t = \tau\,t_c \;\Longrightarrow\; dt = t_c\,d\tau.$$

The dimensionless differential operators with respect to the independent variable then become

$$\frac{d}{dt} = \frac{1}{t_c}\frac{d}{d\tau}, \qquad \frac{d^{n}}{dt^{n}} = \left(\frac{1}{t_c}\right)^{n}\frac{d^{n}}{d\tau^{n}}.$$
If a system has a forcing function $f(t)$, then

$$f(t) = f(\tau\,t_c) \equiv F(\tau).$$

Hence, the new forcing function $F$ is made to be dependent on the dimensionless quantity $\tau$.
Consider the differential equation for a first order system:

$$a\,\frac{dx}{dt} + b\,x = A\,f(t).$$

The derivation of the characteristic units that carries Eq. 1 into Eq. 2 for this system gives

$$t_c = \frac{a}{b}, \qquad x_c = \frac{A}{b}.$$
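A brief sketch of the intermediate algebra (implied but not written out above): substituting $x = \chi x_c$ and $t = \tau t_c$ into Eq. 1 and dividing through by $b\,x_c$ gives

$$\frac{a}{b\,t_c}\,\frac{d\chi}{d\tau} + \chi = \frac{A}{b\,x_c}\,F(\tau),$$

so choosing $t_c = a/b$ and $x_c = A/b$ makes both remaining coefficients unity and yields Eq. 2.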
A second order system has the form

$$a\,\frac{d^{2}x}{dt^{2}} + b\,\frac{dx}{dt} + c\,x = A\,f(t).$$
Replace the variables x and t with their scaled quantities. The equation becomes

$$\frac{a\,x_c}{t_c^{2}}\,\frac{d^{2}\chi}{d\tau^{2}} + \frac{b\,x_c}{t_c}\,\frac{d\chi}{d\tau} + c\,x_c\,\chi = A\,F(\tau).$$
This new equation is not dimensionless, although all the variables with units are isolated in the coefficients. Dividing by the coefficient of the highest-order term, the equation becomes

$$\frac{d^{2}\chi}{d\tau^{2}} + \frac{b\,t_c}{a}\,\frac{d\chi}{d\tau} + \frac{c\,t_c^{2}}{a}\,\chi = \frac{A\,t_c^{2}}{a\,x_c}\,F(\tau).$$
Now it is necessary to determine $x_c$ and $t_c$ so that the coefficients become normalized. Since there are two free parameters, at most two coefficients can be made equal to unity.
Consider the variable $t_c$:

1. If $t_c = \dfrac{a}{b}$, the first-order term is normalized.
2. If $t_c = \sqrt{\dfrac{a}{c}}$, the zeroth-order term is normalized.
Both substitutions are valid. However, for pedagogical reasons, the latter substitution is used for second order systems. Choosing this substitution allows $x_c$ to be determined by normalizing the coefficient of the forcing function:

$$\frac{A\,t_c^{2}}{a\,x_c} = 1 \;\Longrightarrow\; x_c = \frac{A\,t_c^{2}}{a} = \frac{A}{c}.$$
The differential equation becomes

$$\frac{d^{2}\chi}{d\tau^{2}} + \frac{b}{\sqrt{a\,c}}\,\frac{d\chi}{d\tau} + \chi = F(\tau).$$
The coefficient of the first-order term is unitless. Define

$$2\zeta \equiv \frac{b}{\sqrt{a\,c}}.$$
The factor of 2 is present so that the solutions can be parameterized in terms of ζ. In the context of mechanical or electrical systems, ζ is known as the damping ratio and is an important parameter in the analysis of control systems. 2ζ is also known as the linewidth of the system. The result of this definition is the universal oscillator equation

$$\frac{d^{2}\chi}{d\tau^{2}} + 2\zeta\,\frac{d\chi}{d\tau} + \chi = F(\tau).$$
The general nth order linear differential equation with constant coefficients has the form:

$$a_n\,\frac{d^{n}x}{dt^{n}} + a_{n-1}\,\frac{d^{n-1}x}{dt^{n-1}} + \cdots + a_1\,\frac{dx}{dt} + a_0\,x = A\,f(t).$$
The function f(t) is known as the forcing function.
If the differential equation only contains real (not complex) coefficients, then such a system behaves as a mixture of first and second order systems only. This is because the roots of its characteristic polynomial are either real or complex conjugate pairs. Therefore, understanding how nondimensionalization applies to first and second order systems allows the properties of higher order systems to be determined through superposition.
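As a small added illustration of this decomposition: a cubic characteristic polynomial with real coefficients always splits into a real linear factor and a real quadratic factor, for example

$$s^{3} + s^{2} + s + 1 = (s + 1)(s^{2} + 1),$$

so the corresponding third-order system can be analyzed as a first-order subsystem together with a second-order (oscillatory) subsystem.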
The number of free parameters in a nondimensionalized form of a system increases with its order. For this reason, nondimensionalization is rarely used for higher order differential equations. The need for this procedure has also been reduced with the advent of symbolic computation.
A variety of systems can be approximated as either first or second order systems. These include mechanical, electrical, fluidic, caloric, and torsional systems. This is because the fundamental physical quantities involved within each of these examples are related through first and second order derivatives.
Suppose we have a mass attached to a spring and a damper, which in turn are attached to a wall, and a force acting on the mass along the same line. Define
- x = displacement from equilibrium [m]
- t = time [s]
- F = external force applied to the mass [N]
- m = mass of the block [kg]
- B = damping constant of the damper [N·s/m]
- k = spring constant [N/m]
Suppose the applied force is a sinusoid, F = F0 cos(ωt). The differential equation that describes the motion of the block is then

$$m\,\frac{d^{2}x}{dt^{2}} + B\,\frac{dx}{dt} + k\,x = F_0\cos(\omega t).$$
Nondimensionalizing this equation the same way as described under § Second order system yields several characteristics of the system:
- the characteristic time $t_c = \sqrt{m/k}$, whose reciprocal is the natural frequency $\omega_0 = \sqrt{k/m}$;
- the characteristic length $x_c = F_0/k$, the static deflection under a constant force $F_0$;
- the damping ratio $\zeta = \dfrac{B}{2\sqrt{k m}}$;
- the normalized forcing frequency $\Omega = \omega/\omega_0$.
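To make the mapping concrete, here is a minimal numeric sketch (not part of the original text; the parameter values are assumed purely for illustration) that evaluates these characteristic quantities:

```python
import math

# Assumed example parameters for the mass-spring-damper system (illustrative only)
m = 1.0      # mass [kg]
B = 0.4      # damping constant [N*s/m]
k = 25.0     # spring constant [N/m]
F0 = 2.0     # forcing amplitude [N]
omega = 3.0  # forcing frequency [rad/s]

t_c = math.sqrt(m / k)               # characteristic time (reciprocal of natural frequency)
x_c = F0 / k                         # characteristic length (static deflection)
zeta = B / (2.0 * math.sqrt(k * m))  # damping ratio
Omega = omega * t_c                  # normalized forcing frequency

print(f"t_c = {t_c:.3f} s, x_c = {x_c:.3f} m, zeta = {zeta:.3f}, Omega = {Omega:.3f}")
```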
For a series RC circuit attached to a voltage source of amplitude $V_0$, the governing equation

$$R\,\frac{dq}{dt} + \frac{q}{C} = V(t)$$

is nondimensionalized with the substitutions

$$q = \chi\,x_c, \qquad t = \tau\,t_c, \qquad x_c = C\,V_0, \qquad t_c = R\,C.$$
The first characteristic unit corresponds to the total charge in the circuit. The second characteristic unit corresponds to the time constant for the system.
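A brief sketch of the intermediate algebra (not written out above): substituting the scaled variables into the circuit equation and multiplying through by $C/x_c$ gives

$$\frac{RC}{t_c}\,\frac{d\chi}{d\tau} + \chi = \frac{C}{x_c}\,V(\tau t_c),$$

which, with $t_c = RC$ and $x_c = C V_0$, reduces to $\dfrac{d\chi}{d\tau} + \chi = \dfrac{V(\tau t_c)}{V_0} = F(\tau)$.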
For a series configuration of R, C, L components, where Q is the charge in the system, the governing equation is

$$L\,\frac{d^{2}Q}{dt^{2}} + R\,\frac{dQ}{dt} + \frac{Q}{C} = V_0\cos(\omega t),$$

and the substitutions are

$$Q = \chi\,x_c, \qquad t = \tau\,t_c, \qquad x_c = C\,V_0, \qquad t_c = \sqrt{L\,C}, \qquad 2\zeta = R\sqrt{\frac{C}{L}}, \qquad \Omega = \omega\sqrt{L\,C}.$$
The first variable corresponds to the maximum charge stored in the circuit. The resonance frequency is given by the reciprocal of the characteristic time. The last expression is the linewidth of the system. Ω can be considered a normalized forcing-function frequency.
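Written out explicitly (added here for completeness), these substitutions turn the circuit equation into the universal oscillator equation driven at the normalized frequency:

$$\frac{d^{2}\chi}{d\tau^{2}} + 2\zeta\,\frac{d\chi}{d\tau} + \chi = \cos(\Omega\tau).$$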
The Schrödinger equation for the one-dimensional time-independent quantum harmonic oscillator is

$$-\frac{\hbar^{2}}{2m}\,\frac{d^{2}\psi(x)}{dx^{2}} + \frac{1}{2}m\omega^{2}x^{2}\,\psi(x) = E\,\psi(x).$$
The modulus square of the wavefunction $|\psi(x)|^{2}$ represents a probability density that, when integrated over x, gives a dimensionless probability. Therefore, $|\psi(x)|^{2}$ has units of inverse length. To nondimensionalize this, it must be rewritten as a function of a dimensionless variable. To do this, we substitute

$$\tilde{x} \equiv \frac{x}{x_c},$$

where $x_c$ is some characteristic length of this system. This gives us a dimensionless wave function $\tilde{\psi}$ defined via

$$\tilde{\psi}(\tilde{x}) = \sqrt{x_c}\,\psi(x_c\,\tilde{x}),$$

so that $|\tilde{\psi}(\tilde{x})|^{2}$ is dimensionless and integrates to 1 over $\tilde{x}$.
The differential equation then becomes

$$-\frac{\hbar^{2}}{2m\,x_c^{2}}\,\frac{d^{2}\tilde{\psi}}{d\tilde{x}^{2}} + \frac{1}{2}m\omega^{2}x_c^{2}\,\tilde{x}^{2}\,\tilde{\psi} = E\,\tilde{\psi}.$$
To make the coefficient of the $\tilde{x}^{2}$ term equal to the coefficient of the derivative term, set

$$\frac{\hbar^{2}}{2m\,x_c^{2}} = \frac{1}{2}m\omega^{2}x_c^{2} \;\Longrightarrow\; x_c = \sqrt{\frac{\hbar}{m\omega}}.$$
The fully nondimensionalized equation is

$$-\frac{d^{2}\tilde{\psi}}{d\tilde{x}^{2}} + \tilde{x}^{2}\,\tilde{\psi} = \tilde{E}\,\tilde{\psi},$$

where we have defined

$$\tilde{E} \equiv \frac{E}{\hbar\omega/2}.$$

The factor $\hbar\omega/2$ in front of $\tilde{E}$ is in fact (coincidentally) the ground state energy of the harmonic oscillator. Usually, the energy term is not made dimensionless, as we are interested in determining the energies of the quantum states. Rearranging the first equation, the familiar equation for the harmonic oscillator becomes

$$\frac{\hbar\omega}{2}\left(-\frac{d^{2}}{d\tilde{x}^{2}} + \tilde{x}^{2}\right)\tilde{\psi} = E\,\tilde{\psi}.$$
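As a check on why this scaling is convenient (an observation added here, not stated in the text above): the dimensionless equation has the well-known eigenvalues $\tilde{E}_n = 2n + 1$ for $n = 0, 1, 2, \dots$, so restoring the energy scale immediately gives

$$E_n = \frac{\hbar\omega}{2}\,\tilde{E}_n = \left(n + \tfrac{1}{2}\right)\hbar\omega.$$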
In statistics, the analogous process is usually dividing a difference (a distance) by a scale factor (a measure of statistical dispersion), which yields a dimensionless number; this process is called normalization. Most often, this means dividing errors or residuals by the standard deviation or sample standard deviation, respectively, yielding standard scores and studentized residuals.
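For instance (a standard formula, added here for concreteness), a standard score scales a raw observation $x$ by the population mean $\mu$ and standard deviation $\sigma$:

$$z = \frac{x - \mu}{\sigma}.$$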
In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behavior of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice is generalized coordinates, which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations of motion are the solutions of the differential equations describing that dynamics.
In quantum chemistry and molecular physics, the Born–Oppenheimer (BO) approximation is the best-known mathematical approximation in molecular dynamics. Specifically, it is the assumption that the wave functions of atomic nuclei and electrons in a molecule can be treated separately, based on the fact that the nuclei are much heavier than the electrons. Due to the larger relative mass of a nucleus compared to an electron, the coordinates of the nuclei in a system are approximated as fixed, while the coordinates of the electrons are dynamic. The approach is named after Max Born and his 23-year-old graduate student J. Robert Oppenheimer, the latter of whom proposed it in 1927 during a period of intense ferment in the development of quantum mechanics.
In physics, a Langevin equation is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator $D$ and of the integration operator $J$, and of developing a calculus for such operators that generalizes the classical one.
In condensed matter physics, Bloch's theorem states that solutions to the Schrödinger equation in a periodic potential can be expressed as plane waves modulated by periodic functions. The theorem is named after the Swiss physicist Felix Bloch, who discovered the theorem in 1929. Mathematically, they are written

$$\psi(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\,u(\mathbf{r}),$$

where $\mathbf{r}$ is position, $\psi$ is the wave function, $u$ is a periodic function with the same periodicity as the crystal, and $\mathbf{k}$ is the crystal wave vector.
The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem is a proof that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance of the same physical variable, and vice versa. The fluctuation–dissipation theorem applies both to classical and quantum mechanical systems.
The Hamiltonian constraint arises from any theory that admits a Hamiltonian formulation and is reparametrisation-invariant. The Hamiltonian constraint of general relativity is an important non-trivial example.
The scaled inverse chi-squared distribution with $\nu$ degrees of freedom and scale parameter $\tau^{2}$ equals the univariate inverse Wishart distribution with scale $\psi = \nu\tau^{2}$ and $\nu$ degrees of freedom.
In physics, the algebra of physical space (APS) is the use of the Clifford or geometric algebra Cl3,0(R) of the three-dimensional Euclidean space as a model for (3+1)-dimensional spacetime, representing a point in spacetime via a paravector.
The finite potential well is a concept from quantum mechanics. It is an extension of the infinite potential well, in which a particle is confined to a "box", but one which has finite potential "walls". Unlike the infinite potential well, there is a probability associated with the particle being found outside the box. The quantum mechanical interpretation is unlike the classical interpretation, where if the total energy of the particle is less than the potential energy barrier of the walls it cannot be found outside the box. In the quantum interpretation, there is a non-zero probability of the particle being outside the box even when the energy of the particle is less than the potential energy barrier of the walls.
In mathematics, delay differential equations (DDEs) are a type of differential equation in which the derivative of the unknown function at a certain time is given in terms of the values of the function at previous times. DDEs are also called time-delay systems, systems with aftereffect or dead-time, hereditary systems, equations with deviating argument, or differential-difference equations. They belong to the class of systems with functional state, i.e. they are infinite-dimensional, like partial differential equations (PDEs), as opposed to ordinary differential equations (ODEs), which have a finite-dimensional state vector. Four points may give a possible explanation of the popularity of DDEs: the practical importance of aftereffect, the resistance of delay systems to many classical controllers, the fact that deliberately introduced delays can benefit a control system, and their role as simple infinite-dimensional models within the much more complex area of PDEs.
In physics and fluid mechanics, a Blasius boundary layer describes the steady two-dimensional laminar boundary layer that forms on a semi-infinite plate which is held parallel to a constant unidirectional flow. Falkner and Skan later generalized Blasius' solution to wedge flow, i.e. flows in which the plate is not parallel to the flow.
The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems, characterized by at most nearest-neighbour interactions. It is dubbed Time-evolving Block Decimation because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the Matrix Product States formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.
The Poisson–Boltzmann equation describes the distribution of the electric potential in solution in the direction normal to a charged surface. This distribution is important to determine how the electrostatic interactions will affect the molecules in solution. The Poisson–Boltzmann equation is derived via mean-field assumptions. From the Poisson–Boltzmann equation many other equations have been derived with a number of different assumptions.
The derivation of the Navier–Stokes equations, as well as their application and formulation for different families of fluids, is an important exercise in fluid dynamics with applications in mechanical engineering, physics, chemistry, heat transfer, and electrical engineering. A proof explaining the properties and bounds of the equations, such as Navier–Stokes existence and smoothness, is one of the important unsolved problems in mathematics.
In mathematics — specifically, in stochastic analysis — the infinitesimal generator of a Feller process is a Fourier multiplier operator that encodes a great deal of information about the process.
The Hasse–Davenport relations, introduced by Davenport and Hasse, are two related identities for Gauss sums, one called the Hasse–Davenport lifting relation, and the other called the Hasse–Davenport product relation. The Hasse–Davenport lifting relation is an equality in number theory relating Gauss sums over different fields. Weil (1949) used it to calculate the zeta function of a Fermat hypersurface over a finite field, which motivated the Weil conjectures.
The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.
In fluid dynamics, a cnoidal wave is a nonlinear and exact periodic wave solution of the Korteweg–de Vries equation. These solutions are in terms of the Jacobi elliptic function cn, which is why they are coined cnoidal waves. They are used to describe surface gravity waves of fairly long wavelength, as compared to the water depth.
In mathematics, a continuous-time random walk (CTRW) is a generalization of a random walk where the wandering particle waits for a random time between jumps. It is a stochastic jump process with arbitrary distributions of jump lengths and waiting times. More generally it can be seen to be a special case of a Markov renewal process.