Nondimensionalization

Nondimensionalization is the partial or full removal of physical dimensions from an equation involving physical quantities by a suitable substitution of variables. This technique can simplify and parameterize problems where measured units are involved. It is closely related to dimensional analysis. In some physical systems, the term scaling is used interchangeably with nondimensionalization, in order to suggest that certain quantities are better measured relative to some appropriate unit. These units refer to quantities intrinsic to the system, rather than units such as SI units. Nondimensionalization is not the same as converting extensive quantities in an equation to intensive quantities, since the latter procedure results in variables that still carry units.

Nondimensionalization can also recover characteristic properties of a system. For example, if a system has an intrinsic resonance frequency, length, or time constant, nondimensionalization can recover these values. The technique is especially useful for systems that can be described by differential equations. One important use is in the analysis of control systems. One of the simplest characteristic units is the doubling time of a system experiencing exponential growth, or conversely the half-life of a system experiencing exponential decay; a more natural pair of characteristic units is mean age/mean lifetime, which correspond to base e rather than base 2.

Many illustrative examples of nondimensionalization originate from simplifying differential equations, because a large body of physical problems can be formulated in terms of differential equations.

Although nondimensionalization is well adapted for these problems, it is not restricted to them. An example of a non-differential-equation application is dimensional analysis; another example is normalization in statistics.

Measuring devices are practical examples of nondimensionalization occurring in everyday life. Measuring devices are calibrated relative to some known unit. Subsequent measurements are made relative to this standard. Then, the absolute value of the measurement is recovered by scaling with respect to the standard.

Rationale

Suppose a pendulum is swinging with a particular period T. For such a system, it is advantageous to perform calculations relating to the swinging relative to T. In some sense, this is normalizing the measurement with respect to the period.

Measurements made relative to an intrinsic property of a system will apply to other systems which also have the same intrinsic property. It also allows one to compare a common property of different implementations of the same system. Nondimensionalization determines in a systematic manner the characteristic units of a system to use, without relying heavily on prior knowledge of the system's intrinsic properties (one should not confuse characteristic units of a system with natural units of nature). In fact, nondimensionalization can suggest the parameters which should be used for analyzing a system. However, it is necessary to start with an equation that describes the system appropriately.

Nondimensionalization steps

To nondimensionalize a system of equations, one must do the following:

  1. Identify all the independent and dependent variables;
  2. Replace each of them with a quantity scaled relative to a characteristic unit of measure to be determined;
  3. Divide through by the coefficient of the highest order polynomial or derivative term;
  4. Choose judiciously the definition of the characteristic unit for each variable so that the coefficients of as many terms as possible become 1;
  5. Rewrite the system of equations in terms of their new dimensionless quantities.

The last three steps are usually specific to the problem where nondimensionalization is applied. However, almost all systems require the first two steps to be performed.
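
As a sketch of these steps in action, the following Python example (the decay equation a·dx/dt + b·x = 0 and the names a, b, x0 are illustrative assumptions, not prescribed by the procedure) shows two systems with different dimensional parameters collapsing onto the same dimensionless trajectory once time is scaled by the characteristic unit tc = a/b:

```python
import math

def chi(tau):
    # Dimensionless solution of dchi/dtau + chi = 0 with chi(0) = 1.
    return math.exp(-tau)

def x(t, a, b, x0):
    # Dimensional solution of a*dx/dt + b*x = 0 with x(0) = x0.
    return x0 * math.exp(-b * t / a)

def collapse(t, a, b, x0):
    # Scale by the characteristic units: tau = t/tc with tc = a/b,
    # and chi = x/xc with xc = x0 (the natural amplitude here).
    tc, xc = a / b, x0
    return x(t, a, b, x0) / xc, t / tc
```

Any two parameter sets that map to the same τ give the same value of χ, which is exactly what the nondimensionalized equation predicts.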

Conventions

There are no restrictions on the variable names used to replace "x" and "t". However, they are generally chosen so that it is convenient and intuitive to use for the problem at hand. For example, if "x" represented mass, the letter "m" might be an appropriate symbol to represent the dimensionless mass quantity.

In this article, the following conventions have been used:

  • t – represents the independent variable, usually a time quantity. Its nondimensionalized counterpart is τ.
  • x – represents the dependent variable, usually an amplitude quantity. Its nondimensionalized counterpart is χ.
  • A subscripted c added to a quantity's variable name is used to denote the characteristic unit used to scale that quantity. For example, if x is a quantity, then xc is the characteristic unit used to scale it.

As an illustrative example, consider a first order differential equation with constant coefficients:

  a dx/dt + b x = 0.

  1. In this equation the independent variable is t, and the dependent variable is x.
  2. Set x = χ xc and t = τ tc. This results in the equation

       a (xc/tc) dχ/dτ + b xc χ = 0.

  3. The coefficient of the highest ordered term is in front of the first derivative term. Dividing by this gives

       dχ/dτ + (b tc/a) χ = 0.

  4. The coefficient in front of χ contains only one characteristic variable tc, hence it is easiest to choose to set this to unity first:

       b tc/a = 1  ⇒  tc = a/b.

     Subsequently,

       dχ/dτ + χ = 0.

  5. The final dimensionless equation in this case becomes completely independent of any parameters with units:

       dχ/dτ + χ = 0.

Substitutions

Suppose for simplicity that a certain system is characterized by two variables - a dependent variable x and an independent variable t, where x is a function of t. Both x and t represent quantities with units. To scale these two variables, assume there are two intrinsic units of measurement xc and tc with the same units as x and t respectively, such that these conditions hold:

  x = χ xc,    t = τ tc.

These equations are used to replace x and t when nondimensionalizing. If differential operators are needed to describe the original system, their scaled counterparts become dimensionless differential operators.

Differential operators

Consider the relationship

  t = τ tc  ⇒  dt = tc dτ.

The dimensionless differential operators with respect to the independent variable become

  d/dt = (1/tc) d/dτ,    d^n/dt^n = (1/tc^n) d^n/dτ^n.
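
A quick numerical sanity check of this operator scaling (a sketch; the test function sin and the value tc = 2.5 are arbitrary choices):

```python
import math

def deriv(g, u, h=1e-6):
    # Central finite-difference approximation of dg/du.
    return (g(u + h) - g(u - h)) / (2 * h)

def check_operator(f, tc, t):
    # F(tau) = f(tau*tc) is f rewritten in scaled time.
    # Returns (df/dt at t, (1/tc)*dF/dtau at tau = t/tc); the two
    # should agree, since d/dt = (1/tc) d/dtau.
    F = lambda tau: f(tau * tc)
    return deriv(f, t), deriv(F, t / tc) / tc

lhs, rhs = check_operator(math.sin, 2.5, 1.0)
```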

Forcing function

If a system has a forcing function f(t), then

  f(t) = f(τ tc) = F(τ).

Hence, the new forcing function F is made to be dependent on the dimensionless quantity τ.

Linear differential equations with constant coefficients

First order system

Consider the differential equation for a first order system:

  a dx/dt + b x = A f(t).

The derivation of the characteristic units for this system gives

  tc = a/b,    xc = A/b.
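
These characteristic units can be captured in a one-line helper (a sketch, assuming the standard first order form a·dx/dt + b·x = A·f(t); tc follows from normalizing the coefficient of x, and xc from normalizing the forcing amplitude):

```python
def first_order_units(a, b, A):
    # Characteristic time and amplitude of a*dx/dt + b*x = A*f(t).
    tc = a / b   # makes the coefficient of chi equal to 1
    xc = A / b   # makes the forcing amplitude equal to 1
    return tc, xc
```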

Second order system

A second order system has the form

  a d^2x/dt^2 + b dx/dt + c x = A f(t).

Substitution step

Replace the variables x and t with their scaled quantities. The equation becomes

  a (xc/tc^2) d^2χ/dτ^2 + b (xc/tc) dχ/dτ + c xc χ = A F(τ).

This new equation is not dimensionless, although all the variables with units are isolated in the coefficients. Dividing by the coefficient of the highest ordered term, the equation becomes

  d^2χ/dτ^2 + (b tc/a) dχ/dτ + (c tc^2/a) χ = (A tc^2/(a xc)) F(τ).

Now it is necessary to determine the quantities of xc and tc so that the coefficients become normalized. Since there are two free parameters, at most only two coefficients can be made to equal unity.

Determination of characteristic units

Consider the variable tc:

  1. If tc = a/b, the first order term is normalized.
  2. If tc = √(a/c), the zeroth order term is normalized.

Both substitutions are valid. However, for pedagogical reasons, the latter substitution is used for second order systems. Choosing this substitution allows xc to be determined by normalizing the coefficient of the forcing function:

  1 = A tc^2/(a xc) = A/(c xc)  ⇒  xc = A/c.

The differential equation becomes

  d^2χ/dτ^2 + (b/√(ac)) dχ/dτ + χ = F(τ).

The coefficient of the first order term is unitless. Define

  2ζ = b/√(ac).

The factor 2 is present so that the solutions can be parameterized in terms of ζ. In the context of mechanical or electrical systems, ζ is known as the damping ratio, and is an important parameter required in the analysis of control systems. 2ζ is also known as the linewidth of the system. The result of the definition is the universal oscillator equation:

  d^2χ/dτ^2 + 2ζ dχ/dτ + χ = F(τ).
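
These relations can be collected in a small helper (a sketch, assuming the second order form a·d^2x/dt^2 + b·dx/dt + c·x = A·f(t) and using tc = √(a/c), xc = A/c as derived above):

```python
import math

def second_order_params(a, b, c, A):
    # Characteristic units and damping ratio for
    # a*x'' + b*x' + c*x = A*f(t).
    tc = math.sqrt(a / c)              # normalizes the zeroth order term
    xc = A / c                         # normalizes the forcing amplitude
    zeta = b / (2 * math.sqrt(a * c))  # damping ratio: 2*zeta = b/sqrt(a*c)
    return tc, xc, zeta
```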

Higher order systems

The general n-th order linear differential equation with constant coefficients has the form:

  a_n d^n x/dt^n + a_(n−1) d^(n−1) x/dt^(n−1) + … + a_1 dx/dt + a_0 x = A f(t).

The function f(t) is known as the forcing function.

If the differential equation has only real (not complex) coefficients, then such a system behaves as a mixture of first and second order systems only, because the roots of its characteristic polynomial are either real or complex conjugate pairs. Therefore, understanding how nondimensionalization applies to first and second order systems allows the properties of higher order systems to be determined through superposition.

The number of free parameters in a nondimensionalized form of a system increases with its order. For this reason, nondimensionalization is rarely used for higher order differential equations. The need for this procedure has also been reduced with the advent of symbolic computation.

Examples of recovering characteristic units

A variety of systems can be approximated as either first or second order systems. These include mechanical, electrical, fluidic, caloric, and torsional systems. This is because the fundamental physical quantities involved within each of these examples are related through first and second order derivatives.

Mechanical oscillations

Figure: A mass attached to a spring and a damper.

Suppose we have a mass attached to a spring and a damper, which in turn are attached to a wall, and a force acting on the mass along the same line. Define

  • x = displacement from equilibrium [m]
  • t = time [s]
  • f = external force or "disturbance" applied to system [kg⋅m⋅s−2]
  • m = mass of the block [kg]
  • B = damping constant of dashpot [kg⋅s−1]
  • k = force constant of spring [kg⋅s−2]

Suppose the applied force is a sinusoid F = F0 cos(ωt). The differential equation that describes the motion of the block is

  m d^2x/dt^2 + B dx/dt + k x = F0 cos(ωt).

Nondimensionalizing this equation the same way as described for a second order system yields several characteristics of the system.

The intrinsic unit xc corresponds to the distance the block moves per unit force:

  xc = F0/k.

The characteristic variable tc sets the timescale of the oscillations (the period of undamped oscillations is 2π tc):

  tc = √(m/k),

and the dimensionless variable 2ζ = B/√(mk) corresponds to the linewidth of the system. ζ itself is the damping ratio.
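
The recovered quantities, together with the standard damping classification from control systems (ζ < 1 underdamped, ζ = 1 critically damped, ζ > 1 overdamped), can be sketched as follows (parameter names m, B, k, F0 follow this section):

```python
import math

def msd_units(m, B, k, F0):
    # Characteristic units of m*x'' + B*x' + k*x = F0*cos(w*t).
    xc = F0 / k                          # distance moved per unit force
    tc = math.sqrt(m / k)                # timescale (period = 2*pi*tc)
    zeta = B / (2.0 * math.sqrt(m * k))  # damping ratio
    return xc, tc, zeta

def damping_regime(m, B, k):
    # Classify the response by the dimensionless damping ratio.
    zeta = B / (2.0 * math.sqrt(m * k))
    if zeta < 1.0:
        return "underdamped"
    if zeta == 1.0:
        return "critically damped"
    return "overdamped"
```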

Electrical oscillations

First-order series RC circuit

For a series RC circuit attached to a voltage source

  R dQ/dt + Q/C = V(t)

with substitutions

  xc = C V0,    tc = RC

(where V0 is the amplitude of the source), the equation becomes

  dχ/dτ + χ = F(τ).

The first characteristic unit corresponds to the total charge in the circuit. The second characteristic unit corresponds to the time constant for the system.
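
For the special case of a constant (step) source V(t) = V0, an assumption made here for illustration, the charge follows Q(t) = C·V0·(1 − e^(−t/RC)), i.e. χ(τ) = 1 − e^(−τ). A minimal sketch:

```python
import math

def rc_units(R, C, V0):
    # (xc: total charge stored, tc: time constant of the circuit).
    return C * V0, R * C

def rc_charge(R, C, V0, t):
    # Step response: Q(t) = xc * (1 - exp(-t/tc)), assuming V(t) = V0.
    xc, tc = rc_units(R, C, V0)
    return xc * (1 - math.exp(-t / tc))
```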

Second-order series RLC circuit

For a series configuration of R, C, L components where Q is the charge in the system

  L d^2Q/dt^2 + R dQ/dt + Q/C = V0 cos(ωt)

with the substitutions

  xc = C V0,    tc = √(LC),    2ζ = R √(C/L),    Ω = ω √(LC),

the equation becomes

  d^2χ/dτ^2 + 2ζ dχ/dτ + χ = cos(Ωτ).

The first variable corresponds to the maximum charge stored in the circuit. The resonance frequency is given by the reciprocal of the characteristic time, ω0 = 1/√(LC). The last expression is the linewidth of the system. Ω can be considered as a normalized forcing function frequency.
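
The dimensionless parameters of the series RLC equation can be computed directly (a sketch; names L, R, C, V0 and drive frequency w follow this section):

```python
import math

def rlc_params(L, R, C, V0, w):
    # Characteristic quantities of L*Q'' + R*Q' + Q/C = V0*cos(w*t).
    xc = C * V0                        # maximum charge stored
    tc = math.sqrt(L * C)              # reciprocal of resonance frequency
    linewidth = R * math.sqrt(C / L)   # 2*zeta
    Omega = w * tc                     # normalized forcing frequency
    return xc, tc, linewidth, Omega
```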

Quantum mechanics

Quantum harmonic oscillator

The Schrödinger equation for the one-dimensional time-independent quantum harmonic oscillator is

  (−ħ^2/(2m)) d^2ψ/dx^2 + (1/2) m ω^2 x^2 ψ(x) = E ψ(x).

The modulus square of the wavefunction |ψ(x)|^2 represents a probability density that, when integrated over x, gives a dimensionless probability. Therefore, |ψ(x)|^2 has units of inverse length. To nondimensionalize this, it must be rewritten as a function of a dimensionless variable. To do this, we substitute

  χ = x/xc,

where xc is some characteristic length of this system. This gives a dimensionless wave function φ defined via

  ψ(x) = φ(χ)/√xc.

The differential equation then becomes

  (−ħ^2/(2 m xc^2)) d^2φ/dχ^2 + (1/2) m ω^2 xc^2 χ^2 φ = E φ.

Choosing xc so that the two coefficients on the left-hand side are equal,

  ħ^2/(2 m xc^2) = (1/2) m ω^2 xc^2  ⇒  xc = √(ħ/(mω)),

and dividing through by ħω/2, the fully nondimensionalized equation is

  (−d^2/dχ^2 + χ^2) φ = ε φ,

where we have defined

  ε = 2E/(ħω).

The factor ħω/2 multiplying ε is in fact (coincidentally) the ground state energy of the harmonic oscillator. Usually, the energy term is not made dimensionless, as we are interested in determining the energies of the quantum states. Rearranging the first equation, the familiar equation for the harmonic oscillator becomes

  (ħω/2)(−d^2/dχ^2 + χ^2) φ = E φ.
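
A short numeric sketch of these characteristic quantities (the value of ħ is the CODATA recommended value; the inputs m and ω are arbitrary):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant [J*s] (CODATA value)

def qho_units(m, omega):
    # Characteristic length xc = sqrt(hbar/(m*omega)) and the ground
    # state energy hbar*omega/2 of the harmonic oscillator.
    xc = math.sqrt(HBAR / (m * omega))
    E0 = 0.5 * HBAR * omega
    return xc, E0
```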

Statistical analogs

In statistics, the analogous process is usually dividing a difference (a distance) by a scale factor (a measure of statistical dispersion), which yields a dimensionless number; this is called normalization. Most often, this means dividing errors or residuals by the standard deviation or sample standard deviation, respectively, yielding standard scores and studentized residuals.
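
A minimal sketch of this normalization using Python's standard library: dividing deviations from the mean by the sample standard deviation yields dimensionless standard scores (z-scores), which are invariant under rescaling of the data's units.

```python
import statistics

def z_scores(data):
    # Standard scores: (x - mean) / sample standard deviation.
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return [(x - mean) / sd for x in data]
```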
