Lanchester's laws are mathematical formulas for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two armies' strengths A and B, with the rate of change of each depending only on A and B. [1] [2]
In 1915 and 1916 during World War I, M. Osipov [3] : vii–viii and Frederick Lanchester independently devised a series of differential equations to demonstrate the power relationships between opposing forces. [4] Among these are what is known as Lanchester's linear law (for ancient combat) and Lanchester's square law (for modern combat with long-range weapons such as firearms).
As of 2017 modified variations of the Lanchester equations continue to form the basis of analysis in many of the US Army's combat simulations, [5] and in 2016 a RAND Corporation report used these laws to examine the probable outcome of a Russian invasion of the Baltic nations of Estonia, Latvia, and Lithuania. [6]
For ancient combat, between phalanxes of soldiers with spears for example, one soldier could only ever fight exactly one other soldier at a time. If each soldier kills, and is killed by, exactly one other, then the number of soldiers remaining at the end of the battle is simply the difference between the larger army and the smaller, assuming identical weapons.
The linear law also applies to unaimed fire into an enemy-occupied area. The rate of attrition depends on the density of the available targets in the target area as well as the number of weapons shooting. If two forces, occupying the same land area and using the same weapons, shoot randomly into the same target area, they will both suffer the same rate and number of casualties, until the smaller force is eventually eliminated: the greater probability of any one shot hitting the larger force is balanced by the greater number of shots directed at the smaller force.
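In symbols, a sketch of this area-fire version of the linear law (a reconstruction consistent with the description above, with α and β as per-unit effectiveness constants, rather than a formula quoted from the sources) is:

$$\frac{dA}{dt} = -\beta A B, \qquad \frac{dB}{dt} = -\alpha A B,$$

so that the quantity αA − βB stays constant during the battle: fighting strength grows only linearly with numbers, and two equally effective forces trade losses one for one until the smaller is eliminated.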
Lanchester's square law is also known as the N-square law.
With firearms engaging each other directly with aimed fire from a distance, a unit can attack multiple targets and can receive fire from multiple directions. The rate of attrition now depends only on the number of weapons shooting. Lanchester determined that the power of such a force is proportional not to the number of units it has, but to the square of the number of units. This is known as Lanchester's square law.
More precisely, the law specifies the casualties a shooting force will inflict over a period of time, relative to those inflicted by the opposing force. In its basic form, the law is only useful to predict outcomes and casualties by attrition. It does not apply to whole armies, where tactical deployment means not all troops will be engaged all the time. It only works where each unit (soldier, ship, etc.) can kill only one equivalent unit at a time. For this reason, the law does not apply to machine guns, artillery with unguided munitions, or nuclear weapons. The law requires an assumption that casualties accumulate over time: it does not work in situations in which opposing troops kill each other instantly, either by shooting simultaneously or by one side getting off the first shot and inflicting multiple casualties.
Note that Lanchester's square law applies only to numerical strength, not to technological quality; it therefore requires an N-squared-fold increase in quality to compensate for an N-fold decrease in quantity. For example, halving a force's numbers must be offset by quadrupling each unit's effectiveness, since αA² = (4α)(A/2)².
Suppose that two armies, Red and Blue, are engaging each other in combat. Red is shooting a continuous stream of bullets at Blue. Meanwhile, Blue is shooting a continuous stream of bullets at Red.
Let symbol A represent the number of soldiers in the Red force. Each one has offensive firepower α, which is the number of enemy soldiers it can incapacitate (e.g., kill or injure) per unit time. Likewise, Blue has B soldiers, each with offensive firepower β.
Lanchester's square law calculates the number of soldiers lost on each side using the following pair of equations. [7] Here, dA/dt represents the rate at which the number of Red soldiers is changing at a particular instant. A negative value indicates the loss of soldiers. Similarly, dB/dt represents the rate of change of the number of Blue soldiers.

$$\frac{dA}{dt} = -\beta B$$
$$\frac{dB}{dt} = -\alpha A$$
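As an illustration, here is a minimal numerical sketch of these equations using Euler integration; the force sizes and effectiveness values are invented for the example:

```python
# Minimal numerical sketch of Lanchester's square law via Euler integration.
# The force sizes and effectiveness values below are invented for the example.

def square_law(A, B, alpha, beta, dt=0.001):
    """Integrate dA/dt = -beta*B and dB/dt = -alpha*A until one side is gone."""
    t = 0.0
    while A > 0 and B > 0:
        A, B = A - beta * B * dt, B - alpha * A * dt
        t += dt
    return max(A, 0.0), max(B, 0.0), t

# Two equally effective sides; Red starts with twice as many soldiers.
red_left, blue_left, duration = square_law(A=2000, B=1000, alpha=0.1, beta=0.1)
print(red_left, blue_left, duration)
# Red's survivors come out near sqrt(2000**2 - 1000**2) ~ 1732, as the
# conserved quantity alpha*A**2 - beta*B**2 predicts.
```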
The solution to these equations shows that:

  * if the two sides have equal firepower (α = β), the side with more soldiers at the start of the battle wins;
  * if the two sides have equal numbers (A = B), the side with greater firepower wins;
  * if one side has both more soldiers and greater firepower, that side wins;
  * in general, the winning side is the one with the greater value of the product of firepower and the square of initial strength, αA² or βB².
The first three of these conclusions are obvious. The final one is the origin of the name "square law".
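The name can be made explicit with a one-line check that the quantity αA² − βB² is conserved by the equations above:

$$\frac{d}{dt}\left(\alpha A^2 - \beta B^2\right) = 2\alpha A\frac{dA}{dt} - 2\beta B\frac{dB}{dt} = 2\alpha A(-\beta B) - 2\beta B(-\alpha A) = 0.$$

Red therefore wins whenever αA₀² > βB₀²: effectiveness enters linearly, but numbers enter squared.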
Lanchester's equations are related to the more recent salvo combat model equations, with two main differences.
First, Lanchester's original equations form a continuous time model, whereas the basic salvo equations form a discrete time model. In a gun battle, bullets or shells are typically fired in large quantities. Each round has a relatively low chance of hitting its target, and does a relatively small amount of damage. Therefore, Lanchester's equations model gunfire as a stream of firepower that continuously weakens the enemy force over time.
By comparison, cruise missiles typically are fired in relatively small quantities. Each one has a high probability of hitting its target, and carries a relatively powerful warhead. Therefore, it makes more sense to model them as a discrete pulse (or salvo) of firepower in a discrete time model.
Second, Lanchester's equations include only offensive firepower, whereas the salvo equations also include defensive firepower. Given their small size and large number, it is not practical to intercept bullets and shells in a gun battle. By comparison, cruise missiles can be intercepted (shot down) by surface-to-air missiles and anti-aircraft guns. Therefore, missile combat models include those active defenses.
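A small discrete-time sketch can make the contrast concrete. The following is an illustration in the spirit of the salvo comparison above, not the exact equations of the published salvo combat model; all parameter names and values are assumptions for the example:

```python
# Discrete-time illustration in the spirit of the salvo comparison above.
# Parameter names and numbers are assumptions for the example, not the
# published salvo combat model's exact equations.

def salvo_losses(defenders, attackers, missiles_per_attacker,
                 intercepts_per_defender, staying_power):
    """Defending ships knocked out by one incoming salvo."""
    leakers = max(attackers * missiles_per_attacker
                  - defenders * intercepts_per_defender, 0.0)
    return min(leakers / staying_power, defenders)

A, B = 10.0, 12.0
while A >= 1 and B >= 1:  # treat a fraction of a ship as out of action
    # Both sides fire simultaneously; each salvo is a discrete pulse,
    # and defensive fire subtracts from the incoming missiles.
    dA = salvo_losses(A, B, missiles_per_attacker=1.0,
                      intercepts_per_defender=0.5, staying_power=2.0)
    dB = salvo_losses(B, A, missiles_per_attacker=1.0,
                      intercepts_per_defender=0.5, staying_power=2.0)
    A, B = A - dA, B - dB
print(A, B)
```

Unlike the continuous stream of the Lanchester model, each iteration here removes a lump of strength at once, and a strong enough defense can reduce a salvo's losses to zero.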
Lanchester's laws have been used to model historical battles for research purposes. Examples include Pickett's Charge of Confederate infantry against Union infantry during the 1863 Battle of Gettysburg, [8] the 1940 Battle of Britain between the British and German air forces, [9] and the Battle of Kursk. [10]
In modern warfare, where to some extent both the linear and the square law apply, an exponent of 1.5 is often used. [11] [12] [3] : 7-5–7-8 Lanchester's laws have also been used to model guerrilla warfare. [13] The laws have also been applied to repeat battles with a range of inter-battle reinforcement strategies. [14]
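One way to make the intermediate exponent concrete (an illustration rather than a formula from the sources cited above) is to compare generalized fighting strengths

$$\alpha A^{p} \quad \text{versus} \quad \beta B^{p},$$

where p = 1 gives the linear law, p = 2 the square law, and p = 1.5 the intermediate value mentioned above.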
Attempts have been made to apply Lanchester's laws to conflicts between animal groups. [15] Examples include tests with chimpanzees [16] and ants. The chimpanzee application was relatively successful. A study of Australian meat ants and Argentine ants confirmed the square law, [17] but a study of fire ants did not confirm the square law. [18]
The Helmbold Parameters provide quick, concise, exact numerical indices, soundly based on historical data, for comparing battles with respect to their bitterness and the degree to which one side had the advantage. While their definition is modeled after a solution of the Lanchester Square Law's differential equations, their numerical values are based entirely on the initial and final strengths of the opponents and in no way depend upon the validity of Lanchester's Square Law as a model of attrition during the course of a battle.
The solution of Lanchester's Square Law used here can be written as:

$$A(t) = A_0\left[\cosh(\sigma t) - \lambda\,\sinh(\sigma t)\right]$$
$$B(t) = B_0\left[\cosh(\sigma t) - \lambda^{-1}\sinh(\sigma t)\right]$$

Where:

  * A(t) and B(t) are the two sides' strengths at time t, with initial values A₀ and B₀;
  * σ = √(αβ) is the intensity of the battle;
  * λ = √(βB₀²/(αA₀²)) is the advantage parameter, with λ < 1 favoring Red and λ > 1 favoring Blue.

If the initial and final strengths of the two sides are known it is possible to solve for the casualty fractions, the advantage parameter λ, and the bitterness parameter ε = σd, where d is the battle's duration. If the battle duration is also known, then it is possible to solve for the intensity σ. [19] [20] [21]
If, as is normally the case, ε is small enough that the hyperbolic functions can, without any significant error, be replaced by their series expansion up to terms in the first power of ε, and if the abbreviations adopted for the casualty fractions are F_A = (A₀ − A_f)/A₀ and F_B = (B₀ − B_f)/B₀ (where A_f and B_f are the final strengths), then the approximate relations that hold include ε ≈ √(F_A·F_B) and λ ≈ √(F_A/F_B). [22] That ε is a kind of "average" (specifically, the geometric mean) of the casualty fractions justifies using it as an index of the bitterness of the battle.
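A minimal sketch of these approximate indices, assuming the reconstructed relations above (the strength figures are invented for illustration):

```python
import math

# Sketch of the approximate Helmbold indices, assuming the reconstructed
# relations above: bitterness as the geometric mean of the casualty
# fractions, advantage as the square root of their ratio.

def helmbold_indices(a0, a_final, b0, b_final):
    F_a = (a0 - a_final) / a0  # side A's casualty fraction
    F_b = (b0 - b_final) / b0  # side B's casualty fraction
    bitterness = math.sqrt(F_a * F_b)  # epsilon
    advantage = math.sqrt(F_a / F_b)   # lambda; < 1 favors side A
    return bitterness, advantage

eps, lam = helmbold_indices(a0=10000, a_final=9000, b0=8000, b_final=6800)
print(f"bitterness ~ {eps:.3f}, advantage ~ {lam:.3f}")
```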
Statistical work prefers natural logarithms of the Helmbold Parameters. They are noted ln λ, ln ε, and ln σ.
See Helmbold (2021), whose battle-level indices show a post-WWII decline in casualties. Some observers have noticed a similar post-WWII decline in casualties at the level of wars instead of battles. [28] [29] [30] [31]