Lanchester's laws

Lanchester's laws are mathematical formulas for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two armies' strengths A and B, where the rate of change of each depends only on A and B. [1] [2]

In 1915 and 1916 during World War I, M. Osipov [3] :vii–viii and Frederick Lanchester independently devised a series of differential equations to demonstrate the power relationships between opposing forces. [4] Among these are what are known as Lanchester's linear law (for ancient combat) and Lanchester's square law (for modern combat with long-range weapons such as firearms).

As of 2017 modified variations of the Lanchester equations continue to form the basis of analysis in many of the US Army's combat simulations, [5] and in 2016 a RAND Corporation report used these laws to examine the probable outcome of a Russian invasion of the Baltic nations of Estonia, Latvia, and Lithuania. [6]

Lanchester's linear law

For ancient combat, between phalanxes of soldiers with spears for example, one soldier could only ever fight exactly one other soldier at a time. If each soldier kills, and is killed by, exactly one other, then the number of soldiers remaining at the end of the battle is simply the difference between the larger army and the smaller, assuming identical weapons.

The linear law also applies to unaimed fire into an enemy-occupied area. The rate of attrition depends on the density of the available targets in the target area as well as the number of weapons shooting. If two forces, occupying the same land area and using the same weapons, shoot randomly into the same target area, they will both suffer the same rate and number of casualties, until the smaller force is eventually eliminated: the greater probability of any one shot hitting the larger force is balanced by the greater number of shots directed at the smaller force.
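The unaimed-fire case can be sketched numerically. The snippet below is a minimal illustration (the function name and coefficients are ours, not from the sources): each side's losses are proportional to the product of both strengths, so the quantity αA − βB stays constant, and with identical weapons the survivors number the difference between the two armies.

```python
# Minimal sketch of Lanchester's linear law for unaimed area fire:
# each side's losses are proportional to the product of both strengths,
# so alpha*A - beta*B is conserved throughout the battle.

def linear_law(A, B, alpha, beta, dt=1e-4, steps=200_000):
    """Euler-integrate dA/dt = -beta*A*B, dB/dt = -alpha*A*B."""
    for _ in range(steps):
        if A <= 0 or B <= 0:
            break
        dA = -beta * A * B * dt   # Red's losses from Blue's unaimed fire
        dB = -alpha * A * B * dt  # Blue's losses from Red's unaimed fire
        A, B = A + dA, B + dB
    return A, B

# Identical weapons (alpha == beta): survivors = difference of the armies.
# A - B stays at 200 while the smaller force is worn down toward zero.
A_end, B_end = linear_law(1000, 800, alpha=0.01, beta=0.01)
```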

Lanchester's square law

Lanchester's square law is also known as the N-square law.

Description

Idealized simulation of two forces damaging each other, neglecting all circumstances other than (1) the size of each army and (2) the rate of damage. The illustration shows the principle of Lanchester's square law.

Soldiers using firearms to engage each other directly with aimed fire from a distance can attack multiple targets and can receive fire from multiple directions. The rate of attrition now depends only on the number of weapons shooting. Lanchester determined that the power of such a force is proportional not to the number of units it has, but to the square of the number of units. This is known as Lanchester's square law.

More precisely, the law specifies the casualties a shooting force will inflict over a period of time, relative to those inflicted by the opposing force. In its basic form, the law is only useful to predict outcomes and casualties by attrition. It does not apply to whole armies, where tactical deployment means not all troops will be engaged all the time. It only works where each unit (soldier, ship, etc.) can kill only one equivalent unit at a time. For this reason, the law does not apply to machine guns, artillery with unguided munitions, or nuclear weapons. The law requires an assumption that casualties accumulate over time: it does not work in situations in which opposing troops kill each other instantly, either by shooting simultaneously or by one side getting off the first shot and inflicting multiple casualties.

Note that Lanchester's square law does not apply to technological force, only numerical force; so it requires an N-squared-fold increase in quality to compensate for an N-fold decrease in quantity.

Example equations

Suppose that two armies, Red and Blue, are engaging each other in combat. Red is shooting a continuous stream of bullets at Blue. Meanwhile, Blue is shooting a continuous stream of bullets at Red.

Let symbol A represent the number of soldiers in the Red force. Each one has offensive firepower α, which is the number of enemy soldiers it can incapacitate (e.g., kill or injure) per unit time. Likewise, Blue has B soldiers, each with offensive firepower β.

Lanchester's square law calculates the number of soldiers lost on each side using the following pair of equations. [7] Here, dA/dt represents the rate at which the number of Red soldiers is changing at a particular instant. A negative value indicates the loss of soldiers. Similarly, dB/dt represents the rate of change of the number of Blue soldiers.

dA/dt = −βB
dB/dt = −αA

The solution to these equations shows that:

  - If α = β (the two sides have equal firepower), the side with more soldiers at the start of the battle wins;
  - If A = B (the two sides have equal numbers), the side with greater firepower wins;
  - If A > B and α > β, then Red wins, while if A < B and α < β, Blue wins;
  - In general, the side with the larger value of αA² (respectively βB²) wins; the fighting strengths of the two sides are equal when αA² = βB².

The first three of these conclusions are obvious. The final one is the origin of the name "square law".
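As a numerical check on the final conclusion, the equations can be integrated directly. This sketch is illustrative only (the function name and coefficients are our own): with equal firepower the invariant αA² − βB² predicts the survivors, so 1000 against 800 leaves √(1000² − 800²) = 600.

```python
# Minimal sketch of Lanchester's square law: dA/dt = -beta*B, dB/dt = -alpha*A.
# The quantity alpha*A**2 - beta*B**2 is conserved during the fight.

def square_law(A, B, alpha, beta, dt=1e-3, steps=50_000):
    """Euler-integrate the square-law equations until one side is destroyed."""
    for _ in range(steps):
        if A <= 0 or B <= 0:
            break
        dA = -beta * B * dt   # Red's losses are driven by Blue's numbers
        dB = -alpha * A * dt  # Blue's losses are driven by Red's numbers
        A, B = A + dA, B + dB
    return max(A, 0.0), max(B, 0.0)

# Equal firepower, 1000 vs 800: sqrt(1000**2 - 800**2) = 600 Red survivors.
A_end, B_end = square_law(1000, 800, alpha=0.1, beta=0.1)
```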

Relation to the salvo combat model

Lanchester's equations are related to the more recent salvo combat model equations, with two main differences.

First, Lanchester's original equations form a continuous time model, whereas the basic salvo equations form a discrete time model. In a gun battle, bullets or shells are typically fired in large quantities. Each round has a relatively low chance of hitting its target, and does a relatively small amount of damage. Therefore, Lanchester's equations model gunfire as a stream of firepower that continuously weakens the enemy force over time.

By comparison, cruise missiles typically are fired in relatively small quantities. Each one has a high probability of hitting its target, and carries a relatively powerful warhead. Therefore, it makes more sense to model them as a discrete pulse (or salvo) of firepower in a discrete time model.

Second, Lanchester's equations include only offensive firepower, whereas the salvo equations also include defensive firepower. Given their small size and large number, it is not practical to intercept bullets and shells in a gun battle. By comparison, cruise missiles can be intercepted (shot down) by surface-to-air missiles and anti-aircraft guns. Therefore, missile combat models include those active defenses.
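The contrast can be made concrete with a one-round sketch of the basic salvo exchange (parameter names are ours; see Hughes' salvo model for the standard formulation): each side's losses are the attacker's well-aimed missiles minus the defender's interceptions, divided by the number of hits a unit can absorb.

```python
# Sketch of one exchange in a basic salvo model (discrete time, with defenses).
# Both sides fire simultaneously, so losses are computed from pre-salvo strengths.

def salvo_round(A, B, alpha, beta, defA, defB, stayA, stayB):
    """alpha/beta: well-aimed missiles per attacking A/B unit;
    defA/defB: incoming missiles each defending unit intercepts;
    stayA/stayB: hits needed to put one unit out of action."""
    lossB = max(0.0, alpha * A - defB * B) / stayB  # leakers through B's defense
    lossA = max(0.0, beta * B - defA * A) / stayA   # leakers through A's defense
    return max(A - lossA, 0.0), max(B - lossB, 0.0)

# 12 ships vs 10, symmetric weapons and defenses: one salvo can be decisive.
A1, B1 = salvo_round(12, 10, alpha=4, beta=4, defA=2, defB=2, stayA=2, stayB=2)
```

Unlike the continuous Lanchester stream, a single pulse of firepower here eliminates the smaller force outright, which is why such weapons call for a discrete-time model.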

Lanchester's law in use

Lanchester's laws have been used to model historical battles for research purposes. Examples include Pickett's Charge of Confederate infantry against Union infantry during the 1863 Battle of Gettysburg, [8] the 1940 Battle of Britain between the British and German air forces, [9] and the Battle of Kursk. [10]

In modern warfare, to take into account that both the linear and the square law often apply to some extent, an exponent of 1.5 is used. [11] [12] [3] :7-5–7-8 Lanchester's laws have also been used to model guerrilla warfare, [13] and have been applied to repeated battles with a range of inter-battle reinforcement strategies. [14]

Attempts have been made to apply Lanchester's laws to conflicts between animal groups. [15] Examples include tests with chimpanzees [16] and ants. The chimpanzee application was relatively successful. A study of Australian meat ants and Argentine ants confirmed the square law, [17] but a study of fire ants did not confirm the square law. [18]

Helmbold Parameters

The Helmbold Parameters provide quick, concise, exact numerical indices, soundly based on historical data, for comparing battles with respect to their bitterness and the degree to which one side had the advantage. While their definition is modeled after a solution of the Lanchester Square Law's differential equations, their numerical values are based entirely on the initial and final strengths of the opponents and in no way depend upon the validity of Lanchester's Square Law as a model of attrition during the course of a battle.

The solution of Lanchester's Square Law used here can be written in hyperbolic form as:

A(t) = A(0) cosh(√(αβ) t) − B(0) √(β/α) sinh(√(αβ) t)
B(t) = B(0) cosh(√(αβ) t) − A(0) √(α/β) sinh(√(αβ) t)

Where:

  - A(t) and B(t) are the strengths of the two sides at time t after the battle begins, and
  - α and β are the firepower coefficients defined above.

If the initial and final strengths of the two sides are known, it is possible to solve for the advantage and bitterness parameters. If the battle duration is also known, the intensity can be determined as well. [19] [20] [21]

If, as is normally the case, the battle is short enough that the hyperbolic functions can, without any significant error, be replaced by their series expansions up to first-order terms, and if each side's casualty fraction is abbreviated as the fraction of its initial force lost during the battle, then the approximate relations that hold make the bitterness parameter the geometric mean of the two casualty fractions. [22] That it is a kind of "average" (specifically, the geometric mean) of the casualty fractions justifies using it as an index of the bitterness of the battle.
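The bitterness index described above can be sketched in a few lines; the variable names here are ours, not Helmbold's notation.

```python
from math import sqrt

def bitterness(a0, aT, d0, dT):
    """Helmbold-style bitterness index: geometric mean of the casualty fractions.

    a0, aT: attacker's initial and final strengths
    d0, dT: defender's initial and final strengths
    """
    FA = 1.0 - aT / a0  # attacker's casualty fraction
    FD = 1.0 - dT / d0  # defender's casualty fraction
    return sqrt(FA * FD)

# Hypothetical battle: attacker loses 10%, defender loses 25%.
b = bitterness(1000, 900, 800, 600)  # ~0.158
```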

Statistical work prefers the natural logarithms of the Helmbold Parameters.

Major findings

See Helmbold (2021):

  1. The Helmbold advantage and bitterness parameters are statistically independent, i.e., they measure distinct features of a battle. [23]
  2. The probability that the defender wins is related to the defender's advantage parameter via a logistic function. [24] This function is almost exactly skew-symmetric about zero advantage, rising from near 0 at strongly negative values, through 1/2 at zero, to near 1 at strongly positive values. Because the probability of victory depends on the Helmbold advantage parameter rather than on the force ratio, it is clear that force ratio is an inferior and untrustworthy predictor of victory in battle.
  3. While the defender's advantage varies widely from one battle to the next, on average it has been practically constant since 1600 CE. [25]
  4. Most of the other battle parameters (specifically the initial force strengths, initial force ratios, casualty numbers, casualty exchange ratios, battle durations, and distances advanced by the attacker) have changed so slowly since 1600 CE that only the most acute observers would be likely to notice any change over their nominal 50-year military career. [26]
  5. Bitterness, casualty fractions, and intensity also changed slowly before 1939 CE. But since then they have followed a startlingly steeper declining curve. [27]

Some observers have noticed a similar post-WWII decline in casualties at the level of wars instead of battles. [28] [29] [30] [31]
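Finding 2 above can be sketched as follows. The slope constant here is a placeholder of our own (Helmbold fits its value from historical data), but the skew-symmetry and the limits of the logistic curve are generic.

```python
from math import exp

def p_defender_wins(advantage, slope=1.0):
    """Logistic link from the defender's advantage parameter to win probability.
    The slope is illustrative; Helmbold (2021) estimates it from battle data."""
    return 1.0 / (1.0 + exp(-slope * advantage))

# Skew-symmetric about zero advantage: P(x) + P(-x) == 1.
# P(0) = 0.5; P -> 0 for strongly negative advantage, -> 1 for strongly positive.
```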


References

  1. Lanchester F.W., Mathematics in Warfare in The World of Mathematics, Vol. 4 (1956) Ed. Newman, J.R., Simon and Schuster, 2138–2157; anthologised from Aircraft in Warfare (1916)
  2. Davis, Paul K. (1995). "Lanchester Equations and Scoring Systems". Aggregation, Disaggregation, and the 3:1 Rules in Ground Combat. Rand Corporation. doi:10.7249/MR638.
  3. Osipov, M. (1991) [1915]. "The Influence of the Numerical Strength of Engaged Forces on Their Casualties" [Влияние численности сражающихся сторон на их потери] (PDF). Voyenny sbornik [Military Collection]. Translated by Helmbold, Robert; Rehm, Allan. US Army Concepts Analysis Agency. Archived (PDF) from the original on 4 November 2021. Retrieved 23 January 2022.
  4. Wrigge, Staffan; Fransen, Ame; Wigg, Lars (September 1995). "The Lanchester Theory of Combat and Some Related Subjects" (PDF). FORSVARETS FORSKNINGSANSTALT.
  5. Christian, MAJ Joshua T. (23 May 2019). An Examination of Force Ratios (PDF). Fort Leavenworth, KS: US Army Command and General Staff College. This article incorporates public domain material from websites or documents of the United States Army.
  6. David A. Shlapak, and Michael W. Johnson, Reinforcing Deterrence on NATO’s Eastern Flank (Santa Monica, CA: RAND Corporation, 2016)
  7. Taylor JG. 1983. Lanchester Models of Warfare, volumes I & II. Operations Research Society of America.
  8. Armstrong MJ, Sodergren SE, 2015, Refighting Pickett's Charge: mathematical modeling of the Civil War battlefield, Social Science Quarterly.
  9. MacKay N, Price C, 2011, Safety in Numbers: Ideas of concentration in Royal Air Force fighter defence from Lanchester to the Battle of Britain, History 96, 304–325.
  10. Lucas, Thomas W.; Turkes, Turker (2004). "Fitting Lanchester equations to the battles of Kursk and Ardennes". Naval Research Logistics (NRL). 51 (1): 95–116. doi:10.1002/nav.10101. hdl:10945/44169. S2CID 4809135.
  11. Race to the Swift: Thoughts on Twenty-First Century Warfare by Richard E. Simpkin
  12. FOWLER, CHARLES A. "BERT" (1 March 2006). "Asymmetric Warfare: A Primer".
  13. Deitchman, S. J. (1962). "A Lanchester Model of Guerrilla Warfare". Operations Research. 10 (6): 818–827. doi:10.1287/opre.10.6.818. ISSN   0030-364X. JSTOR   168104.
  14. McCartney, M (2022). "The solution of Lanchester's equations with inter-battle reinforcement strategies". Physica A. 586 (1): 1–9. doi:10.1016/j.physa.2021.126477. ISSN   0378-4371.
  15. Clifton, E. (2020). A Brief Review on the Application of Lanchester's Models of Combat in Nonhuman Animals. Ecological Psychology, 32, 181-191. doi:10.1080/10407413.2020.1846456
  16. Wilson, M. L., Britton, N. F., & Franks, N. R. (2002). Chimpanzees and the mathematics of battle. Proceedings of the Royal Society B: Biological Sciences, 269, 1107-1112. doi:10.1098/rspb.2001.1926
  17. Lymbery, Samuel J. (2023). "Complex battlefields favor strong soldiers over large armies in social animal warfare". PNAS. 120 (37): e2217973120. Bibcode:2023PNAS..12017973L. doi:10.1073/pnas.2217973120. PMC 10500280. PMID 37639613. Retrieved 18 September 2023.
  18. Plowes, N. J. R., & Adams, E. S. (2005). An empirical test of Lanchester's square law: mortality during battles of the fire ant Solenopsis invicta. Proceedings of the Royal Society B: Biological Sciences, 272, 1809-1814. doi:10.1098/rspb.2005.3162
  19. Helmbold 1961a.
  20. Helmbold 1961b.
  21. Helmbold 2021, app A.
  22. Helmbold 2021, pp. 14–16, app A.
  23. Helmbold 2021, pp. 18–19.
  24. Helmbold 2021, pp. 17–18.
  25. Helmbold 2021, pp. 20, 68–69.
  26. Helmbold 2021, pp. 20, app C.
  27. Helmbold 2021, pp. 21, app C part 4.
  28. Lacina, Bethany & Nils Petter Gleditsch (2005) "Monitoring Trends in Global Combat: A New Dataset of Battle Deaths", European Journal of Population 21: 145–166
  29. Lacina, Bethany, Nils Petter Gleditsch, & Bruce Russett (2006) "The Declining Risk of Death in Battle", International Studies Quarterly 50(3), 673–680
  30. Lacina, Bethany & Nils Petter Gleditsch, (2012) Journal of Conflict Resolution 57(6) 1109-1127
  31. Lacina, Bethany & Nils Petter Gleditsch, (2012) "The Waning of War Is Real: A Response to Gohdes and Price", Journal of Conflict Resolution

Bibliography