Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. By convention, these applied methods are beyond simple geometry, such as differential and integral calculus, difference and differential equations, matrix algebra, mathematical programming, and other computational methods.^{ [1] }^{ [2] } Proponents of this approach claim that it allows the formulation of theoretical relationships with rigor, generality, and simplicity.^{ [3] }
Mathematics allows economists to form meaningful, testable propositions about wide-ranging and complex subjects which could less easily be expressed informally. Further, the language of mathematics allows economists to make specific, positive claims about controversial or contentious subjects that would be impossible without mathematics.^{ [4] } Much of economic theory is currently presented in terms of mathematical economic models, a set of stylized and simplified mathematical relationships asserted to clarify assumptions and implications.^{ [5] }
Broad applications include:

- optimization problems as to goal equilibrium, whether of a household, business firm, or policy maker
- static (or equilibrium) analysis in which the economic unit (such as a household) or economic system (such as a market or the economy) is modeled as not changing
- comparative statics as to a change from one equilibrium to another induced by a change in one or more factors
- dynamic analysis, tracing changes in an economic system over time, for example from economic growth.^{ [6] }
Formal economic modeling began in the 19th century with the use of differential calculus to represent and explain economic behavior, such as utility maximization, an early economic application of mathematical optimization. Economics became more mathematical as a discipline throughout the first half of the 20th century, but the introduction of new and generalized techniques in the period around the Second World War, as in game theory, would greatly broaden the use of mathematical formulations in economics.^{ [8] }^{ [7] }
This rapid systematizing of economics alarmed critics of the discipline as well as some noted economists. John Maynard Keynes, Robert Heilbroner, Friedrich Hayek and others have criticized the broad use of mathematical models for human behavior, arguing that some human choices are irreducible to mathematics.
The use of mathematics in the service of social and economic analysis dates back to the 17th century. Then, mainly in German universities, a style of instruction emerged which dealt specifically with detailed presentation of data as it related to public administration. Gottfried Achenwall lectured in this fashion, coining the term statistics. At the same time, a small group of professors in England established a method of "reasoning by figures upon things relating to government" and referred to this practice as Political Arithmetick.^{ [9] } Sir William Petty wrote at length on issues that would later concern economists, such as taxation, the velocity of money, and national income, but while his analysis was numerical, he rejected abstract mathematical methodology. Petty's use of detailed numerical data (along with John Graunt) would influence statisticians and economists for some time, even though Petty's works were largely ignored by English scholars.^{ [10] }
The mathematization of economics began in earnest in the 19th century. Most of the economic analysis of the time was what would later be called classical economics. Subjects were discussed and dispensed with through algebraic means, but calculus was not used. More importantly, until Johann Heinrich von Thünen's The Isolated State in 1826, economists did not develop explicit and abstract models for behavior in order to apply the tools of mathematics. Thünen's model of farmland use represents the first example of marginal analysis.^{ [11] } Thünen's work was largely theoretical, but he also mined empirical data in order to attempt to support his generalizations. In comparison to his contemporaries, Thünen built economic models and tools, rather than applying previous tools to new problems.^{ [12] }
Meanwhile, a new cohort of scholars trained in the mathematical methods of the physical sciences gravitated to economics, advocating and applying those methods to their subject,^{ [13] } and described today as moving from geometry to mechanics.^{ [14] } These included W.S. Jevons, who presented a paper on a "general mathematical theory of political economy" in 1862, providing an outline for the use of the theory of marginal utility in political economy.^{ [15] } In 1871, he published The Theory of Political Economy, declaring that the subject as science "must be mathematical simply because it deals with quantities." Jevons expected that only the collection of statistics for price and quantities would permit the subject as presented to become an exact science.^{ [16] } Others preceded and followed in expanding mathematical representations of economic problems.
Augustin Cournot and Léon Walras built the tools of the discipline axiomatically around utility, arguing that individuals sought to maximize their utility across choices in a way that could be described mathematically.^{ [17] } At the time, it was thought that utility was quantifiable, in units known as utils.^{ [18] } Cournot, Walras and Francis Ysidro Edgeworth are considered the precursors to modern mathematical economics.^{ [19] }
Cournot, a professor of mathematics, developed a mathematical treatment in 1838 for duopoly, a market condition defined by competition between two sellers.^{ [19] } This treatment of competition, first published in Researches into the Mathematical Principles of the Theory of Wealth,^{ [20] } is referred to as Cournot duopoly. It is assumed that both sellers have equal access to the market and can produce their goods without cost. Further, it is assumed that both goods are homogeneous. Each seller would vary her output based on the output of the other, and the market price would be determined by the total quantity supplied. The profit for each firm would be determined by multiplying its output by the per-unit market price. Differentiating the profit function with respect to quantity supplied for each firm left a system of linear equations, the simultaneous solution of which gave the equilibrium quantity, price and profits.^{ [21] } Cournot's contributions to the mathematization of economics would be neglected for decades, but eventually influenced many of the marginalists.^{ [21] }^{ [22] } Cournot's models of duopoly and oligopoly also represent one of the first formulations of non-cooperative games. Today the solution can be given as a Nash equilibrium, but Cournot's work preceded modern game theory by over 100 years.^{ [23] }
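The derivation can be restated in modern notation (ours, not Cournot's): let inverse demand be $P = a - b(q_1 + q_2)$ with zero production costs, so each seller $i$ earns $\pi_i = q_i P$. Differentiating and setting the result to zero gives the pair of linear first-order conditions

$$\frac{\partial \pi_i}{\partial q_i} = a - 2b q_i - b q_j = 0, \qquad i \neq j,$$

whose simultaneous solution is the equilibrium

$$q_1^* = q_2^* = \frac{a}{3b}, \qquad P^* = \frac{a}{3}, \qquad \pi_1^* = \pi_2^* = \frac{a^2}{9b}.$$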
While Cournot provided a solution for what would later be called partial equilibrium, Léon Walras attempted to formalize discussion of the economy as a whole through a theory of general competitive equilibrium. The behavior of every economic actor would be considered on both the production and consumption side. Walras originally presented four separate models of exchange, each recursively included in the next. The solution of the resulting system of equations (both linear and nonlinear) is the general equilibrium.^{ [24] } At the time, no general solution could be expressed for a system of arbitrarily many equations, but Walras's attempts produced two famous results in economics. The first is Walras' law and the second is the principle of tâtonnement. Walras' method was considered highly mathematical for the time and Edgeworth commented at length about this fact in his review of Éléments d'économie politique pure (Elements of Pure Economics).^{ [25] }
Walras' law was introduced as a theoretical answer to the problem of determining the solutions in general equilibrium. His notation is different from modern notation but can be constructed using more modern summation notation. Walras assumed that in equilibrium, all money would be spent on all goods: every good would be sold at the market price for that good and every buyer would expend their last dollar on a basket of goods. Starting from this assumption, Walras could then show that if there were n markets and n − 1 markets cleared (reached equilibrium conditions), then the nth market would clear as well. This is easiest to visualize with two markets (considered in most texts as a market for goods and a market for money). If one of two markets has reached an equilibrium state, no additional goods (or conversely, money) can enter or exit the second market, so it must be in a state of equilibrium as well. Walras used this statement to move toward a proof of existence of solutions to general equilibrium, but it is commonly used today to illustrate market clearing in money markets at the undergraduate level.^{ [26] }
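In modern summation notation (a standard restatement rather than Walras's own), the law says that at any prices $p$ the values of the excess demands $z_i(p)$ sum to zero:

$$\sum_{i=1}^{n} p_i \, z_i(p) = 0,$$

so if the first $n - 1$ markets clear, meaning $z_i(p) = 0$ for $i = 1, \dots, n-1$, and $p_n > 0$, then $z_n(p) = 0$ and the $n$th market clears as well.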
Tâtonnement (roughly, French for groping toward) was meant to serve as the practical expression of Walrasian general equilibrium. Walras abstracted the marketplace as an auction of goods where the auctioneer would call out prices and market participants would wait until they could each satisfy their personal reservation prices for the quantity desired (remembering here that this is an auction on all goods, so everyone has a reservation price for their desired basket of goods).^{ [27] }
Only when all buyers are satisfied with the given market price would transactions occur. The market would "clear" at that price—no surplus or shortage would exist. The word tâtonnement is used to describe the directions the market takes in groping toward equilibrium, settling high or low prices on different goods until a price is agreed upon for all goods. While the process appears dynamic, Walras only presented a static model, as no transactions would occur until all markets were in equilibrium. In practice very few markets operate in this manner.^{ [28] }
Edgeworth introduced mathematical elements to economics explicitly in Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences, published in 1881.^{ [29] } He adopted Jeremy Bentham's felicific calculus to economic behavior, allowing the outcome of each decision to be converted into a change in utility.^{ [30] } Using this assumption, Edgeworth built a model of exchange on three assumptions: individuals are self-interested, individuals act to maximize utility, and individuals are "free to recontract with another independently of...any third party."^{ [31] }
Given two individuals, the set of solutions where both individuals can maximize utility is described by the contract curve on what is now known as an Edgeworth box. Technically, the construction of the two-person solution to Edgeworth's problem was not developed graphically until 1924 by Arthur Lyon Bowley.^{ [33] } The contract curve of the Edgeworth box (or more generally on any set of solutions to Edgeworth's problem for more actors) is referred to as the core of an economy.^{ [34] }
Edgeworth devoted considerable effort to insisting that mathematical proofs were appropriate for all schools of thought in economics. While at the helm of The Economic Journal, he published several articles criticizing the mathematical rigor of rival researchers, including Edwin Robert Anderson Seligman, a noted skeptic of mathematical economics.^{ [35] } The articles focused on a back and forth over tax incidence and responses by producers. Edgeworth noticed that a monopoly producing a good with jointness of supply but not jointness of demand (such as first class and economy on an airplane: if the plane flies, both sets of seats fly with it) might actually lower the price seen by the consumer for one of the two commodities if a tax were applied. Common sense and more traditional, numerical analysis seemed to indicate that this was preposterous. Seligman insisted that the results Edgeworth achieved were a quirk of his mathematical formulation. He suggested that the assumption of a continuous demand function and an infinitesimal change in the tax resulted in the paradoxical predictions. Harold Hotelling later showed that Edgeworth was correct and that the same result (a "diminution of price as a result of the tax") could occur with a discontinuous demand function and large changes in the tax rate.^{ [36] }
From the late 1930s, an array of new mathematical tools from the differential calculus and differential equations, convex sets, and graph theory was deployed to advance economic theory in a way similar to the new mathematical methods earlier applied to physics.^{ [8] }^{ [37] } The process was later described as moving from mechanics to axiomatics.^{ [38] }
Vilfredo Pareto analyzed microeconomics by treating decisions by economic actors as attempts to change a given allotment of goods to another, more preferred allotment. Sets of allocations could then be treated as Pareto efficient (Pareto optimal is an equivalent term) when no exchanges could occur between actors that could make at least one individual better off without making any other individual worse off.^{ [39] } Pareto's proof is commonly conflated with Walrasian equilibrium or informally ascribed to Adam Smith's invisible hand hypothesis.^{ [40] } Rather, Pareto's statement was the first formal assertion of what would be known as the first fundamental theorem of welfare economics.^{ [41] } These models lacked the inequalities of the next generation of mathematical economics.
In the landmark treatise Foundations of Economic Analysis (1947), Paul Samuelson identified a common paradigm and mathematical structure across multiple fields in the subject, building on previous work by Alfred Marshall. Foundations took mathematical concepts from physics and applied them to economic problems. This broad view (for example, comparing Le Chatelier's principle to tâtonnement) drives the fundamental premise of mathematical economics: systems of economic actors may be modeled and their behavior described much like any other system. This extension followed on the work of the marginalists in the previous century and extended it significantly. Samuelson approached the problems of applying individual utility maximization over aggregate groups with comparative statics, which compares two different equilibrium states after an exogenous change in a variable. This and other methods in the book provided the foundation for mathematical economics in the 20th century.^{ [7] }^{ [42] }
Restricted models of general equilibrium were formulated by John von Neumann in 1937.^{ [43] } Unlike earlier versions, the models of von Neumann had inequality constraints. For his model of an expanding economy, von Neumann proved the existence and uniqueness of an equilibrium using his generalization of Brouwer's fixed point theorem. Von Neumann's model of an expanding economy considered the matrix pencil A − λB with nonnegative matrices A and B; von Neumann sought probability vectors p and q and a positive number λ that would solve the complementarity equation

$$p^{T} (A - \lambda B)\, q = 0,$$
along with two inequality systems expressing economic efficiency. In this model, the (transposed) probability vector p represents the prices of the goods while the probability vector q represents the "intensity" at which the production process would run. The unique solution λ represents the rate of growth of the economy, which equals the interest rate. Proving the existence of a positive growth rate and proving that the growth rate equals the interest rate were remarkable achievements, even for von Neumann.^{ [44] }^{ [45] }^{ [46] } Von Neumann's results have been viewed as a special case of linear programming, where von Neumann's model uses only nonnegative matrices.^{ [47] } The study of von Neumann's model of an expanding economy continues to interest mathematical economists with interests in computational economics.^{ [48] }^{ [49] }^{ [50] }
In 1936, the Russian-born economist Wassily Leontief built his model of input-output analysis from the 'material balance' tables constructed by Soviet economists, which themselves followed earlier work by the physiocrats. With his model, which described a system of production and demand processes, Leontief described how changes in demand in one economic sector would influence production in another.^{ [51] } In practice, Leontief estimated the coefficients of his simple models to address economically interesting questions. In production economics, "Leontief technologies" produce outputs using constant proportions of inputs, regardless of the price of inputs, reducing the value of Leontief models for understanding economies but allowing their parameters to be estimated relatively easily. In contrast, the von Neumann model of an expanding economy allows for choice of techniques, but the coefficients must be estimated for each technology.^{ [52] }^{ [53] }
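As an illustration, the open Leontief model reduces to solving a linear system: with a technical-coefficient matrix A and a final-demand vector d, gross outputs x satisfy x = Ax + d. A minimal sketch in Python, with hypothetical coefficients chosen only for the example:

```python
import numpy as np

# Hypothetical 2-sector technical coefficients: A[i, j] is the amount of
# sector i's output used up in producing one unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])

# Final (external) demand for each sector's output.
d = np.array([10.0, 20.0])

# Open Leontief model: gross output must cover intermediate use plus final
# demand, x = A @ x + d, i.e. (I - A) @ x = d.
x = np.linalg.solve(np.eye(2) - A, d)

print(x)  # gross outputs, here [25.0, 33.33...]
```

The same solve underlies large-scale input-output tables; only the size of A changes.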
In mathematics, mathematical optimization (or optimization or mathematical programming) refers to the selection of a best element from some set of available alternatives.^{ [54] } In the simplest case, an optimization problem involves maximizing or minimizing a real function by selecting input values of the function and computing the corresponding values of the function. The solution process includes satisfying general necessary and sufficient conditions for optimality. For optimization problems, specialized notation may be used as to the function and its input(s). More generally, optimization includes finding the best available element of some function given a defined domain and may use a variety of different computational optimization techniques.^{ [55] }
Economics is closely enough linked to optimization by agents in an economy that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses.^{ [56] } Optimization problems run through modern economics, many with explicit economic or technical constraints. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem for a given level of utility, are economic optimization problems.^{ [57] } Theory posits that consumers maximize their utility, subject to their budget constraints and that firms maximize their profits, subject to their production functions, input costs, and market demand.^{ [58] }
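In a standard modern statement (a textbook formulation, not tied to any particular source above), the primal and dual problems read:

$$\max_{x \ge 0} \; u(x) \;\; \text{subject to} \;\; p \cdot x \le m, \qquad\qquad \min_{x \ge 0} \; p \cdot x \;\; \text{subject to} \;\; u(x) \ge \bar u,$$

where $x$ is a consumption bundle, $p$ a price vector, $m$ income, and $\bar u$ a required utility level; under standard regularity conditions the two problems pick out the same bundles.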
Economic equilibrium is studied in optimization theory as a key ingredient of economic theorems that in principle could be tested against empirical data.^{ [7] }^{ [59] } Newer developments have occurred in dynamic programming and modeling optimization with risk and uncertainty, including applications to portfolio theory, the economics of information, and search theory.^{ [58] }
Optimality properties for an entire market system may be stated in mathematical terms, as in formulation of the two fundamental theorems of welfare economics ^{ [60] } and in the Arrow–Debreu model of general equilibrium (also discussed below).^{ [61] } More concretely, many problems are amenable to analytical (formulaic) solution. Many others may be sufficiently complex to require numerical methods of solution, aided by software.^{ [55] } Still others are complex but tractable enough to allow computable methods of solution, in particular computable general equilibrium models for the entire economy.^{ [62] }
Linear and nonlinear programming have profoundly affected microeconomics, which had earlier considered only equality constraints.^{ [63] } Many of the mathematical economists who received Nobel Prizes in Economics had conducted notable research using linear programming: Leonid Kantorovich, Leonid Hurwicz, Tjalling Koopmans, Kenneth J. Arrow, Robert Dorfman, Paul Samuelson and Robert Solow.^{ [64] } Both Kantorovich and Koopmans acknowledged that George B. Dantzig deserved to share their Nobel Prize for linear programming. Economists who conducted research in nonlinear programming also have won the Nobel prize, notably Ragnar Frisch in addition to Kantorovich, Hurwicz, Koopmans, Arrow, and Samuelson.
Linear programming was developed to aid the allocation of resources in firms and in industries during the 1930s in Russia and during the 1940s in the United States. During the Berlin airlift (1948), linear programming was used to plan the shipment of supplies to prevent Berlin from starving after the Soviet blockade.^{ [65] }^{ [66] }
Extensions to nonlinear optimization with inequality constraints were achieved in 1951 by Albert W. Tucker and Harold Kuhn, who considered the nonlinear optimization problem:

$$\min_{x} \; f(x) \quad \text{subject to} \quad g_i(x) \le 0, \; i = 1, \dots, m,$$

where $f$ is the objective function and the $g_i$ are the inequality-constraint functions.
In allowing inequality constraints, the Kuhn–Tucker approach generalized the classic method of Lagrange multipliers, which (until then) had allowed only equality constraints.^{ [67] } The Kuhn–Tucker approach inspired further research on Lagrangian duality, including the treatment of inequality constraints.^{ [68] }^{ [69] } The duality theory of nonlinear programming is particularly satisfactory when applied to convex minimization problems, which enjoy the convex-analytic duality theory of Fenchel and Rockafellar; this convex duality is particularly strong for polyhedral convex functions, such as those arising in linear programming. Lagrangian duality and convex analysis are used daily in operations research, in the scheduling of power plants, the planning of production schedules for factories, and the routing of airlines (routes, flights, planes, crews).^{ [69] }
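For the minimization problem sketched above, the first-order (Kuhn–Tucker) conditions at a candidate optimum $x^*$, with multipliers $\mu_i \ge 0$, are:

$$\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \,\nabla g_i(x^*) = 0, \qquad g_i(x^*) \le 0, \qquad \mu_i \ge 0, \qquad \mu_i \, g_i(x^*) = 0.$$

When every constraint binds as an equality, the complementary-slackness conditions become vacuous and the system reduces to the classical method of Lagrange multipliers.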
Economic dynamics allows for changes in economic variables over time, including in dynamic systems. The problem of finding optimal functions for such changes is studied in variational calculus and in optimal control theory. Before the Second World War, Frank Ramsey and Harold Hotelling used the calculus of variations to that end.
Following Richard Bellman's work on dynamic programming and the 1962 English translation of L. Pontryagin et al.'s earlier work,^{ [70] } optimal control theory was used more extensively in economics in addressing dynamic problems, especially as to economic growth equilibrium and stability of economic systems,^{ [71] } of which a textbook example is optimal consumption and saving.^{ [72] } A crucial distinction is between deterministic and stochastic control models.^{ [73] } Other applications of optimal control theory include those in finance, inventories, and production for example.^{ [74] }
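As a sketch of the textbook consumption and saving example, in standard Ramsey-style notation rather than that of any cited text: a planner chooses a consumption path $c(t)$ to maximize $\int_0^{\infty} e^{-\rho t} u(c(t))\, dt$ subject to the capital accumulation constraint $\dot k = f(k) - c - \delta k$. The current-value Hamiltonian and the conditions delivered by the maximum principle are:

$$H = u(c) + \lambda \left[ f(k) - c - \delta k \right], \qquad u'(c) = \lambda, \qquad \dot\lambda = \left[ \rho + \delta - f'(k) \right] \lambda,$$

together with the accumulation constraint and a transversality condition; solving this system characterizes the optimal saving path.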
It was in the course of proving the existence of an optimal equilibrium in his 1937 model of economic growth that John von Neumann introduced functional analytic methods to include topology in economic theory, in particular, fixed-point theory through his generalization of Brouwer's fixed-point theorem.^{ [8] }^{ [43] }^{ [75] } Following von Neumann's program, Kenneth Arrow and Gérard Debreu formulated abstract models of economic equilibria using convex sets and fixed-point theory. In introducing the Arrow–Debreu model in 1954, they proved the existence (but not the uniqueness) of an equilibrium and also proved that every Walras equilibrium is Pareto efficient; in general, equilibria need not be unique.^{ [76] } In their models, the ("primal") vector space represented quantities while the "dual" vector space represented prices.^{ [77] }
In Russia, the mathematician Leonid Kantorovich developed economic models in partially ordered vector spaces that emphasized the duality between quantities and prices.^{ [78] } Kantorovich renamed prices "objectively determined valuations", abbreviated in Russian as "o. o. o.", alluding to the difficulty of discussing prices in the Soviet Union.^{ [77] }^{ [79] }^{ [80] }
Even in finite dimensions, the concepts of functional analysis have illuminated economic theory, particularly in clarifying the role of prices as normal vectors to a hyperplane supporting a convex set, representing production or consumption possibilities. However, problems of describing optimization over time or under uncertainty require the use of infinite-dimensional function spaces, because agents are choosing among functions or stochastic processes.^{ [77] }^{ [81] }^{ [82] }^{ [83] }
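Concretely, a price vector $p$ supports a production set $Y$ at a profit-maximizing plan $y^*$ when

$$p \cdot y \le p \cdot y^* \quad \text{for all } y \in Y,$$

so $p$ is a normal vector to the hyperplane $\{\, y : p \cdot y = p \cdot y^* \,\}$, and no feasible plan in $Y$ earns higher profit at those prices.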
John von Neumann's work on functional analysis and topology broke new ground in mathematics and economic theory.^{ [43] }^{ [84] } It also left advanced mathematical economics with fewer applications of differential calculus. In particular, general equilibrium theorists used general topology, convex geometry, and optimization theory more than differential calculus, because the approach of differential calculus had failed to establish the existence of an equilibrium.
However, the decline of differential calculus should not be exaggerated, because differential calculus has always been used in graduate training and in applications. Moreover, differential calculus has returned to the highest levels of mathematical economics, general equilibrium theory (GET), as practiced by the "GET-set" (the humorous designation due to Jacques H. Drèze). In the 1960s and 1970s, Gérard Debreu and Stephen Smale led a revival of the use of differential calculus in mathematical economics. In particular, they were able to prove the existence of a general equilibrium where earlier writers had failed, because of their novel mathematics: Baire category from general topology and Sard's lemma from differential topology. Other economists associated with the use of differential analysis include Egbert Dierker, Andreu Mas-Colell, and Yves Balasko.^{ [85] }^{ [86] } These advances have changed the traditional narrative of the history of mathematical economics, which, following von Neumann, had celebrated the abandonment of differential calculus.
John von Neumann, working with Oskar Morgenstern on the theory of games, broke new mathematical ground in 1944 by extending functional analytic methods related to convex sets and topological fixed-point theory to economic analysis.^{ [8] }^{ [84] } Their work thereby avoided the traditional differential calculus, for which the maximum operator did not apply to nondifferentiable functions. Continuing von Neumann's work in cooperative game theory, game theorists Lloyd S. Shapley, Martin Shubik, Hervé Moulin, Nimrod Megiddo, and Bezalel Peleg influenced economic research in politics and economics. For example, research on fair prices in cooperative games and fair values for voting games led to changed rules for voting in legislatures and for accounting for the costs in public-works projects. Cooperative game theory was used, for instance, in designing the water distribution system of southern Sweden and in setting rates for dedicated telephone lines in the USA.
Earlier neoclassical theory had bounded only the range of bargaining outcomes, and then only in special cases, for example bilateral monopoly or along the contract curve of the Edgeworth box.^{ [87] } Von Neumann and Morgenstern's results were similarly weak. Following von Neumann's program, however, John Nash used fixed-point theory to prove conditions under which the bargaining problem and non-cooperative games can generate a unique equilibrium solution.^{ [88] } Non-cooperative game theory has been adopted as a fundamental aspect of experimental economics,^{ [89] } behavioral economics,^{ [90] } information economics,^{ [91] } industrial organization,^{ [92] } and political economy.^{ [93] } It has also given rise to the subject of mechanism design (sometimes called reverse game theory), which has private and public-policy applications as to ways of improving economic efficiency through incentives for information sharing.^{ [94] }
In 1994, Nash, John Harsanyi, and Reinhard Selten received the Nobel Memorial Prize in Economic Sciences for their work on non-cooperative games. Harsanyi was recognized for his analysis of games with incomplete information, and Selten for his refinements of equilibrium analysis in dynamic games. Later work extended their results to computational methods of modeling.^{ [95] }
Agent-based computational economics (ACE) as a named field is relatively recent, dating from about the 1990s as to published work. It studies economic processes, including whole economies, as dynamic systems of interacting agents over time. As such, it falls in the paradigm of complex adaptive systems.^{ [96] } In corresponding agent-based models, agents are not real people but "computational objects modeled as interacting according to rules" ... "whose micro-level interactions create emergent patterns" in space and time.^{ [97] } The rules are formulated to predict behavior and social interactions based on incentives and information. The theoretical assumption of mathematical optimization by agents in markets is replaced by the less restrictive postulate of agents with bounded rationality adapting to market forces.^{ [98] }
ACE models apply numerical methods of analysis to computer-based simulations of complex dynamic problems for which more conventional methods, such as theorem formulation, may not find ready use.^{ [99] } Starting from specified initial conditions, the computational economic system is modeled as evolving over time as its constituent agents repeatedly interact with each other. In these respects, ACE has been characterized as a bottom-up culture-dish approach to the study of the economy.^{ [100] } In contrast to other standard modeling methods, ACE events are driven solely by initial conditions, whether or not equilibria exist or are computationally tractable. ACE modeling, however, includes agent adaptation, autonomy, and learning.^{ [101] } It has a similarity to, and overlap with, game theory as an agent-based method for modeling social interactions.^{ [95] } Other dimensions of the approach include such standard economic subjects as competition and collaboration,^{ [102] } market structure and industrial organization,^{ [103] } transaction costs,^{ [104] } welfare economics^{ [105] } and mechanism design,^{ [94] } information and uncertainty,^{ [106] } and macroeconomics.^{ [107] }^{ [108] }
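A minimal sketch in Python of how such a model can be set up; the agents, adjustment rules, and parameters here are entirely hypothetical, chosen only to illustrate the bottom-up mechanism:

```python
import random

random.seed(0)

# Ten sellers post prices; each period, fifty buyers with private reservation
# values each try to buy one unit from the cheapest acceptable seller.
prices = [random.uniform(5.0, 15.0) for _ in range(10)]
STOCK_PER_SELLER = 5

for period in range(200):
    stock = [STOCK_PER_SELLER] * len(prices)
    for _ in range(50):
        reservation = random.uniform(8.0, 12.0)
        candidates = [i for i in range(len(prices))
                      if stock[i] > 0 and prices[i] <= reservation]
        if candidates:
            cheapest = min(candidates, key=lambda i: prices[i])
            stock[cheapest] -= 1
    # Bounded-rationality adaptation: a seller who sold out tries a higher
    # price next period; one left with inventory tries a lower price.
    for i in range(len(prices)):
        prices[i] *= 1.05 if stock[i] == 0 else 0.95

print(sorted(round(p, 2) for p in prices))  # posted prices after adaptation
```

No market-clearing condition is imposed anywhere in the loop; whatever price regularity emerges is a property of the simulated interactions, which is the sense in which ACE is a culture-dish method.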
The method is said to benefit from continuing improvements in modeling techniques of computer science and increased computer capabilities. Open issues include those common to experimental economics in general^{ [109] } and by comparison,^{ [110] } as well as the development of a common framework for empirical validation and for resolving open questions in agent-based modeling.^{ [111] } The ultimate scientific objective of the method has been described as "test[ing] theoretical findings against real-world data in ways that permit empirically supported theories to cumulate over time, with each researcher's work building appropriately on the work that has gone before."^{ [112] }
Over the course of the 20th century, articles in "core journals"^{ [114] } in economics have been almost exclusively written by economists in academia. As a result, much of the material transmitted in those journals relates to economic theory, and "economic theory itself has been continuously more abstract and mathematical."^{ [115] } A subjective assessment of mathematical techniques^{ [116] } employed in these core journals showed a decrease in articles that use neither geometric representations nor mathematical notation from 95% in 1892 to 5.3% in 1990.^{ [117] } A 2007 survey of ten of the top economic journals finds that only 5.8% of the articles published in 2003 and 2004 both lacked statistical analysis of data and lacked displayed mathematical expressions that were indexed with numbers at the margin of the page.^{ [118] }
Between the world wars, advances in mathematical statistics and a cadre of mathematically trained economists led to econometrics, which was the name proposed for the discipline of advancing economics by using mathematics and statistics. Within economics, "econometrics" has often been used for statistical methods in economics, rather than mathematical economics. Statistical econometrics features the application of linear regression and time series analysis to economic data.
Ragnar Frisch coined the word "econometrics" and helped to found both the Econometric Society in 1930 and the journal Econometrica in 1933.^{ [119] }^{ [120] } A student of Frisch's, Trygve Haavelmo, published The Probability Approach in Econometrics in 1944, where he asserted that precise statistical analysis could be used as a tool to validate mathematical theories about economic actors with data from complex sources.^{ [121] } This linking of statistical analysis of systems to economic theory was also promulgated by the Cowles Commission (now the Cowles Foundation) throughout the 1930s and 1940s.^{ [122] }
The roots of modern econometrics can be traced to the American economist Henry L. Moore. Moore studied agricultural productivity and attempted to fit changing values of productivity for plots of corn and other crops to a curve using different values of elasticity. Moore made several errors in his work, some from his choice of models and some from limitations in his use of mathematics. The accuracy of Moore's models also was limited by the poor data for national accounts in the United States at the time. While his first models of production were static, in 1925 he published a dynamic "moving equilibrium" model designed to explain business cycles—this periodic variation from overcorrection in supply and demand curves is now known as the cobweb model. A more formal derivation of this model was made later by Nicholas Kaldor, who is largely credited for its exposition.^{ [123] }
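In a standard modern rendering (not Moore's or Kaldor's original notation), the cobweb model makes supply depend on last period's price while demand depends on the current price:

$$Q_t^d = a - b\,p_t, \qquad Q_t^s = c + d\,p_{t-1}, \qquad Q_t^d = Q_t^s \;\Rightarrow\; p_t = \frac{a - c}{b} - \frac{d}{b}\, p_{t-1},$$

so the price path spirals toward the equilibrium when $d/b < 1$ and away from it when $d/b > 1$, tracing the cobweb pattern that gives the model its name.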
Much of classical economics can be presented in simple geometric terms or elementary mathematical notation. Mathematical economics, however, conventionally makes use of calculus and matrix algebra in economic analysis in order to make powerful claims that would be more difficult without such mathematical tools. These tools are prerequisites for formal study, not only in mathematical economics but in contemporary economic theory in general. Economic problems often involve so many variables that mathematics is the only practical way of attacking and solving them. Alfred Marshall argued that every economic problem which can be quantified, analytically expressed and solved, should be treated by means of mathematical work.^{ [125] }
Economics has become increasingly dependent upon mathematical methods and the mathematical tools it employs have become more sophisticated. As a result, mathematics has become considerably more important to professionals in economics and finance. Graduate programs in both economics and finance require strong undergraduate preparation in mathematics for admission and, for this reason, attract an increasingly high number of mathematicians. Applied mathematicians apply mathematical principles to practical problems, such as economic analysis and other economics-related issues, and many economic problems fall within the scope of applied mathematics.^{ [17] }
This integration results from the formulation of economic problems as stylized models with clear assumptions and falsifiable predictions. This modeling may be informal or prosaic, as it was in Adam Smith's The Wealth of Nations , or it may be formal, rigorous and mathematical.
Broadly speaking, formal economic models may be classified as stochastic or deterministic and as discrete or continuous. At a practical level, quantitative modeling is applied to many areas of economics and several methodologies have evolved more or less independently of each other.^{ [126] }
The great appeal of mathematical economics is that it brings a degree of rigor to economic thinking, particularly around charged political topics. For example, during the discussion of the efficacy of a corporate tax cut for increasing the wages of workers, a simple mathematical model proved beneficial to understanding the issues at hand.
As an intellectual exercise, the following problem was posed by Prof. Greg Mankiw of Harvard University:^{ [127] }
An open economy has the production function $y = f(k)$, where $y$ is output per worker and $k$ is capital per worker. The capital stock adjusts so that the after-tax marginal product of capital equals the exogenously given world interest rate $r$ ... How much will the tax cut increase wages?
To answer this question, we follow John H. Cochrane of the Hoover Institution.^{ [128] } Suppose an open economy has the production function:

$$y = f(k)$$

where the variables in this equation are:

- $y$: output per worker
- $k$: capital per worker
The standard choice for the production function is the Cobb-Douglas production function:

$$y = A k^{\alpha},$$

where $A$ is the factor of productivity, assumed to be a constant. A corporate tax cut in this model is equivalent to a tax on capital. With taxes, firms look to maximize:

$$\max_{K,\,L} \; (1-\tau)\left[ F(K,L) - wL \right] - rK,$$

where $\tau$ is the capital tax rate, $w$ is wages per worker, and $r$ is the exogenous interest rate. Then the first-order optimality conditions become:

$$\frac{\partial}{\partial K}: \; (1-\tau)\, F_K(K,L) = r, \qquad \frac{\partial}{\partial L}: \; F_L(K,L) = w.$$

Therefore, the optimality conditions imply that, in per-worker terms:

$$(1-\tau)\, f'(k) = r, \qquad w = f(k) - k f'(k).$$

Define total taxes $T = \tau \left[ F(K,L) - wL \right]$. This implies that taxes per worker $x = T/L$ are:

$$x = \tau \left[\, f(k) - w \,\right] = \tau\, k f'(k).$$
Then the change in taxes per worker, given the tax rate, is:

$$\frac{dx}{d\tau} = k f'(k) + \tau \left[\, f'(k) + k f''(k) \,\right] \frac{dk}{d\tau}.$$

To find the change in wages, we differentiate the second optimality condition for the per-worker wages to obtain:

$$\frac{dw}{d\tau} = \left[\, f'(k) - f'(k) - k f''(k) \,\right] \frac{dk}{d\tau} = -k f''(k)\, \frac{dk}{d\tau}.$$

Assuming that the interest rate is fixed at $r$, so that $dr/d\tau = 0$, we may differentiate the first optimality condition for the interest rate to find:

$$\frac{dk}{d\tau} = \frac{f'(k)}{(1-\tau)\, f''(k)}, \qquad \text{so that} \qquad \frac{dw}{d\tau} = -\frac{k f'(k)}{1-\tau}.$$

For the moment, let's focus only on the static effect of a capital tax cut, scoring revenue with the capital stock held fixed, so that $dk/d\tau = 0$ in the revenue calculation and:

$$\frac{dx}{d\tau}\bigg|_{\text{static}} = k f'(k).$$

If we substitute this equation into the equation for wage changes with respect to the tax rate, then we find that the static effect of a capital tax cut on wages is:

$$\frac{dw}{dx}\bigg|_{\text{static}} = \frac{dw/d\tau}{\left. dx/d\tau \right|_{\text{static}}} = \frac{-k f'(k)/(1-\tau)}{k f'(k)} = -\frac{1}{1-\tau}.$$
Based on the model, it seems possible that we may achieve a rise in the wage of a worker greater than the amount of the tax cut, since $1/(1-\tau) > 1$. But that only considers the static effect, and we know that the dynamic effect must be accounted for. In the dynamic model, we may rewrite the equation for changes in taxes per worker with respect to the tax rate as:

$$\frac{dx}{d\tau} = k f'(k) + \tau \left[\, f'(k) + k f''(k) \,\right] \frac{dk}{d\tau}.$$

Recalling that $dk/d\tau = f'(k)/\left[(1-\tau) f''(k)\right]$, we have that:

$$\frac{dx}{d\tau} = k f'(k) + \frac{\tau f'(k) \left[\, f'(k) + k f''(k) \,\right]}{(1-\tau)\, f''(k)}.$$

Using the Cobb-Douglas production function, for which $f'(k) + k f''(k) = \alpha f'(k)$ and $k f''(k) = (\alpha - 1) f'(k)$, we have that:

$$\frac{dx}{d\tau} = k f'(k) \left[ 1 + \frac{\tau \alpha}{(1-\tau)(\alpha - 1)} \right] = k f'(k)\, \frac{\alpha - 1 + \tau}{(1-\tau)(\alpha - 1)}.$$

Therefore, the dynamic effect of a capital tax cut on wages is:

$$\frac{dw}{dx}\bigg|_{\text{dynamic}} = \frac{-k f'(k)/(1-\tau)}{k f'(k)\, \dfrac{\alpha - 1 + \tau}{(1-\tau)(\alpha - 1)}} = -\frac{1}{1 - \dfrac{\tau}{1-\alpha}}.$$

If we take $0 < \alpha < 1$, so that $\tau/(1-\alpha) > \tau$, then the dynamic effect of lowering capital taxes on wages will be even larger than the static effect. Moreover, if there are positive externalities to capital accumulation, the effect of the tax cut on wages would be larger than in the model we just derived. It is important to note that the result is a combination of two ingredients: the small-open-economy assumption, which fixes the interest rate so that the capital stock fully adjusts, and a positive initial tax rate, so that a cut also recovers part of the existing distortion.

This result showing that, under certain assumptions, a corporate tax cut can boost the wages of workers by more than the lost revenue does not imply that the magnitude is correct. Rather, it suggests a basis for policy analysis that is not grounded in handwaving. If the assumptions are reasonable, then the model is an acceptable approximation of reality; if they are not, then better models should be developed.
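As a consistency check on the Cobb-Douglas algebra above, the chain of derivatives can be verified symbolically. This is a sketch using the sympy library, with variable names chosen here for convenience:

```python
import sympy as sp

k, tau, alpha, A = sp.symbols('k tau alpha A', positive=True)
f = A * k**alpha                      # Cobb-Douglas output per worker

fp = sp.diff(f, k)                    # f'(k)
fpp = sp.diff(f, k, 2)                # f''(k)

dk_dtau = fp / ((1 - tau) * fpp)      # from differentiating (1 - tau) f'(k) = r
dw_dtau = -k * fpp * dk_dtau          # from differentiating w = f(k) - k f'(k)
dx_dtau = k * fp + tau * (fp + k * fpp) * dk_dtau  # from x = tau k f'(k)

# Should simplify to an expression equivalent to -1/(1 - tau/(1 - alpha))
print(sp.simplify(dw_dtau / dx_dtau))
```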
Now let's assume that instead of the Cobb-Douglas production function we have a more general constant elasticity of substitution (CES) production function:

$$f(k) = A \left[\, \alpha k^{\rho} + (1-\alpha) \,\right]^{1/\rho},$$

where $\rho = \frac{\sigma - 1}{\sigma}$ and $\sigma$ is the elasticity of substitution between capital and labor. The relevant quantity we want to calculate is $k f''(k)/f'(k)$, which may be derived as:

$$\frac{k f''(k)}{f'(k)} = (\rho - 1)\, \frac{1-\alpha}{\alpha k^{\rho} + (1-\alpha)} = -\frac{1 - s_K}{\sigma},$$

where $s_K = \dfrac{\alpha k^{\rho}}{\alpha k^{\rho} + (1-\alpha)}$ is capital's share of output. Therefore, we may use this to find that:

$$\frac{dx}{d\tau} = k f'(k) \left[ 1 + \frac{\tau}{1-\tau} \left( 1 + \frac{f'(k)}{k f''(k)} \right) \right] = k f'(k)\; \frac{1 - \dfrac{\tau \sigma}{1 - s_K}}{1-\tau}.$$

Therefore, under a general CES model, the dynamic effect of a capital tax cut on wages is:

$$\frac{dw}{dx} = -\frac{1}{1 - \dfrac{\tau \sigma}{1 - s_K}}.$$

We recover the Cobb-Douglas solution when $\sigma = 1$, in which case $s_K = \alpha$. When $\sigma \to \infty$, which is the case when capital and labor are perfect substitutes, we find that $dw/dx \to 0$: there is no effect of changes in capital taxes on wages. And when $\sigma \to 0$, which is the case when capital and labor are perfect complements, we find that $dw/dx = -1$: a cut in capital taxes increases wages by exactly one dollar.
According to the Mathematics Subject Classification (MSC), mathematical economics falls into the Applied mathematics/other classification of category 91, "Game theory, economics, social and behavioral sciences",
with MSC2010 classifications for 'Game theory' at codes 91Axx and for 'Mathematical economics' at codes 91Bxx.
The Handbook of Mathematical Economics series (Elsevier), currently 4 volumes, distinguishes between mathematical methods in economics, v. 1, Part I, and areas of economics in other volumes where mathematics is employed.^{ [129] }
Another source with a similar distinction is The New Palgrave: A Dictionary of Economics (1987, 4 vols., 1,300 subject entries). In it, a "Subject Index" includes mathematical entries under two headings (vol. IV, pp. 982–3).
A widely used system in economics that includes mathematical methods on the subject is the JEL classification codes. It originated in the Journal of Economic Literature for classifying new books and articles. The relevant categories fall under JEL: C (Mathematical and quantitative methods), simplified here to omit "Miscellaneous" and "Other" JEL codes. The New Palgrave Dictionary of Economics (2008, 2nd ed.) also uses the JEL codes to classify its entries. The corresponding footnotes below have links to abstracts of The New Palgrave Online for each JEL category (10 or fewer per page, similar to Google searches).
Friedrich Hayek contended that the use of formal techniques projects a scientific exactness that does not appropriately account for informational limitations faced by real economic agents.^{ [141] }
In an interview in 1999, the economic historian Robert Heilbroner stated:^{ [142] }
I guess the scientific approach began to penetrate and soon dominate the profession in the past twenty to thirty years. This came about in part because of the "invention" of mathematical analysis of various kinds and, indeed, considerable improvements in it. This is the age in which we have not only more data but more sophisticated use of data. So there is a strong feeling that this is a dataladen science and a dataladen undertaking, which, by virtue of the sheer numerics, the sheer equations, and the sheer look of a journal page, bears a certain resemblance to science . . . That one central activity looks scientific. I understand that. I think that is genuine. It approaches being a universal law. But resembling a science is different from being a science.
Heilbroner stated that "some/much of economics is not naturally quantitative and therefore does not lend itself to mathematical exposition."^{ [143] }
Philosopher Karl Popper discussed the scientific standing of economics in the 1940s and 1950s. He argued that mathematical economics suffered from being tautological. In other words, insofar as economics became a mathematical theory, mathematical economics ceased to rely on empirical refutation but rather relied on mathematical proofs and disproofs.^{ [144] } According to Popper, falsifiable assumptions can be tested by experiment and observation while unfalsifiable assumptions can be explored mathematically for their consequences and for their consistency with other assumptions.^{ [145] }
Sharing Popper's concerns about assumptions in economics generally, and not just mathematical economics, Milton Friedman declared that "all assumptions are unrealistic". Friedman proposed judging economic models by their predictive performance rather than by the match between their assumptions and reality.^{ [146] }
Considering mathematical economics, J.M. Keynes wrote in The General Theory:^{ [147] }
It is a great fault of symbolic pseudomathematical methods of formalising a system of economic analysis ... that they expressly assume strict independence between the factors involved and lose their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating and know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are merely concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.
In response to these criticisms, Paul Samuelson argued that mathematics is a language, repeating a thesis of Josiah Willard Gibbs. In economics, the language of mathematics is sometimes necessary for representing substantive problems. Moreover, mathematical economics has led to conceptual advances in economics.^{ [148] } In particular, Samuelson gave the example of microeconomics, writing that "few people are ingenious enough to grasp [its] more complex parts... without resorting to the language of mathematics, while most ordinary individuals can do so fairly easily with the aid of mathematics."^{ [149] }
Some economists state that mathematical economics deserves support just like other forms of mathematics, particularly its neighbors in mathematical optimization and mathematical statistics and increasingly in theoretical computer science. Mathematical economics and other mathematical sciences have a history in which theoretical advances have regularly contributed to the reform of the more applied branches of economics. In particular, following the program of John von Neumann, game theory now provides the foundations for describing much of applied economics, from statistical decision theory (as "games against nature") and econometrics to general equilibrium theory and industrial organization. In the last decade, with the rise of the internet, mathematical economists, optimization experts, and computer scientists have worked on problems of pricing for online services, with contributions using mathematics from cooperative game theory, nondifferentiable optimization, and combinatorial games.
Robert M. Solow concluded that mathematical economics was the core "infrastructure" of contemporary economics:
Economics is no longer a fit conversation piece for ladies and gentlemen. It has become a technical subject. Like any technical subject it attracts some people who are more interested in the technique than the subject. That is too bad, but it may be inevitable. In any case, do not kid yourself: the technical core of economics is indispensable infrastructure for the political economy. That is why, if you consult [a reference in contemporary economics] looking for enlightenment about the world today, you will be led to technical economics, or history, or nothing at all.^{ [150] }
Prominent mathematical economists include, but are not limited to, the following (by century of birth).