Mathematical economics

Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. Often, these applied methods are beyond simple geometry, and may include differential and integral calculus, difference and differential equations, matrix algebra, mathematical programming, or other computational methods. [1] [2] Proponents of this approach claim that it allows the formulation of theoretical relationships with rigor, generality, and simplicity. [3]

Mathematics allows economists to form meaningful, testable propositions about wide-ranging and complex subjects which could less easily be expressed informally. Further, the language of mathematics allows economists to make specific, positive claims about controversial or contentious subjects that would be impossible without mathematics. [4] Much of economic theory is currently presented in terms of mathematical economic models, a set of stylized and simplified mathematical relationships asserted to clarify assumptions and implications. [5]

Broad applications include optimization problems (such as a household, firm, or policy maker pursuing a goal), static or equilibrium analysis, comparative statics (the comparison of equilibria before and after a change in exogenous factors), and dynamic analysis tracing changes in an economic system over time.

Formal economic modeling began in the 19th century with the use of differential calculus to represent and explain economic behavior, such as utility maximization, an early economic application of mathematical optimization. Economics became more mathematical as a discipline throughout the first half of the 20th century, but introduction of new and generalized techniques in the period around the Second World War, as in game theory, would greatly broaden the use of mathematical formulations in economics. [8] [7]

This rapid systematizing of economics alarmed critics of the discipline as well as some noted economists. John Maynard Keynes, Robert Heilbroner, Friedrich Hayek and others have criticized the broad use of mathematical models for human behavior, arguing that some human choices are irreducible to mathematics.

History

The use of mathematics in the service of social and economic analysis dates back to the 17th century. Then, mainly in German universities, a style of instruction emerged which dealt specifically with detailed presentation of data as it related to public administration. Gottfried Achenwall lectured in this fashion, coining the term statistics. At the same time, a small group of professors in England established a method of "reasoning by figures upon things relating to government" and referred to this practice as Political Arithmetick. [9] Sir William Petty wrote at length on issues that would later concern economists, such as taxation, the velocity of money and national income, but while his analysis was numerical, he rejected abstract mathematical methodology. Petty's use of detailed numerical data (along with John Graunt) would influence statisticians and economists for some time, even though Petty's works were largely ignored by English scholars. [10]

The mathematization of economics began in earnest in the 19th century. Most of the economic analysis of the time was what would later be called classical economics. Subjects were discussed and dispensed with through algebraic means, but calculus was not used. More importantly, until Johann Heinrich von Thünen's The Isolated State in 1826, economists did not develop explicit and abstract models for behavior in order to apply the tools of mathematics. Thünen's model of farmland use represents the first example of marginal analysis. [11] Thünen's work was largely theoretical, but he also mined empirical data in order to attempt to support his generalizations. In comparison to his contemporaries, Thünen built economic models and tools, rather than applying previous tools to new problems. [12]

Meanwhile, a new cohort of scholars trained in the mathematical methods of the physical sciences gravitated to economics, advocating and applying those methods to their subject, [13] and described today as moving from geometry to mechanics. [14] These included W.S. Jevons, who presented a paper on a "general mathematical theory of political economy" in 1862, providing an outline for use of the theory of marginal utility in political economy. [15] In 1871, he published The Theory of Political Economy, declaring that the subject as science "must be mathematical simply because it deals with quantities". Jevons expected that only the collection of statistics for prices and quantities would permit the subject, as presented, to become an exact science. [16] Others preceded and followed in expanding mathematical representations of economic problems. [17]

Marginalists and the roots of neoclassical economics

Equilibrium quantities as a solution to two reaction functions in Cournot duopoly. Each reaction function is expressed as a linear equation dependent upon quantity demanded.

Augustin Cournot and Léon Walras built the tools of the discipline axiomatically around utility, arguing that individuals sought to maximize their utility across choices in a way that could be described mathematically. [18] At the time, it was thought that utility was quantifiable, in units known as utils. [19] Cournot, Walras and Francis Ysidro Edgeworth are considered the precursors to modern mathematical economics. [20]

Augustin Cournot

Cournot, a professor of mathematics, developed a mathematical treatment in 1838 for duopoly—a market condition defined by competition between two sellers. [20] This treatment of competition, first published in Researches into the Mathematical Principles of Wealth, [21] is referred to as Cournot duopoly. He assumed that both sellers had equal access to the market and could produce their goods without cost, and that both goods were homogeneous. Each seller would vary her output based on the output of the other, and the market price would be determined by the total quantity supplied. The profit for each firm would be determined by multiplying its output by the per-unit market price. Differentiating the profit function with respect to quantity supplied for each firm yielded a system of linear equations, the simultaneous solution of which gave the equilibrium quantity, price and profits. [22] Cournot's contributions to the mathematization of economics would be neglected for decades, but eventually influenced many of the marginalists. [22] [23] Cournot's models of duopoly and oligopoly also represent one of the first formulations of non-cooperative games. Today the solution can be given as a Nash equilibrium, but Cournot's work preceded modern game theory by over 100 years. [24]
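
As an illustration, the symmetric linear case can be solved in a few lines. This is a minimal sketch with assumed parameter values (an inverse demand p = a − b(q1 + q2) and zero costs), not Cournot's own notation: each firm's first-order condition gives a linear reaction function, and solving the two reaction functions simultaneously yields the equilibrium.

```python
import numpy as np

# Minimal sketch of a symmetric Cournot duopoly with assumed parameters.
# Inverse demand: p = a - b*(q1 + q2); both firms produce at zero cost.
a, b = 100.0, 1.0   # assumed demand intercept and slope

# Profit of firm i: q_i * (a - b*(q1 + q2)).
# dProfit_i/dq_i = a - 2*b*q_i - b*q_j = 0 gives the reaction function
# q_i = (a - b*q_j) / (2*b). Stacking both reaction functions gives the
# linear system [[2b, b], [b, 2b]] @ [q1, q2] = [a, a].
A = np.array([[2 * b, b],
              [b, 2 * b]])
rhs = np.array([a, a])
q1, q2 = np.linalg.solve(A, rhs)

price = a - b * (q1 + q2)
print(q1, q2, price, (q1 * price, q2 * price))   # q1 = q2 = a/(3b), p = a/3
```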

Léon Walras

While Cournot provided a solution for what would later be called partial equilibrium, Léon Walras attempted to formalize discussion of the economy as a whole through a theory of general competitive equilibrium. The behavior of every economic actor would be considered on both the production and consumption side. Walras originally presented four separate models of exchange, each recursively included in the next. The solution of the resulting system of equations (both linear and non-linear) is the general equilibrium. [25] At the time, no general solution could be expressed for a system of arbitrarily many equations, but Walras's attempts produced two famous results in economics. The first is Walras' law and the second is the principle of tâtonnement. Walras' method was considered highly mathematical for the time and Edgeworth commented at length about this fact in his review of Éléments d'économie politique pure (Elements of Pure Economics). [26]

Walras' law was introduced as a theoretical answer to the problem of determining the solutions in general equilibrium. His notation is different from modern notation but can be constructed using more modern summation notation. Walras assumed that in equilibrium, all money would be spent on all goods: every good would be sold at the market price for that good and every buyer would expend their last dollar on a basket of goods. Starting from this assumption, Walras could then show that if there were n markets and n-1 markets cleared (reached equilibrium conditions) that the nth market would clear as well. This is easiest to visualize with two markets (considered in most texts as a market for goods and a market for money). If one of two markets has reached an equilibrium state, no additional goods (or conversely, money) can enter or exit the second market, so it must be in a state of equilibrium as well. Walras used this statement to move toward a proof of existence of solutions to general equilibrium but it is commonly used today to illustrate market clearing in money markets at the undergraduate level. [27]
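
In modern summation notation (a reconstruction, not Walras's original symbols), the law says that the value of aggregate excess demand is identically zero, so equilibrium in n − 1 markets forces equilibrium in the nth:

```latex
\sum_{i=1}^{n} p_i \bigl( D_i(p) - S_i(p) \bigr) \equiv 0 ,
\qquad\text{so } D_i(p) = S_i(p) \text{ for } i = 1, \dots, n-1
\;\Longrightarrow\; p_n \bigl( D_n(p) - S_n(p) \bigr) = 0 .
```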

Tâtonnement (roughly, French for groping toward) was meant to serve as the practical expression of Walrasian general equilibrium. Walras abstracted the marketplace as an auction of goods where the auctioneer would call out prices and market participants would wait until they could each satisfy their personal reservation prices for the quantity desired (remembering here that this is an auction on all goods, so everyone has a reservation price for their desired basket of goods). [28]

Only when all buyers are satisfied with the given market price would transactions occur. The market would "clear" at that price—no surplus or shortage would exist. The word tâtonnement is used to describe the directions the market takes in groping toward equilibrium, settling high or low prices on different goods until a price is agreed upon for all goods. While the process appears dynamic, Walras only presented a static model, as no transactions would occur until all markets were in equilibrium. In practice, very few markets operate in this manner. [29]

Francis Ysidro Edgeworth

Edgeworth introduced mathematical elements to economics explicitly in Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences, published in 1881. [30] He adapted Jeremy Bentham's felicific calculus to economic behavior, allowing the outcome of each decision to be converted into a change in utility. [31] Using this assumption, Edgeworth built a model of exchange on three assumptions: individuals are self-interested, individuals act to maximize utility, and individuals are "free to recontract with another independently of...any third party". [32]

An Edgeworth box displaying the contract curve on an economy with two participants. Referred to as the "core" of the economy in modern parlance, there are infinitely many solutions along the curve for economies with two participants.

Given two individuals, the set of solutions where both individuals can maximize utility is described by the contract curve on what is now known as an Edgeworth Box. Technically, the construction of the two-person solution to Edgeworth's problem was not developed graphically until 1924 by Arthur Lyon Bowley. [34] The contract curve of the Edgeworth box (or more generally on any set of solutions to Edgeworth's problem for more actors) is referred to as the core of an economy. [35]

Edgeworth devoted considerable effort to insisting that mathematical proofs were appropriate for all schools of thought in economics. While at the helm of The Economic Journal, he published several articles criticizing the mathematical rigor of rival researchers, including Edwin Robert Anderson Seligman, a noted skeptic of mathematical economics. [36] The articles focused on a back-and-forth over tax incidence and responses by producers. Edgeworth noticed that a monopoly producing a good that had jointness of supply but not jointness of demand (such as first class and economy seats on an airplane: if the plane flies, both sets of seats fly with it) might actually lower the price seen by the consumer for one of the two commodities if a tax were applied. Common sense and more traditional, numerical analysis seemed to indicate that this was preposterous. Seligman insisted that the results Edgeworth achieved were a quirk of his mathematical formulation. He suggested that the assumption of a continuous demand function and an infinitesimal change in the tax resulted in the paradoxical predictions. Harold Hotelling later showed that Edgeworth was correct and that the same result (a "diminution of price as a result of the tax") could occur with a discontinuous demand function and large changes in the tax rate. [37]

Modern mathematical economics

From the late 1930s, an array of new mathematical tools from differential calculus and differential equations, convex sets, and graph theory was deployed to advance economic theory in a way similar to the new mathematical methods earlier applied to physics. [8] [38] The process was later described as moving from mechanics to axiomatics. [39]

Differential calculus

Vilfredo Pareto analyzed microeconomics by treating decisions by economic actors as attempts to change a given allotment of goods to another, more preferred allotment. Sets of allocations could then be treated as Pareto efficient (Pareto optimal is an equivalent term) when no exchanges could occur between actors that could make at least one individual better off without making any other individual worse off. [40] Pareto's proof is commonly conflated with Walrasian equilibrium or informally ascribed to Adam Smith's invisible-hand hypothesis. [41] Rather, Pareto's statement was the first formal assertion of what would become known as the first fundamental theorem of welfare economics. [42] These models lacked the inequalities of the next generation of mathematical economics.

In the landmark treatise Foundations of Economic Analysis (1947), Paul Samuelson identified a common paradigm and mathematical structure across multiple fields in the subject, building on previous work by Alfred Marshall. Foundations took mathematical concepts from physics and applied them to economic problems. This broad view (for example, comparing Le Chatelier's principle to tâtonnement) drives the fundamental premise of mathematical economics: systems of economic actors may be modeled and their behavior described much like any other system. This extension followed on the work of the marginalists in the previous century and extended it significantly. Samuelson approached the problems of applying individual utility maximization over aggregate groups with comparative statics, which compares two different equilibrium states after an exogenous change in a variable. This and other methods in the book provided the foundation for mathematical economics in the 20th century. [7] [43]
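
For instance, comparative statics can be illustrated by solving a linear supply-and-demand market before and after an exogenous demand shift and comparing the two equilibria. This is a minimal sketch with made-up numbers, not Samuelson's own example:

```python
import numpy as np

# Comparative statics sketch: linear demand q = a - b*p and supply q = c + d*p.
# All parameter values are assumptions chosen for illustration.
def equilibrium(a, b, c, d):
    """Solve a - b*p = c + d*p for the equilibrium price and quantity."""
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

base = equilibrium(a=120.0, b=2.0, c=20.0, d=3.0)
shifted = equilibrium(a=140.0, b=2.0, c=20.0, d=3.0)   # exogenous demand shift

dp = shifted[0] - base[0]
dq = shifted[1] - base[1]
print(base, shifted, dp, dq)   # compare the two equilibrium states
```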

Linear models

Restricted models of general equilibrium were formulated by John von Neumann in 1937. [44] Unlike earlier versions, the models of von Neumann had inequality constraints. For his model of an expanding economy, von Neumann proved the existence and uniqueness of an equilibrium using his generalization of Brouwer's fixed point theorem. Von Neumann's model of an expanding economy considered the matrix pencil  A - λ B with nonnegative matrices A and B; von Neumann sought probability vectors  p and q and a positive number λ that would solve the complementarity equation

p^T (A - λB) q = 0,

along with two inequality systems expressing economic efficiency. In this model, the (transposed) probability vector p represents the prices of the goods while the probability vector q represents the "intensity" at which the production process would run. The unique solution λ represents the rate of growth of the economy, which equals the interest rate. Proving the existence of a positive growth rate and proving that the growth rate equals the interest rate were remarkable achievements, even for von Neumann. [45] [46] [47] Von Neumann's results have been viewed as a special case of linear programming, where von Neumann's model uses only nonnegative matrices. [48] The study of von Neumann's model of an expanding economy continues to interest mathematical economists with interests in computational economics. [49] [50] [51]
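
A small numerical sketch of this model is possible using a later reformulation (not von Neumann's original argument) in which the growth factor λ is the value of λ at which the zero-sum matrix game with payoff matrix A - λB has value zero; the row player's optimal mixed strategy then corresponds to the price vector p. The matrices below are made up, and each matrix game is solved with a standard linear program:

```python
import numpy as np
from scipy.optimize import linprog

def game_value(M):
    """Value of the zero-sum matrix game M (row player maximizes), via LP."""
    m, n = M.shape
    # Variables: p (m probabilities) and v.  Maximize v <=> minimize -v,
    # subject to v - (M^T p)_j <= 0 for every column j, sum(p) = 1, p >= 0.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

# Assumed output matrix A and input matrix B (nonnegative, toy numbers).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])

# Bisection on lambda: the game value of A - lambda*B falls as lambda rises.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if game_value(A - mid * B) > 0:
        lo = mid
    else:
        hi = mid
print("growth factor lambda ~", 0.5 * (lo + hi))
```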

Input-output economics

In 1936, the Russian–born economist Wassily Leontief built his model of input-output analysis from the 'material balance' tables constructed by Soviet economists, which themselves followed earlier work by the physiocrats. With his model, which described a system of production and demand processes, Leontief described how changes in demand in one economic sector would influence production in another. [52] In practice, Leontief estimated the coefficients of his simple models, to address economically interesting questions. In production economics, "Leontief technologies" produce outputs using constant proportions of inputs, regardless of the price of inputs, reducing the value of Leontief models for understanding economies but allowing their parameters to be estimated relatively easily. In contrast, the von Neumann model of an expanding economy allows for choice of techniques, but the coefficients must be estimated for each technology. [53] [54]
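
In the open Leontief model, if A is the matrix of technical coefficients and d is final demand, gross outputs x solve x = Ax + d, i.e. x = (I - A)^(-1) d. A minimal sketch with an assumed two-sector coefficient matrix (the numbers are illustrative, not estimated):

```python
import numpy as np

# Open Leontief input-output model: x = A x + d  =>  x = (I - A)^(-1) d.
# The coefficient matrix and final-demand vector below are assumptions.
A = np.array([[0.2, 0.3],     # inputs of sector 1 per unit output of sectors 1, 2
              [0.4, 0.1]])    # inputs of sector 2 per unit output of sectors 1, 2
d = np.array([100.0, 50.0])   # final demand for each sector's output

x = np.linalg.solve(np.eye(2) - A, d)
print("gross outputs:", x)

# How a change in demand in one sector propagates to production in the other:
d_new = d + np.array([10.0, 0.0])
print("change in outputs:", np.linalg.solve(np.eye(2) - A, d_new) - x)
```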

Mathematical optimization

Red dot in z direction as maximum for paraboloid function of (x, y) inputs MaximumParaboloid.png
Red dot in z direction as maximum for paraboloid function of (x, y) inputs

In mathematics, mathematical optimization (or optimization or mathematical programming) refers to the selection of a best element from some set of available alternatives. [55] In the simplest case, an optimization problem involves maximizing or minimizing a real function by selecting input values of the function and computing the corresponding values of the function. The solution process includes satisfying general necessary and sufficient conditions for optimality. For optimization problems, specialized notation may be used as to the function and its input(s). More generally, optimization includes finding the best available element of some function given a defined domain and may use a variety of different computational optimization techniques. [56]

Economics is closely enough linked to optimization by agents in an economy that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. [57] Optimization problems run through modern economics, many with explicit economic or technical constraints. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem for a given level of utility, are economic optimization problems. [58] Theory posits that consumers maximize their utility, subject to their budget constraints and that firms maximize their profits, subject to their production functions, input costs, and market demand. [59]
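
As a simple illustration of the utility-maximization problem (with assumed Cobb-Douglas utility and made-up prices and income, not an example from the cited sources), the consumer's problem can be solved numerically and checked against the closed-form demands x_i = a_i m / p_i:

```python
import numpy as np
from scipy.optimize import minimize

# Utility maximization sketch: max  a1*log(x1) + a2*log(x2)
#                              s.t. p1*x1 + p2*x2 <= m,  x >= 0.
# Parameter values are assumptions for illustration.
a = np.array([0.4, 0.6])     # Cobb-Douglas preference weights (sum to 1)
p = np.array([2.0, 5.0])     # prices
m = 100.0                    # income

def neg_utility(x):
    return -np.dot(a, np.log(x))

budget = {"type": "ineq", "fun": lambda x: m - np.dot(p, x)}   # m - p.x >= 0
res = minimize(neg_utility, x0=np.array([1.0, 1.0]), constraints=[budget],
               bounds=[(1e-6, None), (1e-6, None)])

print("numerical demands:", res.x)
print("closed-form demands:", a * m / p)   # Cobb-Douglas: x_i = a_i * m / p_i
```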

Economic equilibrium is studied in optimization theory as a key ingredient of economic theorems that in principle could be tested against empirical data. [7] [60] Newer developments have occurred in dynamic programming and modeling optimization with risk and uncertainty, including applications to portfolio theory, the economics of information, and search theory. [59]

Optimality properties for an entire market system may be stated in mathematical terms, as in formulation of the two fundamental theorems of welfare economics [61] and in the Arrow–Debreu model of general equilibrium (also discussed below). [62] More concretely, many problems are amenable to analytical (formulaic) solution. Many others may be sufficiently complex to require numerical methods of solution, aided by software. [56] Still others are complex but tractable enough to allow computable methods of solution, in particular computable general equilibrium models for the entire economy. [63]

Linear and nonlinear programming have profoundly affected microeconomics, which had earlier considered only equality constraints. [64] Many of the mathematical economists who received Nobel Prizes in Economics had conducted notable research using linear programming: Leonid Kantorovich, Leonid Hurwicz, Tjalling Koopmans, Kenneth J. Arrow, Robert Dorfman, Paul Samuelson and Robert Solow. [65] Both Kantorovich and Koopmans acknowledged that George B. Dantzig deserved to share their Nobel Prize for linear programming. Economists who conducted research in nonlinear programming also have won the Nobel prize, notably Ragnar Frisch in addition to Kantorovich, Hurwicz, Koopmans, Arrow, and Samuelson.

Linear optimization

Linear programming was developed to aid the allocation of resources in firms and in industries during the 1930s in Russia and during the 1940s in the United States. During the Berlin airlift (1948), linear programming was used to plan the shipment of supplies to prevent Berlin from starving after the Soviet blockade. [66] [67]

Nonlinear programming

Extensions to nonlinear optimization with inequality constraints were achieved in 1951 by Albert W. Tucker and Harold Kuhn, who considered the nonlinear optimization problem:

Minimize f(x) subject to gi(x) ≤ 0 and hj(x) = 0, where
f(·) is the function to be minimized,
gi(·) (i = 1, ..., m) are the functions of the m inequality constraints, and
hj(·) (j = 1, ..., l) are the functions of the l equality constraints.

In allowing inequality constraints, the Kuhn–Tucker approach generalized the classic method of Lagrange multipliers, which (until then) had allowed only equality constraints. [68] The Kuhn–Tucker approach inspired further research on Lagrangian duality, including the treatment of inequality constraints. [69] [70] The duality theory of nonlinear programming is particularly satisfactory when applied to convex minimization problems, which enjoy the convex-analytic duality theory of Fenchel and Rockafellar; this convex duality is particularly strong for polyhedral convex functions, such as those arising in linear programming. Lagrangian duality and convex analysis are used daily in operations research, in the scheduling of power plants, the planning of production schedules for factories, and the routing of airlines (routes, flights, planes, crews). [70]
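
A minimal sketch of such a nonlinear program (the objective and constraints below are arbitrary, chosen only for illustration), solved with an off-the-shelf routine that handles inequality constraints in the Kuhn-Tucker framework:

```python
import numpy as np
from scipy.optimize import minimize

# Nonlinear program: minimize f(x) subject to g(x) <= 0 and h(x) = 0.
# The specific functions below are assumptions chosen for illustration.
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2
g = lambda x: x[0] ** 2 + x[1] ** 2 - 4.0          # inequality: g(x) <= 0
h = lambda x: x[0] - 2.0 * x[1] + 1.0              # equality:   h(x) = 0

constraints = [
    {"type": "ineq", "fun": lambda x: -g(x)},      # SciPy expects fun(x) >= 0
    {"type": "eq", "fun": h},
]
res = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints, method="SLSQP")
print(res.x, res.fun)

# At the solution the Kuhn-Tucker conditions hold: the gradient of f is a
# combination of the gradients of the active constraints, with nonnegative
# multipliers on the active inequality constraint.
```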

Variational calculus and optimal control

Economic dynamics allows for changes in economic variables over time, including in dynamic systems. The problem of finding optimal functions for such changes is studied in variational calculus and in optimal control theory. Before the Second World War, Frank Ramsey and Harold Hotelling used the calculus of variations to that end.

Following Richard Bellman's work on dynamic programming and the 1962 English translation of L. Pontryagin et al.'s earlier work, [71] optimal control theory was used more extensively in economics in addressing dynamic problems, especially as to economic growth equilibrium and stability of economic systems, [72] of which a textbook example is optimal consumption and saving. [73] A crucial distinction is between deterministic and stochastic control models. [74] Other applications of optimal control theory include those in finance, inventories, and production for example. [75]
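
A minimal sketch of the textbook consumption-saving ("cake-eating") problem, solved by value-function iteration, a basic dynamic-programming method. The discount factor, grids, and utility function are assumptions; for log utility the discrete-grid answer can be compared with the known closed form c*(w) = (1 - β)w:

```python
import numpy as np

# Cake-eating sketch: V(w) = max_{0 < c <= w} log(c) + beta * V(w - c),
# solved by value-function iteration with linear interpolation of V.
# beta, the wealth grid, and the consumption-share grid are all assumptions.
beta = 0.95
w_grid = np.linspace(0.01, 1.0, 200)      # remaining wealth
shares = np.linspace(0.01, 1.0, 200)      # fraction of wealth consumed
V = np.log(w_grid)                        # any bounded initial guess

for _ in range(2000):
    c = np.outer(w_grid, shares)                          # consumption choices
    w_next = np.outer(w_grid, 1.0 - shares)               # wealth carried forward
    cont = np.interp(w_next.ravel(), w_grid, V).reshape(w_next.shape)
    V_new = (np.log(c) + beta * cont).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# Optimal consumption at w = 0.8, versus the closed form c*(w) = (1 - beta) * w.
w = 0.8
cont = np.interp(w * (1.0 - shares), w_grid, V)
best_share = shares[np.argmax(np.log(w * shares) + beta * cont)]
print("numerical c*:", best_share * w, "  closed form:", (1 - beta) * w)
```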

Functional analysis

It was in the course of proving the existence of an optimal equilibrium in his 1937 model of economic growth that John von Neumann introduced functional analytic methods to include topology in economic theory, in particular, fixed-point theory through his generalization of Brouwer's fixed-point theorem. [8] [44] [76] Following von Neumann's program, Kenneth Arrow and Gérard Debreu formulated abstract models of economic equilibria using convex sets and fixed-point theory. In introducing the Arrow–Debreu model in 1954, they proved the existence (but not the uniqueness) of an equilibrium and also proved that every Walras equilibrium is Pareto efficient; in general, equilibria need not be unique. [77] In their models, the ("primal") vector space represented quantities while the "dual" vector space represented prices. [78]

In Russia, the mathematician Leonid Kantorovich developed economic models in partially ordered vector spaces that emphasized the duality between quantities and prices. [79] Kantorovich renamed prices as "objectively determined valuations", which were abbreviated in Russian as "o. o. o.", alluding to the difficulty of discussing prices in the Soviet Union. [78] [80] [81]

Even in finite dimensions, the concepts of functional analysis have illuminated economic theory, particularly in clarifying the role of prices as normal vectors to a hyperplane supporting a convex set, representing production or consumption possibilities. However, problems of describing optimization over time or under uncertainty require the use of infinite–dimensional function spaces, because agents are choosing among functions or stochastic processes. [78] [82] [83] [84]
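
For example, in finite dimensions the statement that prices are normal vectors can be written as follows (a standard formulation, paraphrased rather than quoted from the cited sources): if Y is a convex set of feasible production plans and y* maximizes profit at prices p, then

```latex
p \cdot y^{*} \;\ge\; p \cdot y \quad \text{for all } y \in Y ,
\qquad\text{i.e.}\quad Y \subseteq \{\, y \in \mathbb{R}^{n} : p \cdot y \le p \cdot y^{*} \,\} ,
```

so the hyperplane through y* with normal vector p supports the production set Y.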

Differential decline and rise

John von Neumann's work on functional analysis and topology broke new ground in mathematics and economic theory. [44] [85] It also left advanced mathematical economics with fewer applications of differential calculus. In particular, general equilibrium theorists used general topology, convex geometry, and optimization theory more than differential calculus, because the approach of differential calculus had failed to establish the existence of an equilibrium.

However, the decline of differential calculus should not be exaggerated, because differential calculus has always been used in graduate training and in applications. Moreover, differential calculus has returned to the highest levels of mathematical economics, general equilibrium theory (GET), as practiced by the "GET-set" (the humorous designation due to Jacques H. Drèze). In the 1960s and 1970s, Gérard Debreu and Stephen Smale led a revival of the use of differential calculus in mathematical economics. In particular, they were able to prove the existence of a general equilibrium where earlier writers had failed, because of their novel mathematics: Baire category from general topology and Sard's lemma from differential topology. Other economists associated with the use of differential analysis include Egbert Dierker, Andreu Mas-Colell, and Yves Balasko. [86] [87] These advances have changed the traditional narrative of the history of mathematical economics, which, following von Neumann, had celebrated the abandonment of differential calculus.

Game theory

John von Neumann, working with Oskar Morgenstern on the theory of games, broke new mathematical ground in 1944 by extending functional analytic methods related to convex sets and topological fixed-point theory to economic analysis. [8] [85] Their work thereby avoided the traditional differential calculus, for which the maximum operator did not apply to non-differentiable functions. Continuing von Neumann's work in cooperative game theory, game theorists Lloyd S. Shapley, Martin Shubik, Hervé Moulin, Nimrod Megiddo, and Bezalel Peleg influenced economic research in politics and economics. For example, research on fair prices in cooperative games and fair values for voting games led to changed rules for voting in legislatures and for accounting for the costs in public-works projects. For example, cooperative game theory was used in designing the water distribution system of Southern Sweden and for setting rates for dedicated telephone lines in the US.

Earlier neoclassical theory had only bounded the range of bargaining outcomes, and then only in special cases, for example bilateral monopoly or along the contract curve of the Edgeworth box. [88] Von Neumann and Morgenstern's results were similarly weak. Following von Neumann's program, however, John Nash used fixed-point theory to prove conditions under which the bargaining problem and noncooperative games can generate a unique equilibrium solution. [89] Noncooperative game theory has been adopted as a fundamental aspect of experimental economics, [90] behavioral economics, [91] information economics, [92] industrial organization, [93] and political economy. [94] It has also given rise to the subject of mechanism design (sometimes called reverse game theory), which has private and public-policy applications as to ways of improving economic efficiency through incentives for information sharing. [95]
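
A minimal sketch of the equilibrium concept itself (with an arbitrary payoff matrix, not an example from the cited literature): a strategy profile of a two-player game is a Nash equilibrium if no player can gain by deviating unilaterally, which can be checked directly for small games.

```python
import numpy as np

# Payoffs for a two-player game (row player, column player); assumed numbers.
# This is a prisoner's-dilemma-style game: action 0 = cooperate, 1 = defect.
U1 = np.array([[3, 0],
               [5, 1]])
U2 = np.array([[3, 5],
               [0, 1]])

def is_nash(profile):
    """A pure profile is a Nash equilibrium if neither player gains by deviating alone."""
    i, j = profile
    return U1[i, j] >= U1[:, j].max() and U2[i, j] >= U2[i, :].max()

for i in range(2):
    for j in range(2):
        print((i, j), is_nash((i, j)))   # only (1, 1), mutual defection, is an equilibrium
```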

In 1994, Nash, John Harsanyi, and Reinhard Selten received the Nobel Memorial Prize in Economic Sciences for their work on non-cooperative games: Harsanyi for his analysis of games of incomplete information and Selten for extending equilibrium analysis to dynamic games. Later work extended their results to computational methods of modeling. [96]

Agent-based computational economics

Agent-based computational economics (ACE) as a named field is relatively recent, dating from about the 1990s as to published work. It studies economic processes, including whole economies, as dynamic systems of interacting agents over time. As such, it falls in the paradigm of complex adaptive systems. [97] In corresponding agent-based models, agents are not real people but "computational objects modeled as interacting according to rules" ... "whose micro-level interactions create emergent patterns" in space and time. [98] The rules are formulated to predict behavior and social interactions based on incentives and information. The theoretical assumption of mathematical optimization by agents in markets is replaced by the less restrictive postulate of agents with bounded rationality adapting to market forces. [99]

ACE models apply numerical methods of analysis to computer-based simulations of complex dynamic problems for which more conventional methods, such as theorem formulation, may not find ready use. [100] Starting from specified initial conditions, the computational economic system is modeled as evolving over time as its constituent agents repeatedly interact with each other. In these respects, ACE has been characterized as a bottom-up culture-dish approach to the study of the economy. [101] In contrast to other standard modeling methods, ACE events are driven solely by initial conditions, whether or not equilibria exist or are computationally tractable. ACE modeling, however, includes agent adaptation, autonomy, and learning. [102] It has a similarity to, and overlap with, game theory as an agent-based method for modeling social interactions. [96] Other dimensions of the approach include such standard economic subjects as competition and collaboration, [103] market structure and industrial organization, [104] transaction costs, [105] welfare economics [106] and mechanism design, [95] information and uncertainty, [107] and macroeconomics. [108] [109]
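
A minimal sketch of the approach (all behavioral rules and parameters below are assumptions for illustration, not a model from the ACE literature): boundedly rational firms repeatedly adjust their output partway toward last period's best response, and the aggregate pattern that emerges from these micro-level rules can then be inspected over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Agent-based sketch: n adaptive firms in a market with inverse demand
# p = a - b * total_output and zero costs.  Each period, every firm moves
# a fraction `speed` of the way toward its myopic best response to what the
# other firms did last period.  All parameters are assumptions.
n, a, b, speed, periods = 10, 100.0, 1.0, 0.3, 60
q = rng.uniform(0.0, 10.0, size=n)            # heterogeneous initial outputs

history = []
for _ in range(periods):
    others = q.sum() - q                      # each firm's view of the rest of the market
    best_response = np.maximum((a - b * others) / (2.0 * b), 0.0)
    q = q + speed * (best_response - q)       # bounded-rationality partial adjustment
    history.append(q.sum())

print("emergent total output:", history[-1], "  price:", a - b * history[-1])
# With these rules, total output settles near the Cournot outcome n*a/((n+1)*b).
```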

The method is said to benefit from continuing improvements in modeling techniques of computer science and increased computer capabilities. Issues include those common to experimental economics in general [110] and by comparison, [111] as well as the development of a common framework for empirical validation and the resolution of open questions in agent-based modeling. [112] The ultimate scientific objective of the method has been described as "test[ing] theoretical findings against real-world data in ways that permit empirically supported theories to cumulate over time, with each researcher's work building appropriately on the work that has gone before". [113]

Mathematicization of economics

The volatility smile as a 3-D surface: the current market implied volatility (z-axis) for all options on the underlier is plotted against strike price and time to maturity (x- and y-axes).

Over the course of the 20th century, articles in "core journals" [115] in economics have been almost exclusively written by economists in academia. As a result, much of the material transmitted in those journals relates to economic theory, and "economic theory itself has been continuously more abstract and mathematical." [116] A subjective assessment of mathematical techniques [117] employed in these core journals showed a decrease in articles that use neither geometric representations nor mathematical notation from 95% in 1892 to 5.3% in 1990. [118] A 2007 survey of ten of the top economic journals finds that only 5.8% of the articles published in 2003 and 2004 both lacked statistical analysis of data and lacked displayed mathematical expressions that were indexed with numbers at the margin of the page. [119]

Econometrics

Between the world wars, advances in mathematical statistics and a cadre of mathematically trained economists led to econometrics, which was the name proposed for the discipline of advancing economics by using mathematics and statistics. Within economics, "econometrics" has often been used for statistical methods in economics, rather than mathematical economics. Statistical econometrics features the application of linear regression and time series analysis to economic data.
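
A minimal sketch of the most basic such tool, ordinary least squares, on synthetic data (everything below, including the "true" coefficients, is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "economic" data: y = 2.0 + 0.5 * x + noise  (coefficients assumed).
n = 200
x = rng.uniform(0.0, 10.0, size=n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=n)

# Ordinary least squares via least squares on the design matrix.
X = np.column_stack([np.ones(n), x])           # design matrix with an intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

print("estimated intercept and slope:", beta_hat)
print("residual variance:", residuals.var(ddof=2))
```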

Ragnar Frisch coined the word "econometrics" and helped to found both the Econometric Society in 1930 and the journal Econometrica in 1933. [120] [121] A student of Frisch's, Trygve Haavelmo published The Probability Approach in Econometrics in 1944, where he asserted that precise statistical analysis could be used as a tool to validate mathematical theories about economic actors with data from complex sources. [122] This linking of statistical analysis of systems to economic theory was also promulgated by the Cowles Commission (now the Cowles Foundation) throughout the 1930s and 1940s. [123]

The roots of modern econometrics can be traced to the American economist Henry L. Moore. Moore studied agricultural productivity and attempted to fit changing values of productivity for plots of corn and other crops to a curve using different values of elasticity. Moore made several errors in his work, some from his choice of models and some from limitations in his use of mathematics. The accuracy of Moore's models also was limited by the poor data for national accounts in the United States at the time. While his first models of production were static, in 1925 he published a dynamic "moving equilibrium" model designed to explain business cycles—this periodic variation from over-correction in supply and demand curves is now known as the cobweb model. A more formal derivation of this model was made later by Nicholas Kaldor, who is largely credited for its exposition. [124]
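
The mechanism behind the cobweb model can be sketched in a few lines (with assumed linear demand and supply): producers choose this period's quantity from last period's price, the price then adjusts to clear the market, and the result is the alternating over- and under-correction described above.

```python
# Cobweb model sketch with assumed linear demand and lagged supply.
# Demand:  p_t = a - b * q_t        (price clears the market each period)
# Supply:  q_{t+1} = c + d * p_t    (producers respond to last period's price)
a, b, c, d = 100.0, 1.0, 10.0, 0.8   # assumed parameters; |b*d| < 1 so the cycle damps out

q = 20.0                             # arbitrary initial quantity
path = []
for _ in range(25):
    p = a - b * q                    # market-clearing price given current supply
    q = c + d * p                    # next period's supply reacts to that price
    path.append((round(p, 2), round(q, 2)))

print(path)                          # oscillates toward the steady state
print("steady state (q*, p*):", ((c + a * d) / (1 + b * d), (a - b * c) / (1 + b * d)))
```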

Application

The IS/LM model is a Keynesian macroeconomic model designed to make predictions about the intersection of "real" economic activity (e.g. spending, income, saving rates) and decisions made in the financial markets (money supply and liquidity preference). The model is no longer widely taught at the graduate level but is common in undergraduate macroeconomics courses.

Much of classical economics can be presented in simple geometric terms or elementary mathematical notation. Mathematical economics, however, conventionally makes use of calculus and matrix algebra in economic analysis in order to make powerful claims that would be more difficult without such mathematical tools. These tools are prerequisites for formal study, not only in mathematical economics but in contemporary economic theory in general. Economic problems often involve so many variables that mathematics is the only practical way of attacking and solving them. Alfred Marshall argued that every economic problem which can be quantified, analytically expressed and solved, should be treated by means of mathematical work. [126]

Economics has become increasingly dependent upon mathematical methods, and the mathematical tools it employs have become more sophisticated. As a result, mathematics has become considerably more important to professionals in economics and finance. Graduate programs in both economics and finance require strong undergraduate preparation in mathematics for admission and, for this reason, attract an increasingly high number of mathematicians. Applied mathematicians apply mathematical principles to practical problems, such as economic analysis and other economics-related issues, and many economic problems fall within the scope of applied mathematics. [18]

This integration results from the formulation of economic problems as stylized models with clear assumptions and falsifiable predictions. This modeling may be informal or prosaic, as it was in Adam Smith's The Wealth of Nations, or it may be formal, rigorous and mathematical.

Broadly speaking, formal economic models may be classified as stochastic or deterministic and as discrete or continuous. At a practical level, quantitative modeling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. [127]

Example: The effect of a corporate tax cut on wages

The great appeal of mathematical economics is that it brings a degree of rigor to economic thinking, particularly around charged political topics. For example, during the discussion of the efficacy of a corporate tax cut for increasing the wages of workers, a simple mathematical model proved beneficial to understanding the issues at hand.

As an intellectual exercise, the following problem was posed by Prof. Greg Mankiw of Harvard University: [128]

An open economy has the production function y = f(k), where y is output per worker and k is capital per worker. The capital stock adjusts so that the after-tax marginal product of capital equals the exogenously given world interest rate, r ... How much will the tax cut increase wages?

To answer this question, we follow John H. Cochrane of the Hoover Institution. [129] Suppose an open economy has the production function

y = f(k),

where the variables in this equation are: y, output per worker; k, capital per worker; and f, a production function with f'(k) > 0 and f''(k) < 0.

The standard choice for the production function is the Cobb-Douglas production function:

f(k) = A k^α,

where A is the factor of productivity, assumed to be a constant. A corporate tax cut in this model is equivalent to a tax on capital. With taxes, firms look to maximize (per worker):

(1 - τ)[f(k) - w] - r k,

where τ is the capital tax rate, w is wages per worker, and r is the exogenous interest rate. Then the first-order optimality condition becomes:

(1 - τ) f'(k) = r,

and, since competition drives after-tax profits to zero, (1 - τ)[f(k) - w] = r k. Therefore, the optimality conditions imply that:

w = f(k) - k f'(k).

Define total taxes as the tax rate times the corporate tax base, profits per worker. This implies that taxes per worker are:

x = τ [f(k) - w] = τ k f'(k).

Then the change in taxes per worker, given the tax rate, is:

dx/dτ = k f'(k) + τ [f'(k) + k f''(k)] dk/dτ.

To find the change in wages, we differentiate the second optimality condition for the per-worker wage to obtain:

dw/dτ = [f'(k) - f'(k) - k f''(k)] dk/dτ = -k f''(k) dk/dτ.

Assuming that the interest rate is fixed at r, so that dr/dτ = 0, we may differentiate the first optimality condition for the interest rate to find:

dk/dτ = f'(k) / [(1 - τ) f''(k)].

For the moment, let's focus only on the static effect of a capital tax cut, so that dx/dτ = k f'(k), holding the capital stock fixed in the revenue calculation. If we substitute the expression for dk/dτ into the equation for wage changes with respect to the tax rate, then we find that:

dw/dτ = -k f'(k) / (1 - τ).

Therefore, the static effect of a capital tax cut on wages is:

dw/dx = [dw/dτ] / [dx/dτ] = -1 / (1 - τ).

Based on the model, it seems possible that we may achieve a rise in the wage of a worker greater than the amount of the tax cut. But that only considers the static effect, and we know that the dynamic effect must be accounted for. In the dynamic model, we may rewrite the equation for changes in taxes per worker with respect to the tax rate as:

dx/dτ = k f'(k) + τ [f'(k) + k f''(k)] dk/dτ.

Recalling that dk/dτ = f'(k) / [(1 - τ) f''(k)], we have that:

dx/dτ = k f'(k) + [τ / (1 - τ)] f'(k) [f'(k)/f''(k) + k].

Using the Cobb-Douglas production function, for which f'(k)/f''(k) = k/(α - 1), we have that:

dx/dτ = k f'(k) (1 - α - τ) / [(1 - α)(1 - τ)].

Therefore, the dynamic effect of a capital tax cut on wages is:

dw/dx = -(1 - α) / (1 - α - τ).

If we take a positive initial tax rate with 0 < τ < 1 - α, then the dynamic effect of lowering capital taxes on wages will be even larger than the static effect. Moreover, if there are positive externalities to capital accumulation, the effect of the tax cut on wages would be larger than in the model we just derived. It is important to note that the result is a combination of:

  1. The standard result that in a small open economy labor bears 100% of a small capital income tax
  2. The fact that, starting at a positive tax rate, the burden of a tax increase exceeds revenue collection due to the first-order deadweight loss

This result showing that, under certain assumptions, a corporate tax cut can boost the wages of workers by more than the lost revenue does not imply that the magnitude is correct. Rather, it suggests a basis for policy analysis that is not grounded in handwaving. If the assumptions are reasonable, then the model is an acceptable approximation of reality; if they are not, then better models should be developed.
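
The Cobb-Douglas derivation above can be checked numerically (a sketch with assumed parameter values): solve the firm's first-order condition for k at two nearby tax rates, and compare the finite-difference ratio dw/dx with the formula -(1 - α)/(1 - α - τ).

```python
import numpy as np

# Numerical check of the Cobb-Douglas result above; A, alpha, r, tau are assumptions.
A, alpha, r, tau = 1.0, 0.3, 0.05, 0.2

def solve(tau):
    """Capital, wage and tax revenue per worker implied by (1 - tau) f'(k) = r."""
    k = ((1.0 - tau) * alpha * A / r) ** (1.0 / (1.0 - alpha))
    y = A * k ** alpha
    w = (1.0 - alpha) * y            # w = f(k) - k f'(k)
    x = tau * alpha * y              # taxes per worker: tau * k * f'(k)
    return w, x

h = 1e-6
w0, x0 = solve(tau)
w1, x1 = solve(tau + h)
print("finite-difference dw/dx:", (w1 - w0) / (x1 - x0))
print("formula -(1-alpha)/(1-alpha-tau):", -(1 - alpha) / (1 - alpha - tau))
```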

CES production function

Now let's assume that instead of the Cobb-Douglas production function we have a more general constant elasticity of substitution (CES) production function:

f(k) = A [α k^ρ + (1 - α)]^(1/ρ),

where ρ = (σ - 1)/σ; σ is the elasticity of substitution between capital and labor. The relevant quantity we want to calculate is f'(k)/f''(k), which may be derived as:

f'(k)/f''(k) = k [α k^ρ + (1 - α)] / [(ρ - 1)(1 - α)].

Therefore, we may use this to find that:

dx/dτ = k f'(k) (1 - s_K - τσ) / [(1 - s_K)(1 - τ)],

where s_K = α k^ρ / [α k^ρ + (1 - α)] is the capital share of income. Therefore, under a general CES model, the dynamic effect of a capital tax cut on wages is:

dw/dx = -(1 - s_K) / (1 - s_K - τσ).

We recover the Cobb-Douglas solution when σ = 1 (in which case s_K = α). When σ → ∞, which is the case when perfect substitutes exist, we find that dw/dx = 0: there is no effect of changes in capital taxes on wages. And when σ → 0, which is the case when perfect complements exist, we find that dw/dx = -1: a cut in capital taxes increases wages by exactly one dollar.

Criticisms and defences

Adequacy of mathematics for qualitative and complicated economics

The Austrian school, while making many of the same normative economic arguments as mainstream economists from marginalist traditions such as the Chicago school, differs methodologically from mainstream neoclassical schools of economics, in particular in its sharp critique of the mathematization of economics. [130] Friedrich Hayek contended that the use of formal techniques projects a scientific exactness that does not appropriately account for informational limitations faced by real economic agents. [131]

In an interview in 1999, the economic historian Robert Heilbroner stated: [132]

I guess the scientific approach began to penetrate and soon dominate the profession in the past twenty to thirty years. This came about in part because of the "invention" of mathematical analysis of various kinds and, indeed, considerable improvements in it. This is the age in which we have not only more data but more sophisticated use of data. So there is a strong feeling that this is a data-laden science and a data-laden undertaking, which, by virtue of the sheer numerics, the sheer equations, and the sheer look of a journal page, bears a certain resemblance to science . . . That one central activity looks scientific. I understand that. I think that is genuine. It approaches being a universal law. But resembling a science is different from being a science.

Heilbroner stated that "some/much of economics is not naturally quantitative and therefore does not lend itself to mathematical exposition." [133]

Testing predictions of mathematical economics

Philosopher Karl Popper discussed the scientific standing of economics in the 1940s and 1950s. He argued that mathematical economics suffered from being tautological. In other words, insofar as economics became a mathematical theory, mathematical economics ceased to rely on empirical refutation but rather relied on mathematical proofs and disproof. [134] According to Popper, falsifiable assumptions can be tested by experiment and observation while unfalsifiable assumptions can be explored mathematically for their consequences and for their consistency with other assumptions. [135]

Sharing Popper's concerns about assumptions in economics generally, and not just mathematical economics, Milton Friedman declared that "all assumptions are unrealistic". Friedman proposed judging economic models by their predictive performance rather than by the match between their assumptions and reality. [136]

Mathematical economics as a form of pure mathematics

Considering mathematical economics, J.M. Keynes wrote in The General Theory: [137]

It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis ... that they expressly assume strict independence between the factors involved and lose their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating and know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are merely concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.

Defense of mathematical economics

In response to these criticisms, Paul Samuelson argued that mathematics is a language, repeating a thesis of Josiah Willard Gibbs. In economics, the language of mathematics is sometimes necessary for representing substantive problems. Moreover, mathematical economics has led to conceptual advances in economics. [138] In particular, Samuelson gave the example of microeconomics, writing that "few people are ingenious enough to grasp [its] more complex parts... without resorting to the language of mathematics, while most ordinary individuals can do so fairly easily with the aid of mathematics." [139]

Some economists state that mathematical economics deserves support just like other forms of mathematics, particularly its neighbors in mathematical optimization and mathematical statistics and, increasingly, in theoretical computer science. Mathematical economics and other mathematical sciences have a history in which theoretical advances have regularly contributed to the reform of the more applied branches of economics. In particular, following the program of John von Neumann, game theory now provides the foundations for describing much of applied economics, from statistical decision theory (as "games against nature") and econometrics to general equilibrium theory and industrial organization. In the last decade, with the rise of the internet, mathematical economists, optimization experts, and computer scientists have worked on problems of pricing for online services; their contributions use mathematics from cooperative game theory, nondifferentiable optimization, and combinatorial games.

Robert M. Solow concluded that mathematical economics was the core "infrastructure" of contemporary economics:

Economics is no longer a fit conversation piece for ladies and gentlemen. It has become a technical subject. Like any technical subject it attracts some people who are more interested in the technique than the subject. That is too bad, but it may be inevitable. In any case, do not kid yourself: the technical core of economics is indispensable infrastructure for the political economy. That is why, if you consult [a reference in contemporary economics] looking for enlightenment about the world today, you will be led to technical economics, or history, or nothing at all. [140]

Mathematical economists

Prominent mathematical economists include the following.

19th century

20th century

See also

References

  1. Elaborated at the JEL classification codes, Mathematical and quantitative methods JEL: C Subcategories.
  2. 1 2 Chiang, Alpha C.; Kevin Wainwright (2005). Fundamental Methods of Mathematical Economics. McGraw-Hill Irwin. pp. 3–4. ISBN   978-0-07-010910-0. TOC. Archived 2012-03-08 at the Wayback Machine
  3. Debreu, Gérard ([1987] 2008). "mathematical economics", section II, The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2013-05-16 at the Wayback Machine. Republished with revisions from 1986, "Theoretic Models: Mathematical Form and Economic Content", Econometrica, 54(6), pp. 1259–1270. Archived 2017-08-05 at the Wayback Machine.
  4. Varian, Hal (1997). "What Use Is Economic Theory?" in A. D'Autume and J. Cartelier, ed., Is Economics Becoming a Hard Science?, Edward Elgar. Pre-publication PDF. Archived 2006-06-25 at the Wayback Machine Retrieved 2008-04-01.
  5. Description Archived 2023-07-01 at the Wayback Machine and Contents Archived 2023-07-01 at the Wayback Machine .
  6. Chiang, Alpha C. (1992). Elements of Dynamic Optimization, Waveland. TOC & Amazon.com link Archived 2016-03-03 at the Wayback Machine to inside, first pp.
  7. 1 2 3 4 Samuelson, Paul (1947) [1983]. Foundations of Economic Analysis. Harvard University Press. ISBN   978-0-674-31301-9.
  8. 1 2 3 4
  9. Schumpeter, J.A. (1954). Elizabeth B. Schumpeter (ed.). History of Economic Analysis. New York: Oxford University Press. pp. 209–212. ISBN   978-0-04-330086-2. OCLC   13498913. Archived from the original on 2023-07-01. Retrieved 2020-05-28.
  10. Schumpeter (1954) p. 212-215
  11. Schnieder, Erich (1934). "Johann Heinrich von Thünen". Econometrica . 2 (1): 1–12. doi:10.2307/1907947. ISSN   0012-9682. JSTOR   1907947. OCLC   35705710.
  12. Schumpeter (1954) p. 465-468
  13. Philip Mirowski, 1991. "The When, the How and the Why of Mathematical Expression in the History of Economics Analysis", Journal of Economic Perspectives, 5(1) pp. 145-157.
  14. Weintraub, E. Roy (2008). "mathematics and economics", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2013-05-16 at the Wayback Machine .
  15. Jevons, W.S. (1866). "Brief Account of a General Mathematical Theory of Political Economy", Journal of the Royal Statistical Society, XXIX (June) pp. 282–87. Read in Section F of the British Association, 1862. PDF.
  16. Jevons, W. Stanley (1871). The Theory of Political Economy, pp. 4, 25. Macmillan.
  17. See the preface Archived 2023-07-01 at the Wayback Machine to Irving Fisher's 1897 work, A brief introduction to the infinitesimal calculus: designed especially to aid in reading mathematical economics and statistics.
  18. 1 2 Sheila C., Dow (1999-05-21). "The Use of Mathematics in Economics". ESRC Public Understanding of Mathematics Seminar. Birmingham: Economic and Social Research Council. Retrieved 2008-07-06.
  19. While the concept of cardinality has fallen out of favor in neoclassical economics, the differences between cardinal utility and ordinal utility are minor for most applications.
  20. 1 2 Nicola, PierCarlo (2000). Mainstream Mathematical Economics in the 20th Century. Springer. p. 4. ISBN 978-3-540-67084-1. Archived from the original on 2023-07-01. Retrieved 2008-08-21.
  21. Augustin Cournot (1838, tr. 1897) Researches into the Mathematical Principles of Wealth. Links to description Archived 2023-07-01 at the Wayback Machine and chapters. Archived 2023-07-01 at the Wayback Machine
  22. 1 2 Hotelling, Harold (1990). "Stability in Competition". In Darnell, Adrian C. (ed.). The Collected Economics Articles of Harold Hotelling. Springer. pp. 51, 52. ISBN   978-3-540-97011-8. OCLC   20217006. Archived from the original on 2023-07-01. Retrieved 2008-08-21.
  23. "Antoine Augustin Cournot, 1801-1877". The History of Economic Thought Website. The New School for Social Research. Archived from the original on 2000-07-09. Retrieved 2008-08-21.
  24. Gibbons, Robert (1992). Game Theory for Applied Economists. Princeton, New Jersey: Princeton University Press. pp. 14, 15. ISBN   978-0-691-00395-5.
  25. Nicola, p. 9-12
  26. Edgeworth, Francis Ysidro (September 5, 1889). "The Mathematical Theory of Political Economy: Review of Léon Walras, Éléments d'économie politique pure" (PDF). Nature . 40 (1036): 434–436. doi:10.1038/040434a0. ISSN   0028-0836. S2CID   21004543. Archived from the original (PDF) on April 11, 2003. Retrieved 2008-08-21.
  27. Nicholson, Walter; Snyder, Christopher, p. 350-353.
  28. Dixon, Robert. "Walras Law and Macroeconomics". Walras Law Guide. Department of Economics, University of Melbourne. Archived from the original on April 17, 2008. Retrieved 2008-09-28.
  29. Dixon, Robert. "A Formal Proof of Walras Law". Walras Law Guide. Department of Economics, University of Melbourne. Archived from the original on April 30, 2008. Retrieved 2008-09-28.
  30. Rima, Ingrid H. (1977). "Neoclassicism and Dissent 1890-1930". In Weintraub, Sidney (ed.). Modern Economic Thought. University of Pennsylvania Press. pp. 10, 11. ISBN   978-0-8122-7712-8. Archived from the original on 2023-07-01. Retrieved 2021-05-31.
  31. Heilbroner, Robert L. (1999) [1953]. The Worldly Philosophers (Seventh ed.). New York: Simon and Schuster. pp. 172–175, 313. ISBN   978-0-684-86214-9. Archived from the original on 2023-07-01. Retrieved 2020-05-28.
  32. Edgeworth, Francis Ysidro (1961) [1881]. Mathematical Psychics. London: Kegan Paul [A. M. Kelley]. pp. 15–19. Archived from the original on 2023-07-01. Retrieved 2020-05-28.
  33. Nicola, p. 14, 15, 258-261
  34. Bowley, Arthur Lyon (1960) [1924]. The Mathematical Groundwork of Economics: an Introductory Treatise. Oxford: Clarendon Press [Kelly]. Archived from the original on 2023-07-01. Retrieved 2020-05-28.
  35. Gillies, D. B. (1969). "Solutions to general non-zero-sum games". In Tucker, A. W.; Luce, R. D. (eds.). Contributions to the Theory of Games. Annals of Mathematics. Vol. 40. Princeton, New Jersey: Princeton University Press. pp. 47–85. ISBN   978-0-691-07937-0. Archived from the original on 2023-07-01. Retrieved 2020-05-28.
  36. Moss, Lawrence S. (2003). "The Seligman-Edgeworth Debate about the Analysis of Tax Incidence: The Advent of Mathematical Economics, 1892–1910". History of Political Economy. 35 (2): 207, 212, 219, 234–237. doi:10.1215/00182702-35-2-205. ISSN   0018-2702.
  37. Hotelling, Harold (1990). "Note on Edgeworth's Taxation Phenomenon and Professor Garver's Additional Condition on Demand Functions". In Darnell, Adrian C. (ed.). The Collected Economics Articles of Harold Hotelling. Springer. pp. 94–122. ISBN   978-3-540-97011-8. OCLC   20217006. Archived from the original on 2023-07-01. Retrieved 2008-08-26.
  38. Herstein, I.N. (October 1953). "Some Mathematical Methods and Techniques in Economics". Quarterly of Applied Mathematics. 11 (3): 249–262. doi:10.1090/qam/60205. ISSN 1552-4485.
  39. Nicholson, Walter; Snyder, Christopher (2007). "General Equilibrium and Welfare". Intermediate Microeconomics and Its Applications (10th ed.). Thompson. pp. 364, 365. ISBN   978-0-324-31968-2.
  40. Blaug (2007), p. 185, 187
  41. Metzler, Lloyd (1948). "Review of Foundations of Economic Analysis". American Economic Review. 38 (5): 905–910. ISSN   0002-8282. JSTOR   1811704.
  42. 1 2 3 Neumann, J. von (1937). "Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes", Ergebnisse eines Mathematischen Kolloquiums, 8, pp. 73–83, translated and published in 1945–46 as "A Model of General Equilibrium", Review of Economic Studies, 13, pp. 1–9.
  43. For this problem to have a unique solution, it suffices that the nonnegative matrices A and B satisfy an irreducibility condition, generalizing that of the Perron–Frobenius theorem of nonnegative matrices, which considers the (simplified) eigenvalue problem
    (A − λI) q = 0,
    where the nonnegative matrix A must be square and where the diagonal matrix  I is the identity matrix. Von Neumann's irreducibility condition was called the "whales and wranglers" hypothesis by David Champernowne, who provided a verbal and economic commentary on the English translation of von Neumann's article. Von Neumann's hypothesis implied that every economic process used a positive amount of every economic good. Weaker "irreducibility" conditions were given by David Gale and by John Kemeny, Oskar Morgenstern, and Gerald L. Thompson in the 1950s and then by Stephen M. Robinson in the 1970s.
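    As a purely illustrative aside on the footnote's algebra: the minimal Python sketch below uses a small hypothetical nonnegative matrix (not data from von Neumann's article) and power iteration to recover the dominant Perron–Frobenius root λ and a nonnegative eigenvector q solving (A − λI)q = 0.

```python
import numpy as np

# Hypothetical 2x2 nonnegative (irreducible) matrix, chosen only for illustration.
A = np.array([[0.4, 0.3],
              [0.2, 0.5]])

# Power iteration: repeated multiplication by A converges (for this matrix) to the
# eigenvector of the dominant eigenvalue, which the Perron-Frobenius theorem
# guarantees is real and positive, with a nonnegative eigenvector.
q = np.ones(A.shape[0])
for _ in range(1000):
    q = A @ q
    q /= np.linalg.norm(q)

lam = q @ A @ q / (q @ q)  # Rayleigh-quotient estimate of the Perron root
print(lam, q)              # roughly 0.7 and an eigenvector proportional to (1, 1)
```

    For this particular matrix the iteration converges to λ ≈ 0.7 with q ∝ (1, 1); the irreducibility conditions discussed in the footnote play the analogous role of guaranteeing a unique solution in the full, generalized problem.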
  44. David Gale. The theory of linear economic models. McGraw-Hill, New York, 1960.
  45. Morgenstern, Oskar; Thompson, Gerald L. (1976). Mathematical theory of expanding and contracting economies. Lexington Books. Lexington, Massachusetts: D. C. Heath and Company. pp. xviii+277.
  46. Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & sons, 1998, ISBN   0-471-98232-6.
    • Rockafellar, R. Tyrrell (1967). Monotone processes of convex and concave type. Memoirs of the American Mathematical Society. Providence, R.I.: American Mathematical Society. pp. i+74.
    • Rockafellar, R. T. (1974). "Convex algebra and duality in dynamic models of production". In Josef Loz; Maria Loz (eds.). Mathematical models in economics (Proc. Sympos. and Conf. von Neumann Models, Warsaw, 1972). Amsterdam: North-Holland and Polish Academy of Sciences (PAN). pp. 351–378.
    • Rockafellar, R. T. (1997) [1970]. Convex analysis. Princeton, New Jersey: Princeton University Press.
  47. Arrow, Kenneth; Samuelson, Paul; Harsanyi, John; Afriat, Sidney; Thompson, Gerald L.; Kaldor, Nicholas (1989). Mohammed Dore; Sukhamoy Chakravarty; Richard Goodwin (eds.). John Von Neumann and modern economics. Oxford:Clarendon. p. 261.
  48. Chapter 9.1 "The von Neumann growth model" (pages 277–299): Yinyu Ye. Interior point algorithms: Theory and analysis. Wiley. 1997.
  49. Screpanti, Ernesto; Zamagni, Stefano (1993). An Outline of the History of Economic Thought. New York: Oxford University Press. pp. 288–290. ISBN   978-0-19-828370-6. OCLC   57281275.
  50. David Gale. The theory of linear economic models. McGraw-Hill, New York, 1960.
  51. Morgenstern, Oskar; Thompson, Gerald L. (1976). Mathematical theory of expanding and contracting economies. Lexington Books. Lexington, Massachusetts: D. C. Heath and Company. pp. xviii+277.
  52. "The Nature of Mathematical Programming", Mathematical Programming Glossary, INFORMS Computing Society.
  53. 1 2 Schmedders, Karl (2008). "numerical optimization methods in economics", The New Palgrave Dictionary of Economics, 2nd Edition, v. 6, pp. 138–57. Abstract. Archived 2017-08-11 at the Wayback Machine
  54. Robbins, Lionel (1935, 2nd ed.). An Essay on the Nature and Significance of Economic Science , Macmillan, p. 16.
  55. Blume, Lawrence E. (2008). "duality", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2017-02-02 at the Wayback Machine
  56. 1 2 Dixit, A. K. ([1976] 1990). Optimization in Economic Theory, 2nd ed., Oxford. Description Archived 2023-07-01 at the Wayback Machine and contents preview Archived 2023-07-01 at the Wayback Machine .
    • Geanakoplos, John ([1987] 2008). "Arrow–Debreu model of general equilibrium", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine .
    • Arrow, Kenneth J., and Gérard Debreu (1954). "Existence of an Equilibrium for a Competitive Economy", Econometrica 22(3), pp. 265-290.

  57.  
    • Kubler, Felix (2008). "computation of general equilibria (new developments)", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2017-08-11 at the Wayback Machine
  58. Nicola, p. 133
  59. Dorfman, Robert, Paul A. Samuelson, and Robert M. Solow (1958). Linear Programming and Economic Analysis. McGraw–Hill. Chapter-preview links. Archived 2023-07-01 at the Wayback Machine
  60. M. Padberg, Linear Optimization and Extensions, Second Edition, Springer-Verlag, 1999.
  61. Dantzig, George B. ([1987] 2008). "linear programming", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine .
    • Intriligator, Michael D. (2008). "nonlinear programming", The New Palgrave Dictionary of Economics, 2nd Edition. TOC Archived 2016-03-04 at the Wayback Machine .
    • Blume, Lawrence E. (2008). "convex programming", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-10-18 at the Wayback Machine.
    • Kuhn, H. W.; Tucker, A. W. (1951). "Nonlinear programming". Proceedings of 2nd Berkeley Symposium. Berkeley: University of California Press. pp. 481–492.
    • Bertsekas, Dimitri P. (1999). Nonlinear Programming (Second ed.). Cambridge, Massachusetts.: Athena Scientific. ISBN   978-1-886529-00-7.
    • Vapnyarskii, I.B. (2001) [1994], "Lagrange multipliers", Encyclopedia of Mathematics , EMS Press .
    • Lasdon, Leon S. (1970). Optimization theory for large systems. Macmillan series in operations research. New York: The Macmillan Company. pp. xi+523. MR   0337317.
    • Lasdon, Leon S. (2002). Optimization theory for large systems (reprint of the 1970 Macmillan ed.). Mineola, New York: Dover Publications, Inc. pp. xiii+523. MR   1888251.
    • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). "XII Abstract duality for practitioners". Convex analysis and minimization algorithms, Volume II: Advanced theory and bundle methods. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Vol. 306. Berlin: Springer-Verlag. pp. 136–193 (and Bibliographical comments on pp. 334–335). ISBN   978-3-540-56852-0. MR   1295240.
  62. 1 2 Lemaréchal, Claude (2001). "Lagrangian relaxation". In Michael Jünger; Denis Naddef (eds.). Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. Lecture Notes in Computer Science. Vol. 2241. Berlin: Springer-Verlag. pp. 112–156. doi:10.1007/3-540-45586-8_4. ISBN   978-3-540-42877-0. MR   1900016. S2CID   9048698.
  63. Pontryagin, L. S.; Boltyanski, V. G.; Gamkrelidze, R. V.; Mischenko, E. F. (1962). The Mathematical Theory of Optimal Processes. New York: Wiley. ISBN 9782881240775. Archived from the original on 2023-07-01. Retrieved 2015-06-27.
  64. Stokey, Nancy L. and Robert E. Lucas with Edward Prescott (1989). Recursive Methods in Economic Dynamics, Harvard University Press, chapter 5. Description Archived 2017-08-11 at the Wayback Machine and chapter-preview links Archived 2023-07-01 at the Wayback Machine.
  65. Malliaris, A.G. (2008). "stochastic optimal control", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-10-18 at the Wayback Machine .
  66. Andrew McLennan, 2008. "fixed point theorems", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2016-03-06 at the Wayback Machine .
  67. Weintraub, E. Roy (1977). "General Equilibrium Theory". In Weintraub, Sidney (ed.). Modern Economic Thought. University of Pennsylvania Press. pp. 107–109. ISBN   978-0-8122-7712-8. Archived from the original on 2023-07-01. Retrieved 2020-05-28.
  68. 1 2 3 Kantorovich, Leonid, and Victor Polterovich (2008). "Functional analysis", in S. Durlauf and L. Blume, ed., The New Palgrave Dictionary of Economics, 2nd Edition, Palgrave Macmillan. Abstract. Archived 2016-03-03 at the Wayback Machine.
  69. Kantorovich, L. V. (1990). "My journey in science (supposed report to the Moscow Mathematical Society)" [expanding Russian Math. Surveys 42 (1987), no. 2, pp. 233–270]. In Lev J. Leifman (ed.). Functional analysis, optimization, and mathematical economics: A collection of papers dedicated to the memory of Leonid Vitalʹevich Kantorovich. New York: The Clarendon Press, Oxford University Press. pp. 8–45. ISBN 978-0-19-505729-4. MR 0898626.
  70. Page 406: Polyak, B. T. (2002). "History of mathematical programming in the USSR: Analyzing the phenomenon (Chapter 3 The pioneer: L. V. Kantorovich, 1912–1986, pp. 405–407)". Mathematical Programming. Series B. 91 (ISMP 2000, Part 1 (Atlanta, GA), number 3): 401–416. doi:10.1007/s101070100258. MR   1888984. S2CID   13089965.
  71. "Leonid Vitaliyevich Kantorovich — Prize Lecture ("Mathematics in economics: Achievements, difficulties, perspectives")". Nobelprize.org. Archived from the original on 14 December 2010. Retrieved 12 Dec 2010.
  72. Aliprantis, Charalambos D.; Brown, Donald J.; Burkinshaw, Owen (1990). Existence and optimality of competitive equilibria. Berlin: Springer–Verlag. pp. xii+284. ISBN   978-3-540-52866-1. MR   1075992.
  73. Rockafellar, R. Tyrrell. Conjugate duality and optimization. Lectures given at the Johns Hopkins University, Baltimore, Maryland, June, 1973. Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 16. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1974. vi+74 pp.
  74. Lester G. Telser and Robert L. Graves Functional Analysis in Mathematical Economics: Optimization Over Infinite Horizons 1972. University of Chicago Press, 1972, ISBN   978-0-226-79190-6.
  75. 1 2 Neumann, John von, and Oskar Morgenstern (1944) Theory of Games and Economic Behavior , Princeton.
  76. Mas-Colell, Andreu (1985). The Theory of general economic equilibrium: A differentiable approach. Econometric Society monographs. Cambridge UP. ISBN   978-0-521-26514-0. MR   1113262.
  77. Yves Balasko. Foundations of the Theory of General Equilibrium, 1988, ISBN   0-12-076975-1.
  78. Creedy, John (2008). "Edgeworth, Francis Ysidro (1845–1926)", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine.
  79. From The New Palgrave Dictionary of Economics (2008), 2nd Edition:
    • Tirole, Jean (1988). The Theory of Industrial Organization, MIT Press. Description and chapter-preview links, pp. vii-ix, "General Organization", pp. 5-6, and "Non-Cooperative Game Theory: A User's Guide Manual", ch. 11, pp. 423-59.
    • Bagwell, Kyle, and Asher Wolinsky (2002). "Game theory and Industrial Organization", ch. 49, Handbook of Game Theory with Economic Applications, v. 3, pp. 1851–1895. Archived 2016-01-02 at the Wayback Machine.
    • Shubik, Martin (1981). "Game Theory Models and Methods in Political Economy", in Handbook of Mathematical Economics, v. 1, pp. 285–330. doi:10.1016/S1573-4382(81)01011-4.
  80. 1 2
  81. 1 2
    • Halpern, Joseph Y. (2008). "computer science and game theory", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-11-05 at the Wayback Machine.
    • Shoham, Yoav (2008). "Computer Science and Game Theory", Communications of the ACM, 51(8), pp. 75–79. Archived 2012-04-26 at the Wayback Machine.
    • Roth, Alvin E. (2002). "The Economist as Engineer: Game Theory, Experimentation, and Computation as Tools for Design Economics", Econometrica, 70(4), pp. 1341–1378.
    • Kirman, Alan (2008). "economy as a complex system", The New Palgrave Dictionary of Economics , 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine .
    • Tesfatsion, Leigh (2003). "Agent-based Computational Economics: Modeling Economies as Complex Adaptive Systems", Information Sciences, 149(4), pp. 262-268.
  82. Scott E. Page (2008), "agent-based models", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2018-02-10 at the Wayback Machine .
    • Judd, Kenneth L. (2006). "Computationally Intensive Analyses in Economics", Handbook of Computational Economics, v. 2, ch. 17, Introduction, p. 883. Pp. 881- 893. Pre-pub PDF Archived 2022-01-21 at the Wayback Machine .
        • _____ (1998). Numerical Methods in Economics, MIT Press. Links to description and chapter previews.
    • Tesfatsion, Leigh (2002). "Agent-Based Computational Economics: Growing Economies from the Bottom Up", Artificial Life, 8(1), pp.55-82. Abstract Archived 2020-03-06 at the Wayback Machine and pre-pub PDF.
        • _____ (1997). "How Economists Can Get Alife", in W. B. Arthur, S. Durlauf, and D. Lane, eds., The Economy as an Evolving Complex System, II, pp. 533–564. Addison-Wesley. Pre-pub PDF Archived 2012-04-15 at the Wayback Machine .
  83. Tesfatsion, Leigh (2006), "Agent-Based Computational Economics: A Constructive Approach to Economic Theory", ch. 16, Handbook of Computational Economics, v. 2, part 2, ACE study of economic system. Abstract Archived 2018-08-09 at the Wayback Machine and pre-pub PDF Archived 2017-08-11 at the Wayback Machine .
  84. Axelrod, Robert (1997). The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton. Description Archived 2018-01-02 at the Wayback Machine , contents Archived 2018-01-02 at the Wayback Machine , and preview Archived 2023-07-01 at the Wayback Machine .
  85. Klosa, Tomas B., and Bart Nooteboom, 2001. "Agent-based Computational Transaction Cost Economics", Journal of Economic Dynamics and Control 25(3–4), pp. 503–52. Abstract. Archived 2020-06-22 at the Wayback Machine
  86. Axtell, Robert (2005). "The Complexity of Exchange", Economic Journal, 115(504, Features), pp. F193-F210 Archived 2017-08-08 at the Wayback Machine .
  87. Sandholm, Tuomas W., and Victor R. Lesser (2001)."Leveled Commitment Contracts and Strategic Breach", Games and Economic Behavior, 35(1-2), pp. 212-270 Archived 2020-12-04 at the Wayback Machine .
  88. Tesfatsion, Leigh (2006), "Agent-Based Computational Economics: A Constructive Approach to Economic Theory", ch. 16, Handbook of Computational Economics, v. 2, pp. 832–865. Abstract Archived 2018-08-09 at the Wayback Machine and pre-pub PDF Archived 2017-08-11 at the Wayback Machine .
  89. Smith, Vernon L. (2008). "experimental economics", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2012-01-19 at the Wayback Machine .
  90. Duffy, John (2006). "Agent-Based Models and Human Subject Experiments", ch. 19, Handbook of Computational Economics, v. 2, pp. 949–1011. Abstract Archived 2015-09-24 at the Wayback Machine.
    • Namatame, Akira, and Takao Terano (2002). "The Hare and the Tortoise: Cumulative Progress in Agent-based Simulation", in Agent-based Approaches in Economic and Social Complex Systems, pp. 3–14, IOS Press. Description Archived 2012-04-05 at the Wayback Machine.
    • Fagiolo, Giorgio, Alessio Moneta, and Paul Windrum (2007). "A Critical Guide to Empirical Validation of Agent-Based Models in Economics: Methodologies, Procedures, and Open Problems", Computational Economics, 30, pp. 195–226. Archived 2023-07-01 at the Wayback Machine.
  91. links.
  92. Brockhaus, Oliver; Farkas, Michael; Ferraris, Andrew; Long, Douglas; Overhaus, Marcus (2000). Equity Derivatives and Market Risk Models. Risk Books. pp. 13–17. ISBN 978-1-899332-87-8. Archived from the original on 2023-07-01. Retrieved 2008-08-17.
  93. Liner, Gaines H. (2002). "Core Journals in Economics". Economic Inquiry. 40 (1): 140. doi:10.1093/ei/40.1.138.
  94. Stigler, George J.; Stigler, Steven J.; Friedland, Claire (April 1995). "The Journals of Economics". The Journal of Political Economy . 103 (2): 331–359. doi:10.1086/261986. ISSN   0022-3808. JSTOR   2138643. S2CID   154780520.
  95. Stigler et al. reviewed journal articles in core economic journals (as defined by the authors, generally meaning non-specialist journals) throughout the 20th century. Journal articles that at any point used geometric representation or mathematical notation were recorded as having that level of mathematics as their "highest level of mathematical technique". The authors refer to "verbal techniques" as those which convey the subject of the piece without notation from geometry, algebra, or calculus.
  96. Stigler et al., p. 342
  97. Sutter, Daniel and Rex Pjesky. "Where Would Adam Smith Publish Today?: The Near Absence of Math-free Research in Top Journals" (May 2007). Archived 2017-10-10 at the Wayback Machine
  98. Arrow, Kenneth J. (April 1960). "The Work of Ragnar Frisch, Econometrician". Econometrica . 28 (2): 175–192. doi:10.2307/1907716. ISSN   0012-9682. JSTOR   1907716.
  99. Bjerkholt, Olav (July 1995). "Ragnar Frisch, Editor of Econometrica 1933-1954". Econometrica . 63 (4): 755–765. doi:10.2307/2171799. ISSN   0012-9682. JSTOR   1906940.
  100. Lange, Oskar (1945). "The Scope and Method of Economics". Review of Economic Studies. 13 (1): 19–32. doi:10.2307/2296113. ISSN   0034-6527. JSTOR   2296113. S2CID   4140287.
  101. Aldrich, John (January 1989). "Autonomy". Oxford Economic Papers. 41 (1, History and Methodology of Econometrics): 15–34. doi:10.1093/oxfordjournals.oep.a041889. ISSN   0030-7653. JSTOR   2663180.
  102. Epstein, Roy J. (1987). A History of Econometrics. Contributions to Economic Analysis. North-Holland. pp. 13–19. ISBN   978-0-444-70267-8. OCLC   230844893.
  103. Colander, David C. (2004). "The Strange Persistence of the IS-LM Model". History of Political Economy. 36 (Annual Supplement): 305–322. CiteSeerX   10.1.1.692.6446 . doi:10.1215/00182702-36-Suppl_1-305. ISSN   0018-2702. S2CID   6705939.
  104. Brems, Hans (October 1975). "Marshall on Mathematics". Journal of Law and Economics. 18 (2): 583–585. doi:10.1086/466825. ISSN   0022-2186. JSTOR   725308. S2CID   154881432.
  105. Frigg, R.; Hartman, S. (February 27, 2006). Edward N. Zalta (ed.). Models in Science. Stanford Encyclopedia of Philosophy. Stanford, California: The Metaphysics Research Lab. ISSN   1095-5054. Archived from the original on 2007-06-09. Retrieved 2008-08-16.
  106. "Greg Mankiw's Blog: An Exercise for My Readers". Archived from the original on 2019-08-07. Retrieved 2019-08-07.
  107. Cochrane, John H. (2017-10-21). "The Grumpy Economist: Greg's algebra". The Grumpy Economist. Archived from the original on 2023-07-01. Retrieved 2019-08-07.
  108. Ekelund, Robert; Hébert, Robert (2014). A History of Economic Theory & Method (6th ed.). Long Grove, IL: Waveland Press. pp. 574–575.
  109. Hayek, Friedrich (September 1945). "The Use of Knowledge in Society". American Economic Review. 35 (4): 519–530. JSTOR   1809376.
  110. Heilbroner, Robert (May–June 1999). "The end of the Dismal Science?". Challenge Magazine. Archived from the original on 2008-12-10.
  111. Beed & Kane, p. 584
  112. Boland, L. A. (2007). "Seven Decades of Economic Methodology". In I. C. Jarvie; K. Milford; D.W. Miller (eds.). Karl Popper:A Centenary Assessment. London: Ashgate Publishing. p. 219. ISBN   978-0-7546-5375-2 . Retrieved 2008-06-10.
  113. Beed, Clive; Kane, Owen (1991). "What Is the Critique of the Mathematization of Economics?". Kyklos. 44 (4): 581–612. doi:10.1111/j.1467-6435.1991.tb01798.x.
  114. Friedman, Milton (1953). Essays in Positive Economics . Chicago: University of Chicago Press. pp.  30, 33, 41. ISBN   978-0-226-26403-5.
  115. Keynes, John Maynard (1936). The General Theory of Employment, Interest and Money. Cambridge: Macmillan. p. 297. ISBN   978-0-333-10729-4. Archived from the original on 2019-05-28. Retrieved 2009-04-30.
  116. Paul A. Samuelson (1952). "Economic Theory and Mathematics — An Appraisal", American Economic Review, 42(2), pp. 56, 64-65 (press +).
  117. D.W. Bushaw and R.W. Clower (1957). Introduction to Mathematical Economics, p. vii. Archived 2022-03-18 at the Wayback Machine
  118. Solow, Robert M. (20 March 1988). "The Wide, Wide World Of Wealth (The New Palgrave: A Dictionary of Economics. Edited by John Eatwell, Murray Milgate and Peter Newman. Four volumes. 4,103 pp. New York: Stockton Press. $650)". New York Times. Archived from the original on 1 August 2017. Retrieved 11 February 2017.

Further reading