Computable general equilibrium (CGE) models are a class of economic models that use actual economic data to estimate how an economy might react to changes in policy, technology or other external factors. CGE models are also referred to as AGE (applied general equilibrium) models. A CGE model consists of equations describing model variables and a database (usually very detailed) consistent with these model equations. The equations tend to be neoclassical in spirit, often assuming cost-minimizing behaviour by producers, average-cost pricing, and household demands based on optimizing behaviour.
CGE models are useful whenever we wish to estimate the effect of changes in one part of the economy upon the rest. They have been used widely to analyse trade policy. More recently, CGE has been a popular way to estimate the economic effects of measures to reduce greenhouse gas emissions.
Although the equations of a CGE model tend to be neoclassical in spirit, most CGE models conform only loosely to the theoretical general equilibrium paradigm. For example, they may allow for:
CGE models always contain more variables than equations—so some variables must be set outside the model. These variables are termed exogenous; the remainder, determined by the model, is called endogenous. The choice of which variables are to be exogenous is called the model closure, and may give rise to controversy. For example, some modelers hold employment and the trade balance fixed; others allow these to vary. Variables defining technology, consumer tastes, and government instruments (such as tax rates) are usually exogenous.
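The idea of closure can be sketched with a deliberately tiny model (entirely hypothetical; the equations, names, and numbers below are illustrative assumptions, not taken from any actual CGE model):

```python
# A toy two-equation model:
#   Y = C + I     (income identity)
#   C = 0.8 * Y   (consumption function)
# Two equations, three variables (Y, C, I): one variable must be set
# exogenously, and that choice is the model closure.

def solve(closure, value):
    """Solve the toy model under a given closure (which variable is exogenous)."""
    if closure == "I":            # investment exogenous: Y and C adjust
        I = value
        Y = I / (1 - 0.8)         # from Y = 0.8*Y + I
        C = 0.8 * Y
    elif closure == "Y":          # income exogenous: I becomes the residual
        Y = value
        C = 0.8 * Y
        I = Y - C
    else:
        raise ValueError("unknown closure")
    return Y, C, I
```

Both closures are internally consistent; they simply assign the roles of exogenous and endogenous differently, which is why the choice can be controversial.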
A CGE model database consists of:
CGE models are descended from the input–output models pioneered by Wassily Leontief, but assign a more important role to prices. Thus, where Leontief assumed that, say, a fixed amount of labour was required to produce a ton of iron, a CGE model would normally allow wage levels to (negatively) affect labour demands.
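The contrast can be sketched as follows (a toy illustration; the constant-elasticity functional form and all numbers are assumptions, not from the source):

```python
# Leontief fixed coefficients versus a stylized CGE-style,
# wage-sensitive labour demand.

def labour_leontief(output, labour_per_ton=0.5):
    # fixed amount of labour per ton of iron, regardless of the wage
    return labour_per_ton * output

def labour_cge(output, wage, labour_per_ton=0.5, base_wage=1.0, elasticity=0.5):
    # labour demand falls as the wage rises (constant-elasticity form)
    return labour_per_ton * output * (wage / base_wage) ** (-elasticity)
```

At the base wage the two coincide; when the wage quadruples, the Leontief demand is unchanged while the stylized CGE demand halves.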
CGE models also derive from the models constructed from 1960 onwards (usually by a foreign expert) for planning the economies of poorer countries. [2] [3] Compared to the Leontief model, development planning models focused more on constraints or shortages—of skilled labour, capital, or foreign exchange.
CGE modelling of richer economies descends from Leif Johansen's 1960 [4] MSG model of Norway, and the static model developed by the Cambridge Growth Project [5] in the UK. Both models were pragmatic in flavour, and traced variables through time. The Australian MONASH model [6] is a modern representative of this class. Perhaps the first CGE model similar to those of today was that of Taylor and Black (1974). [7]
CGE models are useful whenever we wish to estimate the effect of changes in one part of the economy upon the rest. For example, a tax on flour might affect bread prices, the CPI, and hence perhaps wages and employment.
CGE models have been used widely to analyse trade policy. Today there are many CGE models of different countries. One of the most well-known CGE models is global: the GTAP [8] model of world trade.
CGE models are useful for modelling the economies of countries for which time series data are scarce or not relevant (perhaps because of disturbances such as regime changes). Here, strong but reasonable assumptions embedded in the model must replace historical evidence. Thus developing economies are often analysed using CGE models, such as those based on the IFPRI template model. [9]
CGE models can specify consumer and producer behaviour and 'simulate' the effects of climate policy on various economic outcomes. They can show economic gains and losses across different groups (e.g., households that differ in income, or that live in different regions). The equations include assumptions about the behavioural responses of these different groups. As the prices paid for various outputs adjust, the direct burdens of a policy are shifted from one taxpayer to another. [10]
Many CGE models are comparative static: they model the reactions of the economy at only one point in time. For policy analysis, results from such a model are often interpreted as showing the reaction of the economy in some future period to one or a few external shocks or policy changes. That is, the results show the difference (usually reported in percent change form) between two alternative future states (with and without the policy shock). The process of adjustment to the new equilibrium, in particular the reallocation of labor and capital across sectors, usually is not explicitly represented in such a model.
In contrast, long-run models focus on adjustments to the underlying resource base when modeling policy changes. This can include dynamic adjustment to the labor supply, adjustments in installed and overall capital stocks, and even adjustment to overall productivity and market structure. There are two broad approaches followed in the policy literature to such long-run adjustment. One involves what is called "comparative steady state" analysis. Under such an approach, long-run or steady-state closure rules are used, under either forward-looking or recursive dynamic behavior, to solve for long-run adjustments. [11]
The alternative approach involves explicit modeling of dynamic adjustment paths. These models can seem more realistic, but are more challenging to construct and solve. They require, for instance, that future changes are predicted for all exogenous variables, not just those affected by a possible policy change. The dynamic elements may arise from partial adjustment processes or from stock/flow accumulation relations: between capital stocks and investment, and between foreign debt and trade deficits. However, there is a potential consistency problem, because the variables that change from one equilibrium solution to the next are not necessarily consistent with each other during the period of change. The modeling of the path of adjustment may involve forward-looking expectations, [12] where agents' expectations depend on the future state of the economy and it is necessary to solve for all periods simultaneously, leading to full multi-period dynamic CGE models. An alternative is recursive dynamics. Recursive-dynamic CGE models are those that can be solved sequentially (one period at a time); they assume that behaviour depends only on current and past states of the economy. Comparative steady-state analysis, in which a single period is solved for, is a special case of recursive-dynamic modeling over what can be multiple periods.
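The recursive-dynamic idea can be sketched as a period-by-period loop (a stylized one-sector toy; the Cobb-Douglas form, parameter values, and variable names are all assumptions, not from the source):

```python
# Each period, a static "equilibrium" is solved given the current capital
# stock; the stock is then updated before the next period is solved, so
# behaviour depends only on current and past states of the economy.

ALPHA, SAVE, DEPR = 0.3, 0.2, 0.05   # capital share, saving rate, depreciation

def solve_static(capital, labour=1.0):
    """Toy one-period 'equilibrium': Cobb-Douglas output and factor prices."""
    output = capital ** ALPHA * labour ** (1 - ALPHA)
    rental = ALPHA * output / capital        # marginal product of capital
    wage = (1 - ALPHA) * output / labour     # marginal product of labour
    return output, rental, wage

def simulate(periods, k0=1.0):
    """Solve sequentially, one period at a time."""
    k, path = k0, []
    for _ in range(periods):
        y, r, w = solve_static(k)
        path.append((k, y, r, w))
        k = (1 - DEPR) * k + SAVE * y        # stock/flow accumulation relation
    return path

path = simulate(50)
```

Starting below the steady state, the capital stock grows monotonically toward it; a forward-looking model would instead have to solve all periods simultaneously.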
CGE models typically involve numerous types of goods and economic agents; therefore, various economic variables and formulas are usually expressed in the form of vectors and matrices. This not only makes the formulas more concise and clear but also facilitates the use of analytical tools from linear algebra and matrix theory. John von Neumann's general equilibrium model and the structural equilibrium model are examples of matrix-form CGE models, which can be viewed as generalizations of eigenequations.
The eigenequations of a square matrix N are as follows:

w^T N = λ w^T
N z = λ z

where w and z are the left and right eigenvectors of the square matrix N, respectively, and λ is the eigenvalue.
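A quick numerical check (with a hypothetical 2×2 matrix; the symbols N, w, z, λ follow the description above):

```python
import numpy as np

# Left and right eigenvectors of a square matrix N.
N = np.array([[2.0, 1.0],
              [0.0, 3.0]])

vals, right = np.linalg.eig(N)      # columns satisfy N z = lambda z
lvals, left = np.linalg.eig(N.T)    # w^T N = lambda w^T  <=>  N^T w = lambda w
```

Each right eigenvector satisfies N z = λ z, and each left eigenvector (an eigenvector of the transpose) satisfies w^T N = λ w^T.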
The above eigenequations for the square matrix can be extended to the von Neumann general equilibrium model, with an input matrix A and an output matrix B: [13] [14]

p^T B = λ p^T A
B z = λ A z

where the economic meanings of p and z are the equilibrium prices of various goods and the equilibrium activity levels of various economic agents, respectively.
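These are generalized eigenproblems, which can be checked numerically (the 2×2 input matrix A and output matrix B below are hypothetical stand-ins, not the article's data):

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])   # inputs required per unit of each activity
B = np.array([[1.5, 0.5],
              [0.5, 1.2]])   # outputs produced per unit of each activity

# B z = lambda A z is a generalized eigenproblem; with A invertible it
# reduces to an ordinary one for inv(A) @ B.  The price equation
# p^T B = lambda p^T A reduces likewise via transposes.
vals_z, vecs_z = np.linalg.eig(np.linalg.inv(A) @ B)
vals_p, vecs_p = np.linalg.eig(np.linalg.inv(A).T @ B.T)
```

In an economic application one would select the eigenpair with non-negative prices and activity levels; the sketch above only demonstrates the algebraic structure.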
We can further extend the von Neumann general equilibrium model to the following structural equilibrium model with B(p, u, z) and A(p, u, z) as matrix-valued functions: [15]

p^T B(p, u, z) = p^T A(p, u, z)
B(p, u, z) z = A(p, u, z) z

where the economic meaning of u is the utility levels of various consumers. These two formulas respectively reflect the income-expenditure balance condition and the supply-demand balance condition in the equilibrium state. The structural equilibrium model can be solved using the GE package in R.
Below, we illustrate the above structural equilibrium model through a linear programming example, [16] with the following assumptions:
(1) There are 3 types of primary factors, with given quantities. These 3 primary factors can be used to produce a single type of product.
(2) There are 3 firms in the economy, each using different technologies to produce the same product. The quantities of the 3 factors required by each of the 3 firms for one day of production are shown in the columns of the following input coefficient matrix:
(3) The output from each of the 3 firms for one day of production can be represented by a vector.
We need to find the optimal numbers of production days for the three firms, which maximize total output. By solving the above linear programming problem, the optimal numbers of production days for the three firms are found to be 2, 0, and 8, respectively; and the corresponding total output is 280.
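The linear program has the form: maximize c·x subject to A x ≤ e and x ≥ 0, where x is the vector of production days. The article's own matrices are omitted above, so the sketch below uses stand-in numbers from a classic textbook LP that reproduces the stated solution (2, 0, and 8 days; total output 280); treat the data as an assumption, not the article's:

```python
import numpy as np
from scipy.optimize import linprog

# Stand-in data (assumed): columns of A give the three factors each firm
# needs for one day of production, e gives the factor endowments, and
# c gives each firm's output per day of production.
A = np.array([[8.0, 6.0, 1.0],
              [4.0, 2.0, 1.5],
              [2.0, 1.5, 0.5]])
e = np.array([48.0, 20.0, 8.0])
c = np.array([60.0, 30.0, 20.0])

# maximize c @ x  subject to  A @ x <= e, x >= 0
# (linprog minimizes, so the objective is negated)
res = linprog(-c, A_ub=A, b_ub=e, bounds=[(0, None)] * 3, method="highs")

days = res.x              # optimal days of production per firm
total_output = c @ days
```

With these data the optimizer returns production days of (2, 0, 8) and a total output of 280, matching the figures quoted in the text.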
Next, we transform this linear programming problem into a general equilibrium problem, with the following assumptions:
(1) There are 4 types of goods in the economy (i.e., the product and 3 primary factors) and 4 economic agents (i.e., 3 firms and 1 consumer).
(2) Firms use primary factors as inputs to produce the product. The input and output for one day of production are shown in the first 3 columns of the unit input matrix and the unit output matrix, respectively:
(3) The consumer demands only the product, as shown in the 4th column of the unit input matrix, where u represents the utility level (i.e., the amount of the product consumed).
(4) The consumer supplies the 3 primary factors, as shown in the 4th column of the unit output matrix.
We can express the CGE model using the following structural equilibrium model:
wherein p is the price vector, with the product used as the numeraire, and z is the activity level vector, composed of the production levels (i.e., days of production here) of the firms and the number of consumers.
The results obtained by solving this structural equilibrium model are the same as those from the optimization approach: the numbers of production days for the three firms are 2, 0, and 8, and the total output is 280.
Substituting the above calculation results into the structural equilibrium model confirms that both the income-expenditure balance condition and the supply-demand balance condition hold.
Early CGE models were often solved by a program custom-written for that particular model. Models were expensive to construct and sometimes appeared as a 'black box' to outsiders. Now, most CGE models are formulated and solved using one of the GAMS or GEMPACK software systems. AMPL, [17] Excel and MATLAB are also used. Use of such systems has lowered the cost of entry to CGE modelling; allowed model simulations to be independently replicated; and increased the transparency of the models.