Hedonic index

A hedonic index is any price index that uses information from hedonic regression, which describes how product prices can be explained by the products' characteristics. Hedonic price indexes have proved very useful when applied to calculate price indexes for information and communication products (e.g. personal computers) and for housing, [1] because they can successfully mitigate problems such as those that arise from the appearance of new goods and from rapid changes of quality.

Motivation

In the last two decades considerable attention has been drawn to the methods of computing price indexes. The Boskin Commission in 1996 asserted that there were biases in the price index: traditional matched-model indexes can substantially overestimate inflation, because they are unable to measure the impact of peculiarities of specific industries such as fast rotation of goods, large quality differences among products on the market, and short product life cycles. The Commission showed that the use of matched-model indexes (traditional price indexes) leads to an overestimation of inflation of 0.6% per year in the official US CPI (CPI-U). Similar results were obtained by Crawford for Canada, [3] by Shiratsuka for Japan, [4] and by Cunningham for the UK. [5] By reversing the hedonic methodology, and pending further disclosure from commercial sources, the bias has also been enumerated annually over five decades for the USA. [6]

Quality adjustments are also important for understanding national accounts deflators (see GDP deflator). In the USA, for example, the growth acceleration after 1995 was driven by increased investment in ICT products, which led both to an increase in capital stock and to labor productivity growth. [2] This increases the complexity of international comparisons of deflators. Wyckoff [7] and Eurostat [8] show that there is a huge dispersion in ICT deflators across Organisation for Economic Co-operation and Development (OECD) and European countries, respectively.

These differences are so large that they cannot be explained by market conditions, regulation, and the like. As both studies suggest, most of the discrepancy comes from differences in quality adjustment procedures across countries, which in turn makes international comparison of investment in ICT impossible (as it is calculated through deflation). This also makes it difficult to compare the impact of ICT on economies (countries, regions, etc.) that use different methods to compute their GDP figures.

Hedonic regression

For example, for a linear econometric model, assume that at each period $t$ we have $N^{t}$ goods, each of which can be described by a vector of $k$ characteristics $z^{t}_{n} = (z^{t}_{n1}, \dots, z^{t}_{nk})$. The hedonic (cross-sectional) regression is then

$$p^{t}_{n} = \beta^{t}_{0} + \sum_{i=1}^{k} \beta^{t}_{i}\, z^{t}_{ni} + \varepsilon^{t}_{n},$$

where $\beta^{t}_{0}, \dots, \beta^{t}_{k}$ is a set of coefficients and the errors $\varepsilon^{t}_{n}$ are independent and identically distributed, having a normal distribution $N(0, \sigma^{2})$.
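As a minimal sketch of how such a cross-sectional regression can be estimated by ordinary least squares, consider the following Python example; the goods, characteristics, prices, and the helper fitted_price are all hypothetical illustrations, not data from the literature:

```python
import numpy as np

# Hypothetical cross-section for one period t: each row is a good, each
# column a characteristic (e.g. CPU speed in GHz, RAM in GB, disk in GB).
Z = np.array([[2.0,  4.0, 256.0],
              [2.5,  8.0, 512.0],
              [3.0,  8.0, 256.0],
              [3.0, 16.0, 512.0],
              [3.5, 16.0, 512.0]])
p = np.array([500.0, 750.0, 700.0, 1000.0, 1100.0])  # observed prices

# Design matrix with an intercept column; OLS yields beta_0 and the
# implicit characteristic prices beta_1 ... beta_k.
X = np.column_stack([np.ones(len(p)), Z])
beta, *_ = np.linalg.lstsq(X, p, rcond=None)

def fitted_price(z):
    """Hedonic (fitted) price of a good with characteristic vector z."""
    return beta[0] + z @ beta[1:]

print(beta)                          # estimated implicit prices
print(fitted_price(Z.mean(axis=0)))  # fitted price of the "mean" good
```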

Hedonic price index

There are several ways hedonic price indexes can be constructed. Following Triplett, [9] two methods can be distinguished: direct and indirect. The direct method uses only information obtained from the hedonic regression, while the indirect method combines information derived from the hedonic regression with matched models (traditional price indexes). In the indirect method, the data used for estimating the hedonic regression and for calculating the matched-model indexes are different.

The direct method can be divided into the time dummy variable method and the characteristics method.

Time dummy variable method

The time dummy variable method is simpler, because it assumes the implicit prices (the coefficients $\beta^{t}_{i}$ of the hedonic regression) to be constant over adjacent time periods. This assumption generally does not hold, [10] since implicit prices reflect both demand and supply. [11]
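To illustrate the assumption, here is a hedged sketch of a two-period time dummy variant: pool the two cross-sections, regress log price on characteristics plus a period dummy, and read the quality-adjusted price change off the dummy coefficient. All numbers and variable names are made up for the example:

```python
import numpy as np

# Hypothetical pooled data: first three goods observed in period t,
# last three in period t+1.
Z = np.array([[2.0,  4.0], [2.5,  4.0], [3.0,  8.0],
              [2.5,  8.0], [3.0,  8.0], [3.5, 16.0]])
log_p = np.log(np.array([500.0, 540.0, 700.0, 620.0, 660.0, 950.0]))
d = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # time dummy: 1 if period t+1

# Pooled log-linear regression: the characteristic coefficients are
# constrained to be equal in both periods -- the method's key assumption.
X = np.column_stack([np.ones(len(d)), Z, d])
coef, *_ = np.linalg.lstsq(X, log_p, rcond=None)

delta = coef[-1]        # coefficient on the time dummy
print(np.exp(delta))    # quality-adjusted price index from t to t+1
```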

Characteristic method

The characteristics method relaxes this assumption; it is based on the use of fitted prices from the hedonic regression. This method should generally lead to more stable estimates, because ordinary least squares (OLS) estimation guarantees that the regression always passes through its mean.

The corresponding characteristic chain hedonic price index for the period from 0 to $T$ is

$$I^{0,T}_{\text{chain}} = \prod_{t=0}^{T-1} \frac{\hat p^{\,t+1}(\bar z)}{\hat p^{\,t}(\bar z)},$$

where $\hat p^{\,t+1}(\bar z^{t})$ is an estimate of the price obtained from the hedonic regression at period $t+1$, evaluated at the mean characteristics $\bar z^{t}$ of period $t$.

The corresponding characteristic base hedonic price index for the period from 0 to $T$ is

$$I^{0,T}_{\text{base}} = \frac{\hat p^{\,T}(\bar z)}{\hat p^{\,0}(\bar z)}.$$

The specification of $\bar z$, the mean characteristics for a certain period, determines the type of index. For example, if we set $\bar z$ equal to the mean characteristics of the previous period, $\bar z^{t}$, we get a Laspeyres-type index. Setting $\bar z$ equal to $\bar z^{t+1}$ gives a Paasche-type index, and so on. The Fisher-type index is defined as the square root of the product of the Laspeyres- and Paasche-type indexes. The Edgeworth-Marshall index uses the arithmetic mean of the mean characteristics of the two periods $t$ and $t+1$. A Walsh-type index uses their geometric mean. Finally, the base quality index does not update the characteristics (quality) and uses the fixed base-period characteristics $\bar z^{0}$.
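The following is a hedged sketch of these definitions for a single step from $t$ to $t+1$, reusing the linear hedonic model above; the data and the helper fit_hedonic are hypothetical:

```python
import numpy as np

def fit_hedonic(Z, p):
    """OLS fit of the linear hedonic model; returns the fitted price function."""
    X = np.column_stack([np.ones(len(p)), Z])
    beta, *_ = np.linalg.lstsq(X, p, rcond=None)
    return lambda z: beta[0] + z @ beta[1:]

# Hypothetical cross-sections for periods t and t+1.
Z0 = np.array([[2.0, 4.0], [2.5, 4.0], [3.0, 8.0], [3.5, 8.0]])
p0 = np.array([500.0, 550.0, 700.0, 780.0])
Z1 = np.array([[2.5, 8.0], [3.0, 8.0], [3.5, 16.0], [4.0, 16.0]])
p1 = np.array([600.0, 650.0, 950.0, 1020.0])

h0, h1 = fit_hedonic(Z0, p0), fit_hedonic(Z1, p1)
z0_bar, z1_bar = Z0.mean(axis=0), Z1.mean(axis=0)

laspeyres = h1(z0_bar) / h0(z0_bar)  # bundle: mean characteristics of t
paasche   = h1(z1_bar) / h0(z1_bar)  # bundle: mean characteristics of t+1
fisher    = np.sqrt(laspeyres * paasche)
print(laspeyres, paasche, fisher)
```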

Hedonic quality indexes

A hedonic quality index is similar to a quantity index in traditional index theory: it measures how the price of obtaining a given set of characteristics has changed over time. For example, to estimate the effect that growth (or decline) in characteristics has had on the price of a computer over one period, from $t$ to $t+1$, the hedonic quality index takes the form

$$Q^{t,t+1} = \frac{\hat p^{\,\tau}(\bar z^{t+1})}{\hat p^{\,\tau}(\bar z^{t})},$$

where the reference period $\tau$, as in the case of the price indexes, determines the type of the index. The chain quality index for the period from 0 to $T$ is then

$$Q^{0,T}_{\text{chain}} = \prod_{t=0}^{T-1} \frac{\hat p^{\,\tau}(\bar z^{t+1})}{\hat p^{\,\tau}(\bar z^{t})},$$

and the base index:

$$Q^{0,T}_{\text{base}} = \frac{\hat p^{\,\tau}(\bar z^{T})}{\hat p^{\,\tau}(\bar z^{0})}.$$
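Continuing the sketch from the characteristics-method example above (reusing the hypothetical h0, h1, z0_bar, and z1_bar), a quality index holds the hedonic price function of one period fixed and varies only the characteristics bundle:

```python
# Quality indexes: fix the period whose hedonic function values the
# characteristics, then compare the two periods' mean bundles.
quality_at_t  = h0(z1_bar) / h0(z0_bar)  # valued at period-t implicit prices
quality_at_t1 = h1(z1_bar) / h1(z0_bar)  # valued at period-(t+1) implicit prices
print(quality_at_t, quality_at_t1)
```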

Notes

  1. Hill, R. (2011), "Hedonic Price Indexes for Housing", OECD Statistics Working Papers, 2011/01, OECD Publishing.
  2. Bosworth, Barry and Jack Triplett (2001), "What's New About the New Economy? IT, Economic Growth and Productivity".
  3. Crawford, Allan, "Measurement Biases in the Canadian CPI: An Update", Bank of Canada Review, Spring 1998, pp. 38-56 (English and French).
  4. Shiratsuka, Shigenori, "Measurement Errors in Japanese Consumer Price Index", Federal Reserve Bank of Chicago, February 1, 1999.
  5. Cunningham, Alastair, "Measurement Bias in Price Indexes: An Application to the UK's RPI", Bank of England Working Papers, 1996, No. 47.
  6. Farrell, C.J., "Commercial Knowledge On Innovation Economics", A Report, p. 8.
  7. Wyckoff, Andrew W., "The Impact of Computer Prices on International Comparisons of Labour Productivity", Economics of Innovation and New Technology, 1995, Vol. 3, Issue 3-4, pp. 277-93.
  8. Eurostat Task Force, "Volume Measures for Computers and Software", June 1999.
  9. Triplett, Jack (2006), Handbook on Hedonic Indexes and Quality Adjustments in Price Indexes: Special Application to Information Technology Products, OECD.
  10. Berndt, Ernst R. and Neal J. Rappaport (2001), "Price and Quality of Desktop and Mobile Personal Computers: A Quarter-Century Historical Overview", American Economic Review, 91(2) (May), pp. 268-273.
  11. Pakes, A. (2002), "A Reconsideration of Hedonic Price Indexes with an Application to PC's", NBER Working Paper No. 8715, January 2002.

Related Research Articles

In physics, a Langevin equation is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.

<span class="mw-page-title-main">Four-momentum</span> 4D relativistic energy and momentum

In special relativity, four-momentum (also called momentum–energy or momenergy ) is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly four-momentum is a four-vector in spacetime. The contravariant four-momentum of a particle with relativistic energy E and three-momentum p = (px, py, pz) = γmv, where v is the particle's three-velocity and γ the Lorentz factor, is

<span class="mw-page-title-main">Minkowski space</span> Spacetime used in theory of relativity

In mathematical physics, Minkowski space combines inertial space and time manifolds (x,y) with a non-inertial reference frame of space and time (x',t') into a four-dimensional model relating a position to the field (physics). A four-vector (x,y,z,t) consisting of coordinate axes such as a Euclidean space plus time may be used with the non-inertial frame to illustrate specifics of motion, but should not be confused with the spacetime model generally. The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Although initially developed by mathematician Hermann Minkowski for Maxwell's equations of electromagnetism, the mathematical structure of Minkowski spacetime was shown to be implied by the postulates of special relativity.

In mathematics, the Dedekind eta function, named after Richard Dedekind, is a modular form of weight 1/2 and is a function defined on the upper half-plane of complex numbers, where the imaginary part is positive. It also occurs in bosonic string theory.

<span class="mw-page-title-main">Theta function</span> Special functions of several complex variables

In mathematics, theta functions are special functions of several complex variables. They show up in many topics, including Abelian varieties, moduli spaces, quadratic forms, and solitons. As Grassmann algebras, they appear in quantum field theory.

Bosonic string theory is the original version of string theory, developed in the late 1960s and named after Satyendra Nath Bose. It is so called because it contains only bosons in the spectrum.

In signal processing, time–frequency analysis comprises those techniques that study a signal in both the time and frequency domains simultaneously, using various time–frequency representations. Rather than viewing a 1-dimensional signal and some transform, time–frequency analysis studies a two-dimensional signal – a function whose domain is the two-dimensional real plane, obtained from the signal via a time–frequency transform.

The representation theory of groups is a part of mathematics which examines how groups act on given structures.

<span class="mw-page-title-main">Stochastic gradient descent</span> Optimization algorithm

Stochastic gradient descent is an iterative method for optimizing an objective function with suitable smoothness properties. It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient by an estimate thereof. Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.

In physics and fluid mechanics, a Blasius boundary layer describes the steady two-dimensional laminar boundary layer that forms on a semi-infinite plate which is held parallel to a constant unidirectional flow. Falkner and Skan later generalized Blasius' solution to wedge flow, i.e. flows in which the plate is not parallel to the flow.

<span class="mw-page-title-main">Range (aeronautics)</span> Distance an aircraft can fly between takeoff and landing

The maximal total range is the maximum distance an aircraft can fly between takeoff and landing. Powered aircraft range is limited by the aviation fuel energy storage capacity considering both weight and volume limits. Unpowered aircraft range depends on factors such as cross-country speed and environmental conditions. The range can be seen as the cross-country ground speed multiplied by the maximum time in the air. The fuel time limit for powered aircraft is fixed by the available fuel and rate of consumption.

In mathematics, every analytic function can be used for defining a matrix function that maps square matrices with complex entries to square matrices of the same size.

<span class="mw-page-title-main">Errors-in-variables models</span> Regression models accounting for possible errors in independent variables

In statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.

<span class="mw-page-title-main">Radiation stress</span> Term in physical oceanography

In fluid dynamics, the radiation stress is the depth-integrated – and thereafter phase-averaged – excess momentum flux caused by the presence of the surface gravity waves, which is exerted on the mean flow. The radiation stresses behave as a second-order tensor.

In thermal quantum field theory, the Matsubara frequency summation is the summation over discrete imaginary frequencies. It takes the following form

In mathematics, the Weber modular functions are a family of three functions f, f1, and f2, studied by Heinrich Martin Weber.

In mathematics, a Ramanujan–Sato series generalizes Ramanujan’s pi formulas such as,

<span class="mw-page-title-main">Green's law</span> Equation describing evolution of waves in shallow water

In fluid dynamics, Green's law, named for 19th-century British mathematician George Green, is a conservation law describing the evolution of non-breaking, surface gravity waves propagating in shallow water of gradually varying depth and width. In its simplest form, for wavefronts and depth contours parallel to each other, it states:

In combustion, Frank-Kamenetskii theory explains the thermal explosion of a homogeneous mixture of reactants, kept inside a closed vessel with constant temperature walls. It is named after a Russian scientist David A. Frank-Kamenetskii, who along with Nikolay Semenov developed the theory in the 1930s.

Squeeze flow is a type of flow in which a material is pressed out or deformed between two parallel plates or objects. First explored in 1874 by Josef Stefan, squeeze flow describes the outward movement of a droplet of material, its area of contact with the plate surfaces, and the effects of internal and external factors such as temperature, viscoelasticity, and heterogeneity of the material. Several squeeze flow models exist to describe Newtonian and non-Newtonian fluids undergoing squeeze flow under various geometries and conditions. Numerous applications across scientific and engineering disciplines including rheometry, welding engineering, and materials science provide examples of squeeze flow in practical use.

References