LIMDEP

Original author(s): William H. Greene
Developer(s): Econometric Software, Inc.
Stable release: 11 / September 7, 2016
Operating system: Windows
Type: Econometric and statistical analysis
License: Proprietary software
Website: www.limdep.com

LIMDEP is an econometric and statistical software package with a variety of estimation tools. In addition to the core econometric tools for analysis of cross sections and time series, LIMDEP supports methods for panel data analysis, frontier and efficiency estimation, and discrete choice modeling. The package also provides a programming language that allows the user to specify, estimate and analyze models that are not contained in the built-in menu of model forms.

History

LIMDEP was first developed in the early 1980s. Econometric Software, Inc. was founded in 1985 by William H. Greene. The program began as an easy-to-use tobit estimator, hence the name: LIMited DEPendent variable models.[1] Econometric Software has expanded continually since then and currently has locations in the United States and Australia.

The ongoing development of LIMDEP has been shaped partly by interaction with and feedback from users, and partly by collaboration with many researchers. LIMDEP is used by researchers in universities, government institutions, and businesses.

LIMDEP has spun off NLOGIT, a major suite of programs for the estimation of discrete choice models that is now a self-standing superset of LIMDEP.

User interface

The main functionality of the program is accessed through a command line interface. Command streams are supplied to the program as scripts or as text entered in a text editor window. The program also includes a graphical user interface within which all program features can be accessed via menus or command-generating dialog boxes. All GUI command generators produce transportable scripts that can be reused and modified in the command editor.[2]
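
Commands generally consist of a verb followed by semicolon-separated specifications and are terminated by a $. The short stream below is a minimal sketch of that style rather than an excerpt from the documentation; the variable names are hypothetical, and the ? comment marker and option details should be checked against the reference guides.[2]

    SAMPLE  ; 1 - 500 $                                    ? use the first 500 observations
    CREATE  ; logwage = Log(wage) $                        ? construct a transformed variable
    REGRESS ; Lhs = logwage ; Rhs = one, educ, exper $     ? linear regression with a constant term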

Data input, formats and storage

Any number of data sets may be analyzed simultaneously. Data are input via standard ASCII formats such as CSV, DIF and rectangular ASCII, as well as XLS, Stata DTA (some versions) and binary. Data may be exported in CSV, rectangular ASCII and binary formats. The native save format (LPJ) has not changed since the release of the Windows version in 1997, so all versions can exchange data sets. Data storage and all computations are in double precision. The size of the active data set is limited only by the available memory.[2]
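
As an illustrative sketch (the file name and variable names are hypothetical, and the exact READ options for each format are described in the reference guides [2]), a small rectangular ASCII file might be loaded with a command such as:

    READ ; File = "mydata.dat" ; Nobs = 200 ; Nvar = 3 ; Names = y, x1, x2 $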

List server

LIMDEP supports a list-server-based discussion group.[3] Anyone, users and interested nonusers alike, may subscribe. The list server is maintained at the University of Sydney.

Models

There are model formulations for linear and nonlinear regression, robust estimation, discrete choice (including binary choice, ordered choice and unordered multinomial choice), censoring and truncation, sample selection, loglinear models, survival analysis, quantile regression (linear and count), panel data, stochastic frontier and data envelopment analysis, count data, and time series.[1][2]
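
To give a flavor of how such estimators are invoked (a sketch only; the variable names are hypothetical and option details vary by version), limited dependent variable and frontier models use the same Lhs/Rhs specification style as the linear regression command:

    TOBIT    ; Lhs = hours  ; Rhs = one, age, educ $     ? censored (tobit) regression
    PROBIT   ; Lhs = employ ; Rhs = one, age, educ $     ? binary choice
    FRONTIER ; Lhs = logq   ; Rhs = one, logk, logl $    ? stochastic production frontier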

Data analysis

Analysis of a data set is carried out interactively in a set of windows. Program control may be exercised through pull-down menus or in an unstructured session of instructions and manipulations.

Resources

The PDF documentation set includes reference guides covering program operation, the background econometrics, and sample applications.[2]

Notes

  1. Hilbe, Joseph (2006). "A Review of LIMDEP 9.0 and NLOGIT 4.0". The American Statistician. 60 (2): 187–202. doi:10.1198/000313006x110492.
  2. McKenzie, Colin; Takaoka, Sumiko (2003). "2002: A LIMDEP Odyssey". Journal of Applied Econometrics. 18 (2): 241–247. doi:10.1002/jae.705.
  3. List server.
  4. Odeh, Oluwarotimi; Featherstone, Allen; Bergtold, Jason (2010). "Reliability of Statistical Software". American Journal of Agricultural Economics. 92 (5): 1472–1489. doi:10.1093/ajae/aaq068.
  5. McCullough, B. D. (1999). "Econometric software reliability: EViews, LIMDEP, SHAZAM and TSP". Journal of Applied Econometrics. 14 (2): 191–202. doi:10.1002/(SICI)1099-1255(199903/04)14:2<191::AID-JAE524>3.0.CO;2-K.
