LIMDEP

Original author(s): William H. Greene
Developer(s): Econometric Software, Inc.
Stable release: 11 / September 7, 2016
Operating system: Microsoft Windows
Type: statistical analysis, econometric analysis
License: proprietary software
Website: www.limdep.com

LIMDEP is an econometric and statistical software package with a variety of estimation tools. In addition to the core econometric tools for analysis of cross sections and time series, LIMDEP supports methods for panel data analysis, frontier and efficiency estimation, and discrete choice modeling. The package also provides a programming language that allows the user to specify, estimate and analyze models not contained in the built-in menus of model forms.

History

LIMDEP was first developed in the early 1980s as an easy-to-use tobit estimator, hence the name: LIMited DEPendent variable models. [1] Econometric Software, Inc. was founded in 1985 by William H. Greene. The company has expanded continually since the early 1980s and currently has locations in the United States and Australia.

The ongoing development of LIMDEP has been based partly on interaction and feedback from users and from the collaboration of many researchers. LIMDEP is used by researchers in universities, government institutions, and businesses.

LIMDEP has spun off NLOGIT, a major suite of programs for the estimation of discrete choice models that is now a self-standing superset of LIMDEP.

User interface

The main functionality of the program is accessed through a command line interface. Command streams are supplied to the program via scripts or as text entered in the built-in text editor. The program also includes a graphical user interface in which all program features can be accessed via menus or command-generating dialog boxes. All GUI command generators produce transportable scripts that can be reused and modified in the command editor. [2]
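
For illustration, a short command script might look like the following sketch. Commands consist of parts separated by semicolons and are terminated with a $ character; text following ? is a comment. The variable names here are hypothetical, and exact syntax may vary across versions.

    ? Set the sample to all observations, then fit two models.
    SAMPLE ; All $
    ? Linear regression of y on a constant term (one), x1 and x2.
    REGRESS ; Lhs = y ; Rhs = one, x1, x2 $
    ? Binary probit model for the dummy variable d.
    PROBIT ; Lhs = d ; Rhs = one, x1 $

Scripts of this kind can be saved, edited and rerun, which is what makes the output of the GUI command generators transportable.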

Data input, formats and storage

Any number of data sets may be analyzed simultaneously. Data are input via standard ASCII formats such as CSV, DIF and rectangular ASCII, as well as XLS, Stata DTA (some versions) and binary. Data may be exported in CSV, rectangular ASCII and binary formats. The native save format (LPJ) has not changed since the release of the Windows version in 1997, so all versions may exchange data sets. Data storage and all computations are in double precision. The active data set size is limited only by available memory. [2]
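
As a sketch of the import step, a comma-delimited file with variable names in its first row might be read as follows. The file name is hypothetical, and the option spellings should be checked against the documentation for a given version.

    ? Read a CSV file; Names takes the variable names from the first record.
    READ ; File = "mydata.csv" ; Names $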

List server

LIMDEP supports a list-server-based discussion group. [3] Anyone, users and interested nonusers alike, may subscribe. The list server is maintained at the University of Sydney.

Models

There are model formulations for linear and nonlinear regression, robust estimation, discrete choice (including binary choice, ordered choice and unordered multinomial choice), censoring and truncation, sample selection, loglinear models, survival analysis, quantile regression (linear and count), panel data, stochastic frontier and data envelopment analysis, count data, and time series. [1] [2]
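
For example, the censored regression (tobit) model from which the package took its name might be specified along these lines (variable names hypothetical):

    ? Tobit model for a dependent variable y censored at zero.
    TOBIT ; Lhs = y ; Rhs = one, x1, x2 $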

Data analysis

Analysis of a data set is done interactively in a set of windows. Program control may be exercised from a pull-down menu or in an unstructured session of instructions and manipulations.

Resources

The PDF documentation set includes reference guides covering program operation, the background econometrics, and sample applications. [2]

Notes

  1. Hilbe, Joseph (2006). "A Review of LIMDEP 9.0 and NLOGIT 4.0". The American Statistician. 60 (2): 187–202. doi:10.1198/000313006x110492.
  2. McKenzie, Colin; Takaoka, Sumiko (2003). "2002: A LIMDEP Odyssey". Journal of Applied Econometrics. 18 (2): 241–247. doi:10.1002/jae.705.
  3. List Server
  4. Odeh, Oluwarotimi; Featherstone, Allen; Bergtold, Jason (2010). "Reliability of Statistical Software". American Journal of Agricultural Economics. 92 (5): 1472–1489. doi:10.1093/ajae/aaq068.
  5. McCullough, B.D. (1999). "Econometric software reliability: EViews, LIMDEP, SHAZAM and TSP". Journal of Applied Econometrics. 14 (2): 191–202. doi:10.1002/(SICI)1099-1255(199903/04)14:2<191::AID-JAE524>3.0.CO;2-K.
