Original author(s) | William H. Greene |
---|---|
Developer(s) | Econometric Software, Inc. |
Stable release | 11 / September 7, 2016 |
Operating system | Windows |
Type | statistical analysis, econometric analysis |
License | proprietary software |
Website | www |
LIMDEP is an econometric and statistical software package with a variety of estimation tools. In addition to the core econometric tools for analysis of cross sections and time series, LIMDEP supports methods for panel data analysis, frontier and efficiency estimation, and discrete choice modeling. The package also provides a programming language that allows the user to specify, estimate, and analyze models that are not contained in the built-in menus of model forms.
LIMDEP was first developed in the early 1980s. Econometric Software, Inc. was founded in 1985 by William H. Greene. The program was initially developed as an easy-to-use tobit estimator; hence the name, LIMited DEPendent variable models. [1] Econometric Software has expanded continually since the early 1980s and currently has locations in the United States and Australia.
The ongoing development of LIMDEP has been based partly on interaction and feedback from users and from the collaboration of many researchers. LIMDEP is used by researchers in universities, government institutions, and businesses.
LIMDEP has spun off a major suite of programs for the estimation of discrete choice models, NLOGIT, which is now a self-standing superset of LIMDEP.
As of November 2024, the LIMDEP website states that "After 35 years of developing and providing pioneering tools for microeconometric analysis, Econometric Software, Inc. is closing its operations."
The main functionality of the program is accessed through a command line interface. Command streams are provided to the program as scripts or entered as text in an editing window. The program also includes a graphical user interface within which all program features can be accessed via menus or command-generating dialog boxes. All GUI command generators produce transportable scripts that can be reused and modified in the command editor. [2]
Any number of data sets may be analyzed simultaneously. Data are input via standard ASCII formats such as CSV, DIF and rectangular ASCII, as well as XLS, Stata DTA (some versions) and binary. Data may be exported in CSV, rectangular ASCII and binary formats. The native save format (LPJ) has not changed since the release of the Windows version in 1997, so all versions may exchange data sets. Data storage and all computations are always in double precision. The size of the active data set is limited only by the available memory. [2]
LIMDEP supports a list-server-based discussion group. [3] Anyone, whether a user or an interested nonuser, may subscribe to the list server, which is maintained at the University of Sydney.
There are model formulations for linear and nonlinear regression, robust estimation, discrete choice (including binary choice, ordered choice and unordered multinomial choice), censoring and truncation, sample selection, loglinear models, survival analysis, quantile regression (linear and count), panel data, stochastic frontier and data envelopment analysis, count data, and time series. [1] [2]
Analysis of a data set is done interactively in a set of windows. Program control may be from a pull-down menu or in an unstructured session of instructions and manipulations.
The PDF documentation set includes reference guides for the operation, background econometrics, and sample applications. [2]
Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference." An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships." Jan Tinbergen is one of the two founding fathers of econometrics. The other, Ragnar Frisch, also coined the term in the sense in which it is used today.
SPSS Statistics is a statistical software suite developed by IBM for data management, advanced analytics, multivariate analysis, business intelligence, and criminal investigation. Long produced by SPSS Inc., it was acquired by IBM in 2009. Versions of the software released since 2015 have the brand name IBM SPSS Statistics.
Conjoint analysis is a survey-based statistical technique used in market research that helps determine how people value different attributes that make up an individual product or service.
Comma-separated values (CSV) is a text file format that uses commas to separate values, and newlines to separate records. A CSV file stores tabular data in plain text, where each line of the file typically represents one data record. Each record consists of the same number of fields, and these are separated by commas in the CSV file. If the field delimiter itself may appear within a field, fields can be surrounded with quotation marks.
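As a brief illustration of the quoting rule described above, the following Python sketch uses only the standard-library csv module; the file name data.csv and the sample records are invented for the example. It writes a small file in which one field contains a comma and then reads it back as intact records.

```python
import csv

# Write a header row and one record whose second field contains a comma.
rows = [
    ["id", "name", "score"],
    ["1", "Greene, William", "42"],
]
with open("data.csv", "w", newline="") as f:
    # QUOTE_MINIMAL surrounds only those fields that contain the delimiter.
    csv.writer(f, quoting=csv.QUOTE_MINIMAL).writerows(rows)

# Read the file back; the quoted comma stays inside a single field.
with open("data.csv", newline="") as f:
    for record in csv.reader(f):
        print(record)   # second row prints as ['1', 'Greene, William', '42']
```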
gretl is an open-source statistical package, mainly for econometrics. The name is an acronym for Gnu Regression, Econometrics and Time-series Library.
Stata is a general-purpose statistical software package developed by StataCorp for data manipulation, visualization, statistics, and automated reporting. It is used by researchers in many fields, including biomedicine, economics, epidemiology, and sociology.
In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy.
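As an illustrative sketch of the method, the following Python fragment applies a Gaussian kernel density estimate to a synthetic bimodal sample via scipy.stats.gaussian_kde; the sample, the mixture parameters, and the evaluation grid are invented for the example, and the bandwidth is left at the library default (Scott's rule).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic sample: a mixture of two normals (illustrative data only).
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])

# Gaussian kernel density estimate with the default (Scott's rule) bandwidth.
kde = gaussian_kde(x)

grid = np.linspace(-6, 6, 7)
print(np.round(kde(grid), 4))   # estimated density at a few grid points
```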
RATS, an abbreviation of Regression Analysis of Time Series, is a statistical package for time series analysis and econometrics. RATS is developed and sold by Estima, Inc., located in Evanston, IL.
In statistics, shrinkage is the reduction in the effects of sampling variation. In regression analysis, a fitted relationship appears to perform less well on a new data set than on the data set used for fitting. In particular the value of the coefficient of determination 'shrinks'. This idea is complementary to overfitting and, separately, to the standard adjustment made in the coefficient of determination to compensate for the subjunctive effects of further sampling, like controlling for the potential of new explanatory terms improving the model by chance: that is, the adjustment formula itself provides "shrinkage." But the adjustment formula yields an artificial shrinkage.
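A minimal Python sketch of the adjustment mentioned above, using the standard formula adjusted R² = 1 − (1 − R²)(n − 1)/(n − p − 1); the data are synthetic, with regressors that are unrelated to the response by construction, so the adjustment pulls the raw R² down.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 5
X = rng.normal(size=(n, p))                # p irrelevant regressors
y = rng.normal(size=n)                     # y unrelated to X by construction

# OLS fit with an intercept.
Xc = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
resid = y - Xc @ beta

r2 = 1 - resid.var() / y.var()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(r2, adj_r2)   # the adjusted R^2 is pulled below the raw R^2
```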
FlexPro is a proprietary software package for analysis and presentation of scientific and technical data, produced by Weisang GmbH. It runs on Microsoft Windows and is available in English, German, Japanese, Chinese and French. FlexPro has its roots in the test and measurement domain and supports different binary file formats of data acquisition instruments and software. In particular, FlexPro can analyze large amounts of data with high sampling rates.
The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors or Eicker–Huber–White standard errors, names that recognize the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White.
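A small sketch of the idea in Python using statsmodels; the data-generating process is synthetic and purely illustrative. The same OLS fit is reported once with classical standard errors and once with the HC1 variant of the Eicker–Huber–White estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0, 10, n)
# Error variance grows with x, so the data are heteroskedastic by construction.
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 * x, size=n)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                    # conventional standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")   # Eicker-Huber-White standard errors

print(ols.bse)     # classical SEs (unreliable under heteroskedasticity)
print(robust.bse)  # heteroskedasticity-consistent SEs
```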
The Heckman correction is a statistical technique to correct bias from non-randomly selected samples or otherwise incidentally truncated dependent variables, a pervasive issue in quantitative social sciences when using observational data. Conceptually, this is achieved by explicitly modelling the individual sampling probability of each observation together with the conditional expectation of the dependent variable. The resulting likelihood function is mathematically similar to the tobit model for censored dependent variables, a connection first drawn by James Heckman in 1974. Heckman also developed a two-step control function approach to estimate this model, which avoids the computational burden of having to estimate both equations jointly, albeit at the cost of inefficiency. Heckman received the Nobel Memorial Prize in Economic Sciences in 2000 for his work in this field.
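The two-step procedure can be sketched in Python with statsmodels and scipy as below; the data-generating process, coefficient values, and variable names are all invented for illustration, and the second-step standard errors produced this way are not corrected for the fact that the inverse Mills ratio is itself estimated.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                      # variable excluded from the outcome equation
x = rng.normal(size=n)
# Correlated errors across the selection and outcome equations cause the bias.
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T

select = (0.5 + 1.0 * z + u > 0)            # selection equation
y_full = 1.0 + 2.0 * x + e                  # outcome equation
y = np.where(select, y_full, np.nan)        # outcome observed only if selected

# Step 1: probit for the probability of selection, then the inverse Mills ratio.
W = sm.add_constant(np.column_stack([z, x]))
probit = sm.Probit(select.astype(float), W).fit(disp=0)
idx = W @ probit.params
imr = norm.pdf(idx) / norm.cdf(idx)

# Step 2: OLS on the selected sample with the inverse Mills ratio as a regressor.
sel = select
X2 = sm.add_constant(np.column_stack([x[sel], imr[sel]]))
step2 = sm.OLS(y[sel], X2).fit()
print(step2.params)   # intercept, slope on x, coefficient on the Mills ratio
```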
Psychometric software refers to specialized programs used for the psychometric analysis of data obtained from tests, questionnaires, polls or inventories that measure latent psychoeducational variables. Although some psychometric analyses can be performed using general statistical software such as SPSS, most require specialized tools designed specifically for psychometric purposes.
A Newey–West estimator is used in statistics and econometrics to provide an estimate of the covariance matrix of the parameters of a regression-type model where the standard assumptions of regression analysis do not apply. It was devised by Whitney K. Newey and Kenneth D. West in 1987, although there are a number of later variants. The estimator is used to try to overcome autocorrelation and heteroskedasticity in the error terms of the model, often for regressions applied to time series data. The abbreviation "HAC," sometimes used for the estimator, stands for "heteroskedasticity and autocorrelation consistent." A number of HAC estimators have been described in the literature, and "HAC estimator" does not refer uniquely to Newey–West. One version of the Newey–West estimator requires the user to specify a bandwidth and uses the Bartlett kernel from kernel density estimation.
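A brief Python sketch using statsmodels, with a synthetic AR(1) error process to induce autocorrelation; the HAC covariance option corresponds to the Newey–West estimator, and the maxlags setting plays the role of the user-specified Bartlett bandwidth.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 300
x = rng.normal(size=T)

# AR(1) errors induce serial correlation, which classical OLS SEs ignore.
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + e

X = sm.add_constant(x)
# Newey-West (HAC) covariance with a user-chosen Bartlett bandwidth of 4 lags.
fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(fit.bse)   # autocorrelation- and heteroskedasticity-consistent SEs
```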
SHAZAM is a comprehensive econometrics and statistics package for estimating, testing, simulating and forecasting many types of econometric and statistical models. It was originally created in 1977 by Kenneth White.
NLOGIT is an extension of the econometric and statistical software package LIMDEP. In addition to the estimation tools in LIMDEP, NLOGIT provides programs for estimation, model simulation and analysis of multinomial choice data, such as brand choice, transportation mode and for survey and market data in which consumers choose among a set of competing alternatives.
Control functions are statistical methods to correct for endogeneity problems by modelling the endogeneity in the error term. The approach thereby differs in important ways from other models that try to account for the same econometric problem. Instrumental variables, for example, attempt to model the endogenous variable X as an often invertible model with respect to a relevant and exogenous instrument Z. Panel analysis uses special data properties to difference out unobserved heterogeneity that is assumed to be fixed over time.
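A minimal control-function sketch in Python (synthetic data, statsmodels for OLS, all names and coefficient values invented): the residual from a first-stage regression of the endogenous regressor on the instrument is added to the outcome equation, which absorbs the correlation between the regressor and the error term.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
z = rng.normal(size=n)                         # exogenous instrument
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)               # outcome error correlated with v
x = 1.0 + 1.5 * z + v                          # endogenous regressor
y = 2.0 + 0.5 * x + u

# First stage: recover the part of x not explained by the instrument.
first = sm.OLS(x, sm.add_constant(z)).fit()
v_hat = first.resid

# Control function: adding v_hat to the outcome equation absorbs the endogeneity.
X2 = sm.add_constant(np.column_stack([x, v_hat]))
cf = sm.OLS(y, X2).fit()
print(cf.params)   # the slope on x should be close to the true value 0.5
```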
In statistics, a sequence of random variables is homoscedastic if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance. The spellings homoskedasticity and heteroskedasticity are also frequently used. “Skedasticity” comes from the Ancient Greek word “skedánnymi”, meaning “to scatter”. Assuming a variable is homoscedastic when in reality it is heteroscedastic results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.
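As a small illustration of checking this assumption, the following Python sketch generates data whose error spread grows with the regressor and applies the Breusch–Pagan test from statsmodels; the data-generating process is invented for the example.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(1, 10, n)
# Error spread grows with x, so the homoscedasticity assumption is violated.
y = 2.0 + 0.3 * x + rng.normal(scale=x, size=n)

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid

# Breusch-Pagan test of the null hypothesis of homoscedasticity.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print(lm_pvalue)   # a small p-value indicates heteroscedasticity
```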