Murray Aitkin

Murray Aitkin
Alma mater: Sydney University
Known for: Generalised linear models
Scientific career
Fields: Statistics
Institutions: University of Melbourne

Murray Aitkin is an Australian statistician who specialises in statistical models. He received his BSc, PhD, and DSc in mathematical statistics from Sydney University in 1961, 1966, and 1997, respectively. [1]

Academic career

From 1961 to 1964, he was a teaching fellow at Sydney University. From 1996 to 2004, he was a professor in the Department of Statistics at Newcastle University, where he also directed the Statistical Consultancy Service from 1996 to 2000. [1]

Between 2000 and 2002, he took leave from Newcastle to serve as chief statistician at the Education Statistics Services Institute in Washington, DC. [1]

Recognition

From 1971 to 1972, he was a Fulbright Senior Fellow under the American exchange scholarship program. From 1976 to 1979, he was a Social Science Research Council professorial fellow at Lancaster University. He was elected a member of the International Statistical Institute in 1982 and a Fellow of the American Statistical Association in 1984. [1]

Generalised linear mixed models

Aitkin's research has been influential across several types of mixture models, including generalised linear mixed models (GLMMs), latent class models, and other finite mixture models. The random effects in a GLMM are usually assumed to follow a normal distribution N(0, σ²). Aitkin instead estimates the random-effect distribution nonparametrically, as a discrete distribution whose support points and probabilities are fitted to the data rather than fixed in advance. [2]
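
A minimal sketch of the idea, not Aitkin's implementation: with one observation per unit, a Poisson GLMM with a nonparametrically estimated random intercept reduces to a finite Poisson mixture whose mass-point locations and probabilities are both estimated by EM. All names below are illustrative.

```python
import numpy as np

def npml_poisson(y, K=2, iters=200):
    """EM for a K-mass-point Poisson mixture.

    Each mass point k has a rate lam[k] and a probability pi[k]; this mimics
    replacing a normal random-effect distribution with a discrete one whose
    support points are estimated from the data."""
    rng = np.random.default_rng(0)
    lam = np.quantile(y, np.linspace(0.2, 0.8, K)) + rng.uniform(0, 0.1, K)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E step: posterior probability that observation i came from mass point k
        log_w = np.log(pi) + y[:, None] * np.log(lam) - lam  # up to a constant in y
        w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M step: update mass-point probabilities and locations
        pi = w.mean(axis=0)
        lam = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)
    return pi, lam

y = np.concatenate([np.random.default_rng(1).poisson(2, 300),
                    np.random.default_rng(2).poisson(9, 200)])
print(npml_poisson(y))  # recovers mass points near rates 2 and 9
```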

In 1981, he and R. Darrell Bock published "Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm" in Psychometrika, which applied the EM algorithm to marginal maximum likelihood estimation of item parameters. [3] It was one of the first papers on the topic and has received almost 3,000 citations. [2] [4]
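
In caricature, the marginal maximum likelihood approach treats the latent ability as missing data: the E step computes each person's posterior weights over a quadrature grid for the ability distribution, and the M step updates the item parameters against those weights. The sketch below does this for a Rasch model with a Newton step per item; it is a didactic simplification with invented names, not the published algorithm.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Gauss-Hermite

def mml_rasch(X, n_quad=21, iters=50):
    """Marginal maximum likelihood for Rasch difficulties b via an EM loop.

    X: 0/1 response matrix (persons x items); ability ~ N(0, 1) is integrated
    out on a fixed quadrature grid."""
    nodes, weights = hermegauss(n_quad)
    weights = weights / weights.sum()  # discrete N(0, 1) prior over the nodes
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))  # P(correct | node, item)
        # E step: posterior weight of each quadrature node for each person
        loglik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T      # persons x nodes
        post = np.exp(loglik - loglik.max(axis=1, keepdims=True)) * weights
        post /= post.sum(axis=1, keepdims=True)
        # M step: one Newton step per item on the expected complete-data likelihood
        r = post.T @ X        # expected number correct at each node (nodes x items)
        n = post.sum(axis=0)  # expected number of persons at each node
        grad = -(r - n[:, None] * p).sum(axis=0)
        hess = -(n[:, None] * p * (1 - p)).sum(axis=0)
        b -= grad / hess
    return b

rng = np.random.default_rng(0)
theta = rng.normal(size=500)
b_true = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
X = (rng.uniform(size=(500, 5)) < 1 / (1 + np.exp(-(theta[:, None] - b_true)))).astype(float)
print(mml_rasch(X).round(2))  # difficulties close to b_true
```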

Related Research Articles

Statistics is a field of inquiry that studies the collection, analysis, interpretation, and presentation of data. It is applicable to a wide variety of academic disciplines, from the physical and social sciences to the humanities; it is also used, and sometimes misused, in decision-making in all areas of business and government.

The likelihood function represents the probability of random variable realizations conditional on particular values of the statistical parameters. Thus, when evaluated on a given sample, the likelihood function indicates which parameter values are more likely than others, in the sense that they would have made the observed data more probable. Consequently, the likelihood is often written as L(θ | x) instead of f(x | θ), to emphasize that it is to be understood as a function of the parameters θ instead of the random variable x.
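
For example, after observing 7 heads in 10 independent coin flips, L(θ | data) = θ⁷(1 − θ)³ is read as a function of the bias θ. A small illustration (the code is ours, purely didactic):

```python
import numpy as np

# Likelihood of the bias theta after observing 7 heads in 10 flips:
# L(theta | data) = theta^7 * (1 - theta)^3, a function of theta, not the data.
theta = np.linspace(0.01, 0.99, 99)
likelihood = theta**7 * (1 - theta)**3
print(theta[np.argmax(likelihood)])  # maximum likelihood estimate, approx. 0.7
```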

In statistics, point estimation involves the use of sample data to calculate a single value which is to serve as a "best guess" or "best estimate" of an unknown population parameter. More formally, it is the application of a point estimator to the data to obtain a point estimate.

In psychometrics, item response theory (IRT) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability that item was designed to measure. Several different statistical models are used to represent both item and test taker characteristics. Unlike simpler alternatives for creating scales and evaluating questionnaire responses, it does not assume that each item is equally difficult. This distinguishes IRT from, for instance, Likert scaling, in which "All items are assumed to be replications of each other or in other words items are considered to be parallel instruments". By contrast, item response theory treats the difficulty of each item as information to be incorporated in scaling items.
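
For instance, in the two-parameter logistic (2PL) IRT model the probability of a correct response depends on the person's ability θ together with an item-specific difficulty b and discrimination a, so items are explicitly allowed to differ in difficulty. A small illustration (names are ours):

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Item response function of the two-parameter logistic model:
    P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# An easy item (b = -1) versus a hard item (b = +1) for an average person:
print(irf_2pl(0.0, a=1.5, b=-1.0), irf_2pl(0.0, a=1.5, b=1.0))
```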

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.
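
A compact textbook-style illustration of the E/M alternation, here for a two-component Gaussian mixture in which the latent variable is each observation's component label (not tied to any cited source):

```python
import numpy as np

def em_gmm2(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initial means
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E step: responsibilities = posterior P(component k | x_i)
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: maximize the expected complete-data log-likelihood
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1, 600)])
print(em_gmm2(x))  # weights near (0.4, 0.6), means near (-2, 3)
```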

In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information.

Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered: the probabilistic approach, which assumes the measured data are random with a probability distribution that depends on the parameters of interest, and the set-membership approach, which assumes the measured data vector belongs to a set that depends on the parameter vector.

Structural equation modeling (SEM) is a label for a diverse set of methods used by scientists in both experimental and observational research across the sciences, business, and other fields. It is used most in the social and behavioral sciences. A definition of SEM is difficult without reference to highly technical language, but a good starting place is the name itself.

In robust statistics, robust regression seeks to overcome some limitations of traditional regression analysis. A regression analysis models the relationship between one or more independent variables and a dependent variable. Standard types of regression, such as ordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results otherwise. Robust regression methods are designed to limit the effect that violations of assumptions by the underlying data-generating process have on regression estimates.
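
As an illustration, scikit-learn's HuberRegressor bounds the influence of large residuals, so a few gross outliers move the fitted slope far less than under ordinary least squares (a sketch with arbitrary simulated data):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.0 * X.ravel() + rng.normal(0, 0.5, 100)
y[:5] += 50  # a few gross outliers

ols = LinearRegression().fit(X, y)
huber = HuberRegressor().fit(X, y)
print(ols.coef_, huber.coef_)  # the Huber slope stays near the true value 2
```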

A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units, or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures analysis of variance.
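
For example, statsmodels can combine a fixed effect with a random intercept for each cluster via MixedLM (a minimal sketch; the data and column names are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(20), 10)             # 20 clusters, 10 obs each
u = rng.normal(0, 1, 20)[groups]                  # random intercept per cluster
x = rng.normal(size=200)
df = pd.DataFrame({"y": 1.0 + 0.5 * x + u + rng.normal(0, 0.3, 200),
                   "x": x, "cluster": groups})

# Fixed effect for x, random intercept for each cluster:
model = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
print(model.summary())
```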

Karl Gustav Jöreskog is a Swedish statistician. Jöreskog is a professor emeritus at Uppsala University and a co-author of the LISREL statistical program. He is a member of the Royal Swedish Academy of Sciences. Jöreskog received his bachelor's, master's, and doctoral degrees from Uppsala University and is a former student of Herman Wold. He was a statistician at Educational Testing Service (ETS) and a visiting professor at Princeton University.

Psychometric software is software that is used for psychometric analysis of data from tests, questionnaires, or inventories reflecting latent psychoeducational variables. While some psychometric analyses can be performed with standard statistical software like SPSS, most analyses require specialized tools.

Anton K. Formann was an Austrian research psychologist, statistician, and psychometrician. He was renowned for his contributions to item response theory, latent class analysis, the measurement of change, mixture models, categorical data analysis, and quantitative methods for research synthesis (meta-analysis).

Adrian John Baddeley is a statistical scientist working in the fields of spatial statistics, statistical computing, stereology and stochastic geometry.

Constance van Eeden was a Dutch mathematical statistician who made "exceptional contributions to the development of statistical sciences in Canada". She was interested in nonparametric statistics including maximum likelihood estimation and robust statistics, and did foundational work on parameter spaces.

Friedrich Maria Urban was a psychologist from the Austro-Hungarian Empire, known for the introduction of probability weightings used in experimental psychology.

Xiaohong Chen is a Chinese economist who currently serves as the Malcolm K. Brachman Professor of Economics at Yale University. She is a fellow of the Econometric Society and a laureate of the China Economics Prize. As one of the leading experts in econometrics, her research focuses on econometric theory, semi/nonparametric estimation and inference methods, sieve methods, nonlinear time series, and semi/nonparametric models. She was elected to the American Academy of Arts and Sciences in 2019.

Siddhartha Chib is an econometrician and statistician, the Harry C. Hartkopf Professor of Econometrics and Statistics at Washington University in St. Louis. His work is primarily in Bayesian statistics, econometrics, and Markov chain Monte Carlo methods.

References

  1. Aitkin, Murray (23 May 2015). "Curriculum Vitae" (PDF). Retrieved 19 July 2021.
  2. Agresti, Alan; Bartolucci, Francesco; Mira, Antonietta (8 February 2021). "Reflections on Murray Aitkin's contributions to nonparametric mixture models and Bayes factors". Statistical Modelling. doi:10.1177/1471082X20981312. ISSN 1471-082X.
  3. Bock, R. Darrell; Aitkin, Murray (1 December 1981). "Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm". Psychometrika. 46 (4): 443–459. doi:10.1007/BF02293801. ISSN 1860-0980.
  4. "Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm". scholar.google.co.uk. Retrieved 19 July 2021.