Sam Weerahandi


Samaradasa Weerahandi is the first[citation needed] Sri Lankan American statistician to be honored as a Fellow of the American Statistical Association.[1] Also known as Sam Weerahandi, he is a former professor whose last corporate position was Senior Director at Pfizer, Inc., which he held until December 2016.



Weerahandi introduced a number of notions, concepts, and methods for the statistical analysis of small samples based on exact probability statements, which are referred to as exact statistics.[2][3] Commonly known as generalized inference, the new concepts include the generalized p-value, generalized confidence intervals, and generalized point estimation. These methods, which are discussed in the two books he wrote, have been found to produce more accurate inferences than classical methods based on asymptotic theory when the sample size is small or when large samples tend to be noisy.[4] He used statistical techniques based on these notions to bring statistical practice into business management.
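As a rough illustration of how a generalized p-value and a generalized confidence interval can be computed in practice, the following Python sketch uses a Monte Carlo version of one standard generalized pivotal quantity for the Behrens–Fisher setting (difference of two normal means with unequal variances). The summary statistics are hypothetical, and the code is a simplified sketch of the general idea, not an implementation taken from Weerahandi's books.

```python
# Generalized p-value and generalized confidence interval for the
# difference of two normal means with unequal variances (Behrens-Fisher
# setting), sketched via Monte Carlo simulation of a generalized pivot.
# Summary statistics below are hypothetical, not from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Observed summary statistics (hypothetical)
n1, xbar1, s1 = 12, 10.3, 2.1   # group 1: size, mean, std dev
n2, xbar2, s2 = 15,  8.9, 3.4   # group 2: size, mean, std dev

B = 100_000
# One standard generalized pivotal quantity for mu1 - mu2 is
#   R = (xbar1 - xbar2) - (T1 * s1/sqrt(n1) - T2 * s2/sqrt(n2)),
# where T1 ~ t(n1-1) and T2 ~ t(n2-1) are independent.
T1 = stats.t.rvs(df=n1 - 1, size=B, random_state=rng)
T2 = stats.t.rvs(df=n2 - 1, size=B, random_state=rng)
R = (xbar1 - xbar2) - (T1 * s1 / np.sqrt(n1) - T2 * s2 / np.sqrt(n2))

# Generalized p-value for H0: mu1 - mu2 <= 0 vs H1: mu1 - mu2 > 0:
# the probability that the pivot falls at or below zero.
gen_p = np.mean(R <= 0)

# A 95% generalized confidence interval: central quantiles of R.
gci = np.quantile(R, [0.025, 0.975])

print(f"generalized p-value ~ {gen_p:.4f}")
print(f"95% generalized CI for mu1 - mu2 ~ ({gci[0]:.2f}, {gci[1]:.2f})")
```

The probability statement behind the p-value is exact for any sample size; only the Monte Carlo evaluation of it is approximate, and it can be made as precise as desired by increasing the number of simulated draws.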

Leadership Highlights

Bibliography

Exact Statistical Methods for Data Analysis. Springer-Verlag, New York, 1995.

Generalized Inference in Repeated Measures: Exact Methods in MANOVA and Mixed Models. Wiley, Hoboken, New Jersey, 2004.

Related Research Articles

Statistics: Study of the collection, analysis, interpretation, and presentation of data

Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.

Statistical inference: Process of using data analysis to infer properties of an underlying probability distribution

Statistical inference is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.

The following outline is provided as an overview of and topical guide to statistics.

In statistics, point estimation involves the use of sample data to calculate a single value which is to serve as a "best guess" or "best estimate" of an unknown population parameter. More formally, it is the application of a point estimator to the data to obtain a point estimate.

In statistics, interval estimation is the use of sample data to estimate an interval of possible values of a parameter of interest. This is in contrast to point estimation, which gives a single value.
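A minimal illustration of the two ideas described above, using a small hypothetical sample: the sample mean serves as the point estimate, and a t-based confidence interval serves as the interval estimate.

```python
# Point estimate vs. interval estimate for a population mean,
# using a small hypothetical sample.
import numpy as np
from scipy import stats

sample = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4])

# Point estimate: a single "best guess" for the unknown mean.
point_estimate = sample.mean()

# Interval estimate: a 95% t-based confidence interval around it.
n = len(sample)
sem = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=point_estimate, scale=sem)

print(f"point estimate: {point_estimate:.2f}")
print(f"95% interval estimate: ({ci_low:.2f}, {ci_high:.2f})")
```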

Confidence interval: Range to estimate an unknown parameter

Informally, in frequentist statistics, a confidence interval (CI) is an interval which is expected to typically contain the parameter being estimated. More specifically, given a confidence level γ (95% and 99% are typical values), a CI is a random interval which contains the parameter being estimated γ% of the time. The confidence level, degree of confidence or confidence coefficient represents the long-run proportion of CIs that theoretically contain the true value of the parameter; this is tantamount to the nominal coverage probability. For example, out of all intervals computed at the 95% level, 95% of them should contain the parameter's true value.
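This long-run interpretation can be checked by simulation; the sketch below (with an assumed true mean, purely for illustration) repeatedly draws samples and counts how often the 95% t-interval covers that mean.

```python
# Simulate repeated sampling to check that roughly 95% of t-based
# confidence intervals cover the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_mu, sigma, n, reps = 50.0, 8.0, 20, 10_000

covered = 0
for _ in range(reps):
    x = rng.normal(true_mu, sigma, size=n)
    sem = x.std(ddof=1) / np.sqrt(n)
    lo, hi = stats.t.interval(0.95, df=n - 1, loc=x.mean(), scale=sem)
    covered += (lo <= true_mu <= hi)

print(f"empirical coverage: {covered / reps:.3f}")   # expect ~0.95
```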

M. S. Bartlett: English statistician (1910–2002)

Maurice Stevenson Bartlett FRS was an English statistician who made particular contributions to the analysis of data with spatial and temporal patterns. He is also known for his work in the theory of statistical inference and in multivariate analysis.

Mathematical statistics: Branch of statistics

Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.

This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design.

A permutation test is an exact statistical hypothesis test involving two or more samples. The null hypothesis is that all samples come from the same distribution. Under the null hypothesis, the distribution of the test statistic is obtained by calculating all possible values of the test statistic under all possible rearrangements of the observed data. Permutation tests are, therefore, a form of resampling.
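To make the procedure concrete, here is a minimal Monte Carlo sketch of a two-sample permutation test on the difference of means; the data are invented for illustration, and full enumeration of the rearrangements would give the exact p-value.

```python
# Two-sample permutation test: under H0 (both samples from the same
# distribution), group labels are exchangeable, so the null distribution
# of the test statistic is obtained by recomputing it over relabelings.
import numpy as np

rng = np.random.default_rng(2)
a = np.array([12.1, 9.8, 11.4, 10.7, 12.9])      # hypothetical group A
b = np.array([9.2, 8.8, 10.1, 9.5, 8.4, 9.9])    # hypothetical group B

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])

B = 20_000
count = 0
for _ in range(B):
    perm = rng.permutation(pooled)
    stat = perm[:len(a)].mean() - perm[len(a):].mean()
    count += (abs(stat) >= abs(observed))

# Two-sided p-value (Monte Carlo approximation of the exact permutation
# distribution; enumerating all relabelings gives the exact value).
p_value = (count + 1) / (B + 1)
print(f"observed difference: {observed:.2f}, permutation p ~ {p_value:.4f}")
```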

In statistics, the Behrens–Fisher problem, named after Walter-Ulrich Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.
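The most common approximate treatment of this problem is Welch's t-test; a brief sketch on synthetic data follows (the generalized p-value sketch earlier on this page gives an alternative based on exact probability statements).

```python
# Welch's t-test: the common approximate solution to the Behrens-Fisher
# problem (unequal, unknown variances), here on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(10.0, 2.0, size=12)   # population 1: smaller variance
y = rng.normal(9.0, 5.0, size=15)    # population 2: larger variance

t_stat, p_value = stats.ttest_ind(x, y, equal_var=False)  # Welch correction
print(f"Welch t = {t_stat:.3f}, approximate p = {p_value:.4f}")
```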

Fiducial inference is one of a number of different types of statistical inference. These are rules, intended for general application, by which conclusions can be drawn from samples of data. In modern statistical practice, attempts to work with fiducial inference have fallen out of fashion in favour of frequentist inference, Bayesian inference and decision theory. However, fiducial inference is important in the history of statistics since its development led to the parallel development of concepts and tools in theoretical statistics that are widely used. Some current research in statistical methodology is either explicitly linked to fiducial inference or is closely connected to it.

Exact statistics, such as the methods described in the article on exact tests, is a branch of statistics that was developed to provide more accurate results pertaining to statistical testing and interval estimation by eliminating procedures based on asymptotic and approximate statistical methods. The main characteristic of exact methods is that statistical tests and confidence intervals are based on exact probability statements that are valid for any sample size. Exact statistical methods help avoid some of the unreasonable assumptions of traditional statistical methods, such as the assumption of equal variances in classical ANOVA. They also allow exact inference on variance components of mixed models.
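A familiar small-sample illustration of the exact-versus-asymptotic contrast (the 2×2 counts below are hypothetical): Fisher's exact test computes its p-value from an exact probability statement by enumeration, whereas the chi-squared test relies on a large-sample approximation.

```python
# Exact vs. asymptotic inference on a small 2x2 contingency table:
# Fisher's exact test uses exact probability statements, while the
# chi-squared test relies on a large-sample approximation.
from scipy import stats

table = [[3, 9],
         [8, 2]]   # hypothetical small-sample counts

odds_ratio, p_exact = stats.fisher_exact(table)
chi2, p_asymptotic, dof, _ = stats.chi2_contingency(table)

print(f"Fisher exact p       = {p_exact:.4f}")
print(f"chi-squared approx p = {p_asymptotic:.4f}  (df={dof})")
```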

In statistics, a generalized p-value is an extended version of the classical p-value, which, except in a limited number of applications, provides only approximate solutions.
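For reference, a sketch of the usual formalization from the generalized-inference literature (notation here is illustrative): for an observable random vector X with observed value x, parameter of interest θ, and nuisance parameters δ, a generalized test variable T(X; x, θ, δ) is chosen so that its observed value is free of δ, its distribution is free of δ, and it is stochastically monotone in θ. The generalized p-value for testing H₀: θ ≤ θ₀ is then the tail probability

$$ p \;=\; \Pr\bigl( T(X;\, x,\, \theta_0,\, \delta) \,\ge\, t(x;\, x,\, \theta_0,\, \delta) \bigr), $$

computed from the known distribution of T rather than from a large-sample approximation.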

In statistical inference, the concept of a confidence distribution (CD) has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it has typically been constructed by inverting the upper limits of lower-sided confidence intervals of all levels, and it was also commonly associated with a fiducial interpretation, although it is a purely frequentist concept. A confidence distribution is not a probability distribution function of the parameter of interest, but may still be a function useful for making inferences.
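A standard textbook example, included here purely as an illustration: for a normal sample of size n with mean x̄ and standard deviation s, the function

$$ H_n(\mu) \;=\; F_{t_{n-1}}\!\left( \frac{\sqrt{n}\,(\mu - \bar{x})}{s} \right), $$

where F_{t_{n-1}} is the CDF of the t distribution with n − 1 degrees of freedom, is a confidence distribution for the mean μ: its quantiles at levels α/2 and 1 − α/2 reproduce the usual two-sided t-intervals at every confidence level.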

In statistics, the multivariate Behrens–Fisher problem is the problem of testing for the equality of means from two multivariate normal distributions when the covariance matrices are unknown and possibly not equal. Since this is a generalization of the univariate Behrens–Fisher problem, it inherits all of the difficulties that arise in the univariate problem.

Let $X_1, X_2, \ldots, X_n$ be independent, identically distributed real-valued random variables with common characteristic function $\varphi(t)$. The empirical characteristic function (ECF) is defined as $\hat{\varphi}_n(t) = \frac{1}{n}\sum_{j=1}^{n} e^{itX_j}$.
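As a quick illustration (synthetic data, illustrative code), the ECF of a normal sample can be compared with the theoretical characteristic function exp(iμt − σ²t²/2) of the generating distribution.

```python
# Empirical characteristic function of a sample, compared with the
# theoretical characteristic function of the normal distribution that
# generated it (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 1.0, 2.0
x = rng.normal(loc=mu, scale=sigma, size=500)

def ecf(t, sample):
    """phi_hat_n(t) = (1/n) * sum_j exp(i * t * X_j)."""
    return np.mean(np.exp(1j * t * sample))

for t in (0.1, 0.5, 1.0):
    theoretical = np.exp(1j * mu * t - 0.5 * sigma ** 2 * t ** 2)
    print(f"t={t}: ECF ~ {np.round(ecf(t, x), 3)}, "
          f"theoretical ~ {np.round(theoretical, 3)}")
```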

In statistics, when selecting a statistical model for given data, the relative likelihood compares the relative plausibilities of different candidate models or of different values of a parameter of a single model.
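In symbols (a standard formulation, added here for illustration), the relative likelihood of a parameter value θ is its likelihood scaled by the maximum:

$$ \mathcal{R}(\theta) \;=\; \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat{\theta} \mid x)}, \qquad 0 \le \mathcal{R}(\theta) \le 1, $$

where θ̂ is the maximum likelihood estimate; values of θ with relative likelihood above a chosen cutoff (for example 1/8) form a likelihood interval.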

References

  1. American Statistical Association, list of ASA Fellows (1996). http://www.amstat.org/awards/fellowslist.cfm (choose the initial "W" and then click Submit).
  2. Liao, Chen-Tuo; Li, Chi-Rong (2010). "Generalized Inference". Encyclopedia of Biopharmaceutical Statistics. pp. 547–549. doi:10.3109/9781439822463.088. ISBN 978-1-4398-2246-3.
  3. Krishnamoorthy, K.; Lu, Yong (2003). "Inferences on the Common Mean of Several Normal Populations Based on the Generalized Variable Method". Biometrics. 59 (2): 237–247. doi:10.1111/1541-0420.00030. PMID 12926708. S2CID 13339220.
  4. "Home". weerahandi.org.
