| Ajit C. Tamhane | |
| --- | --- |
| Occupation(s) | Professor, author and researcher |
| Awards | Fellow, American Statistical Association; Fellow, Institute of Mathematical Statistics; Fellow, American Association for the Advancement of Science; Elected member, International Statistical Institute; Distinguished Alumnus Award, I.I.T. Bombay |
| Academic background | |
| Education | B.Tech. (First Class Honors), Mechanical Engineering; M.S. and Ph.D., Statistics |
| Alma mater | Indian Institute of Technology Bombay, India; Cornell University, Ithaca, NY |
| Academic work | |
| Institutions | Northwestern University |
Ajit C. Tamhane is a professor in the Department of Industrial Engineering and Management Sciences (IEMS) at Northwestern University and also holds a courtesy appointment in the Department of Statistics. [1]
Tamhane has published over 100 research articles in refereed journals, authored four books, and co-edited two volumes of collected research papers. His research primarily focuses on multiple testing in clinical trials. He has also worked extensively in other areas of statistics, including design of experiments, ranking and selection procedures, chemometrics, clustering methods, and statistical inference. [2]
Tamhane is a fellow of the American Statistical Association, [3] the Institute of Mathematical Statistics, [4] and the American Association for the Advancement of Science, [5] and an elected member of the International Statistical Institute. [6]
Tamhane studied at the Indian Institute of Technology Bombay, and received his B.Tech. in Mechanical Engineering in 1968. He moved to the United States in 1970, earning his Ph.D. in Operations Research and Statistics from Cornell University in 1975 under the supervision of Robert E. Bechhofer. [1]
After completing his doctorate, Tamhane joined the IEMS Department at Northwestern University in 1975 as an assistant professor; he was promoted to associate professor in 1979 and to professor in 1987. During 1982–83 he was on sabbatical leave at Cornell University. He has also been a faculty member in the Department of Statistics since that department was established in 1986. [1]
Tamhane has also held several administrative appointments over his career. He served as Chair of the IEMS Department from 2001 to 2008 and as Senior Associate Dean of the McCormick School of Engineering and Applied Science from 2008 to 2018. [1]
Tamhane's research encompasses multiple testing in clinical trials, ranking and selection procedures, design of experiments, chemometrics, statistical inference, and clustering methods. His work in these areas has been supported by the National Science Foundation, the National Institutes of Health, and the National Security Agency. He is the author of several books, including Statistics and Data Analysis: From Elementary to Intermediate, [7] Statistical Analysis of Designed Experiments, Predictive Analytics: Parametric Models for Regression and Classification Using R, [8] and Multiple Comparison Procedures. [9] He has also edited two volumes of collected papers: Design of Experiments: Ranking and Selection (with Thomas Santner), published by Marcel Dekker (1984), and Multiple Testing Problems in Pharmaceutical Statistics (with Alex Dmitrienko and Frank Bretz), published by Chapman & Hall (2010). [10]
Tamhane developed and studied several test procedures for identifying the minimum effective dose and the maximum safe dose of a drug (MINED and MAXSD). [11] He also studied adaptive extensions of a two-stage group sequential procedure (GSP) for testing primary and secondary endpoints, discussed different ways to modify the boundaries of the original group sequential procedure to control the familywise error rate, and provided power comparisons between competing procedures along with clinical trial examples. [12] In a paper published in 2011, he defined classes of parallel gatekeeping procedures; the results indicated that the power of multistage gatekeeping procedures can be improved by using α-exhaustive tests for the component procedures. [13] Eric Peritz reviewed Tamhane's book Multiple Comparison Procedures, describing it as "a comprehensive monograph" in which "the control of familywise error rates is given the lion's share in the book." [14]
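The parallel gatekeeping idea can be illustrated with a simple Bonferroni-based version: a primary family of hypotheses is tested first, and the fraction of α attached to rejected primary hypotheses is carried forward to the secondary family. This is only a minimal sketch of the general framework, not Tamhane's published procedure (the function name and the proportional carry-forward rule are illustrative simplifications; indeed, Bonferroni components are not α-exhaustive, and the point of the 2011 paper is that α-exhaustive components yield more power).

```python
def bonferroni_parallel_gatekeeper(p_primary, p_secondary, alpha=0.05):
    """Sketch of Bonferroni-based parallel gatekeeping for two ordered
    families of hypotheses. Illustrative only, not Tamhane's exact
    procedure: alpha carried to the secondary family is proportional
    to the number of primary rejections."""
    n1 = len(p_primary)
    reject1 = [p <= alpha / n1 for p in p_primary]
    # fraction of alpha released by rejected primary hypotheses
    alpha2 = alpha * sum(reject1) / n1
    n2 = len(p_secondary)
    reject2 = [alpha2 > 0 and p <= alpha2 / n2 for p in p_secondary]
    return reject1, reject2
```

For example, with primary p-values (0.01, 0.2) at α = 0.05, only the first primary hypothesis is rejected, so the secondary family is tested at the reduced level α/2 = 0.025.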
Tamhane's early work, emanating from his doctoral dissertation, was on two-stage and multi-stage screening-type procedures for selecting the best treatment. He studied the design of such procedures, focusing on sample size requirements. [15] For the problem of comparing multiple treatments with a common control, he generalized the classical balanced incomplete block (BIB) designs to what are called balanced treatment incomplete block (BTIB) designs. [16]
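The defining balance conditions of a BTIB design can be checked mechanically: every treatment must appear in a block with the control the same number of times (λ₀), and every pair of distinct treatments must appear together in the same number of blocks (λ₁). The sketch below follows this published definition; the function name and block encoding are my own for illustration.

```python
from itertools import combinations
from collections import Counter

def is_btib(blocks, control=0):
    """Check BTIB balance: constant control-treatment concurrence
    (lambda_0) and constant treatment-treatment concurrence (lambda_1).
    Sketch based on the published definition, not code of Tamhane's."""
    pair_counts = Counter()
    treatments = set()
    for block in blocks:
        treatments.update(block)
        for pair in combinations(sorted(set(block)), 2):
            pair_counts[pair] += 1
    treatments.discard(control)
    treatments = sorted(treatments)
    # concurrences of each treatment with the control
    lam0 = {pair_counts[tuple(sorted((control, t)))] for t in treatments}
    # concurrences of each pair of distinct treatments
    lam1 = {pair_counts[pair] for pair in combinations(treatments, 2)}
    return len(lam0) == 1 and len(lam1) <= 1

# A tiny BTIB design: control 0 appears once with each of 3 treatments
blocks = [(0, 1), (0, 2), (0, 3)]
```

Here `is_btib(blocks)` holds with λ₀ = 1 and λ₁ = 0; duplicating only the block (0, 1) would break the balance.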
In his work on chemical engineering applications, Tamhane proposed a novel nonparametric regression method for high-dimensional data, nonlinear partial least squares (NLPLS), and implemented it with feedforward neural networks. He further compared the performance of NLPLS, projection pursuit, and neural networks in terms of response-variable prediction and robustness to starting values. [17] He also conducted multiple studies on the detection of gross errors in process data [18] [19] in chemical process networks. [20]
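A standard formulation of the gross error detection problem, in the spirit of this line of work: the material-balance residuals r = Ay of a process network (A the node incidence matrix, y the measured flows) should be near zero, and the quadratic form γ = rᵀ(AΣAᵀ)⁻¹r follows a chi-square distribution with rank(A) degrees of freedom when no gross errors are present. The sketch below illustrates this textbook global test; it is not code from Tamhane's papers, and the example network is invented.

```python
import numpy as np

def global_gross_error_test(A, y, Sigma):
    """Global chi-square test for gross errors in process data.
    r = A @ y are the material-balance residuals; under no gross
    errors, gamma = r' (A Sigma A')^{-1} r is chi-square with
    rank(A) degrees of freedom. Textbook sketch only."""
    r = A @ y
    V = A @ Sigma @ A.T          # covariance of the residuals
    return float(r @ np.linalg.solve(V, r))

# Two-node network: stream 1 in, stream 2 between the nodes, stream 3 out
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
y = np.array([10.2, 9.8, 10.1])   # measured flows
Sigma = 0.04 * np.eye(3)          # measurement variances (sd = 0.2)
gamma = global_gross_error_test(A, y, Sigma)
# compare gamma against the chi-square 0.95 quantile with 2 df (about 5.99)
```

For these measurements γ ≈ 2.17 < 5.99, so the global test gives no evidence of a gross error; a large γ would flag the data for diagnosis.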