| Ajit C. Tamhane | |
|---|---|
| Occupation(s) | Professor, author and researcher |
| Awards | Fellow, American Statistical Association; Fellow, Institute of Mathematical Statistics; Fellow, American Association for the Advancement of Science; Elected member, International Statistical Institute; Distinguished Alumnus Award, I.I.T. Bombay |
| Academic background | |
| Education | B.Tech. (First Class Honors), Mechanical Engineering; M.S. and Ph.D., Statistics |
| Alma mater | Indian Institute of Technology, Bombay, India; Cornell University, Ithaca, NY |
| Academic work | |
| Institutions | Northwestern University |
Ajit C. Tamhane is a professor in the Department of Industrial Engineering and Management Sciences (IEMS) at Northwestern University and also holds a courtesy appointment in the Department of Statistics. [1]
Tamhane has published over 100 research articles in refereed journals and has authored four books and co-edited two volumes of collected research papers. His research primarily focuses on multiple testing in clinical trials. He has also worked extensively in other areas of statistics, including design of experiments, ranking and selection procedures, chemometrics, clustering methods, and statistical inference. [2]
Tamhane is a fellow of the American Statistical Association, [3] the Institute of Mathematical Statistics, [4] and the American Association for the Advancement of Science, [5] and an elected member of the International Statistical Institute. [6]
Tamhane studied at the Indian Institute of Technology Bombay, and received his B.Tech. in Mechanical Engineering in 1968. He moved to the United States in 1970, earning his Ph.D. in Operations Research and Statistics from Cornell University in 1975 under the supervision of Robert E. Bechhofer. [1]
Following his doctorate, Tamhane joined the IEMS Department at Northwestern University in 1975 as an assistant professor. He was promoted to associate professor in 1979 and to professor in 1987. During 1982–83, he was on sabbatical leave at Cornell University. He has also been a faculty member of the Department of Statistics since that department was established in 1986. [1]
Tamhane has also held several administrative appointments during his career, serving as chair of the IEMS Department from 2001 to 2008 and as senior associate dean of the McCormick School of Engineering and Applied Science from 2008 to 2018. [1]
Tamhane's research encompasses multiple testing in clinical trials, ranking and selection procedures, design of experiments, chemometrics, statistical inference, and clustering methods. His research in these areas has been supported by the National Science Foundation, the National Institutes of Health, and the National Security Agency. He is the author of several books, including Statistics and Data Analysis: From Elementary to Intermediate, [7] Statistical Analysis of Designed Experiments, Predictive Analytics: Parametric Models for Regression and Classification Using R, [8] and Multiple Comparison Procedures. [9] He has also edited two volumes of collected papers: Design of Experiments: Ranking and Selection (with Thomas Santner), published by Marcel Dekker (1984), and Multiple Testing Problems in Pharmaceutical Statistics (with Alex Dmitrienko and Frank Bretz), published by Chapman & Hall (2010). [10]
Tamhane has developed and studied several test procedures for identifying the minimum effective and maximum safe doses of a drug (MINED and MAXSD). [11] He also studied adaptive extensions of a two-stage group sequential procedure (GSP) for testing primary and secondary endpoints, discussed different ways to modify the boundaries of the original group sequential procedure to control the familywise error rate, and provided power comparisons between competing procedures along with clinical trial examples. [12] In a 2011 paper, he defined classes of parallel gatekeeping procedures; the results indicated that the power of multistage gatekeeping procedures can be improved by using α-exhaustive tests for the component procedures. [13] Eric Peritz reviewed Tamhane's book Multiple Comparison Procedures as "a comprehensive monograph" in which "the control of familywise error rates is given the lion's share in the book." [14]
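The familywise error rate mentioned above is the probability of making at least one false rejection across a family of hypothesis tests. As general background only (this is the classical Holm step-down procedure, a standard FWER-controlling method, not one of Tamhane's gatekeeping procedures), FWER control can be sketched as:

```python
def holm_rejections(p_values, alpha=0.05):
    """Return indices of hypotheses rejected by Holm's step-down procedure,
    which controls the familywise error rate at level alpha."""
    m = len(p_values)
    # Sort p-values in ascending order, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = []
    for rank, i in enumerate(order):
        # Step k compares the k-th smallest p-value against alpha / (m - k).
        if p_values[i] <= alpha / (m - rank):
            rejected.append(i)
        else:
            break  # once one test fails, all remaining hypotheses are retained
    return sorted(rejected)

# Hypothetical example: p-values for four trial endpoints.
print(holm_rejections([0.0100, 0.0400, 0.0300, 0.0050]))  # → [0, 3]
```

Holm's procedure is uniformly more powerful than the single-step Bonferroni correction while providing the same FWER guarantee; gatekeeping procedures build on such component tests to respect the hierarchy between primary and secondary endpoints.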
Tamhane's early work, emanating from his doctoral dissertation, was on two-stage and multi-stage screening-type procedures for selecting the best treatment. He studied the design of such procedures, focusing on the sample size requirements. [15] For the problem of testing multiple treatments against a common control, he generalized the classical balanced incomplete block (BIB) designs to what are called balanced treatment incomplete block (BTIB) designs. [16]
In work on chemical engineering applications, Tamhane proposed a novel nonparametric regression method for high-dimensional data, nonlinear partial least squares (NLPLS), and implemented it with feedforward neural networks. He further compared the performance of NLPLS, projection pursuit, and neural networks in terms of response variable prediction and robustness to starting values. [17] He has also conducted multiple studies on the detection of gross errors in process data [18] [19] in chemical process networks. [20]