Larry V. Hedges | |
---|---|
Born | Larry Vernon Hedges |
Nationality | American |
Alma mater | Stanford University (Ph.D., 1980) |
Known for | Meta-analysis; statistical methodology |
Awards | Yidan Prize for Education Research (2018) |
Scientific career | |
Fields | Statistics |
Institutions | University of Chicago; Northwestern University |
Thesis | Combining the Results of Experiments Using Different Scales of Measurement (1980) |
Doctoral advisor | Ingram Olkin |
Larry Vernon Hedges is a researcher in statistical methods for meta-analysis and the evaluation of education policy. He is Professor of Statistics and Education and Social Policy at Northwestern University's Institute for Policy Research. Previously, he was the Stella M. Rowley Distinguished Service Professor of Education, Sociology, Psychology, and Public Policy Studies at the University of Chicago. [1] [2] He is a member of the National Academy of Education and a fellow of the American Academy of Arts and Sciences, the American Educational Research Association, the American Psychological Association, and the American Statistical Association. [3] In 2018, he received the Yidan Prize for Education Research, the world's largest education prize, with an award of about US$4 million. [4]
He has authored a number of articles and books on statistical methods for meta-analysis, which is the use of statistical methods for combining results from different studies. He also suggested several estimators for effect sizes and derived their properties. He carried out research on the relation of resources available to schools and student achievement, most notably the relation between class size and achievement.
In 1981, Hedges published a paper describing the unbiased standardized mean difference, the g statistic. [5] "It turns out that [Cohen's] d has a slight bias, tending to overestimate the absolute value of δ in small samples. This bias can be removed by a simple correction that yields an unbiased estimate of δ, with the unbiased estimate sometimes called Hedges' g." [6]
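A minimal Python sketch of this correction, using the widely cited approximation J = 1 - 3/(4·df - 1) to the exact gamma-function form; the function name and sample values are illustrative, not from Hedges' paper:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample bias correction."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / sp      # Cohen's d, slightly biased upward
    j = 1 - 3 / (4 * df - 1)      # approximate correction factor J
    return j * d                  # Hedges' g

# Hypothetical groups of 10: g is slightly closer to zero than d.
print(hedges_g(5.2, 4.6, 1.1, 1.0, 10, 10))
```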
In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand), and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean.
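As a small illustration of the distinction between the rule and its result, the sketch below applies the sample mean to simulated data drawn from a population with a known mean; all numbers are made up:

```python
import random

random.seed(1)
population_mean = 10.0
# The estimator is the rule "average the observations"; applying it
# to a particular sample yields an estimate of the population mean.
sample = [random.gauss(population_mean, 2.0) for _ in range(50)]
estimate = sum(sample) / len(sample)
print(estimate)  # close to 10.0, but generally not exactly equal
```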
Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference." An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships." Jan Tinbergen is one of the two founding fathers of econometrics. The other, Ragnar Frisch, also coined the term in the sense in which it is used today.
Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
Meta-analysis is the statistical combination of the results of multiple studies addressing a similar research question. An important part of this method involves computing a combined effect size across all of the studies. As such, this statistical approach involves extracting effect sizes and variance measures from various studies. Meta-analyses are integral in supporting research grant proposals, shaping treatment guidelines, and influencing health policies. They are also pivotal in summarizing existing research to guide future studies, thereby cementing their role as a fundamental methodology in metascience.
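A minimal sketch of one common way to compute a combined effect size, assuming a fixed-effect model with inverse-variance weights; the function name and study values are hypothetical, and real meta-analyses involve further steps such as heterogeneity assessment:

```python
def pooled_effect(effects, variances):
    """Fixed-effect pooled estimate via inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    estimate = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    variance = 1.0 / sum(weights)  # variance of the pooled estimate
    return estimate, variance

# Hypothetical per-study effect sizes (e.g., Hedges' g) and variances.
print(pooled_effect([0.30, 0.45, 0.12], [0.04, 0.09, 0.02]))
```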
In statistics, point estimation involves the use of sample data to calculate a single value which is to serve as a "best guess" or "best estimate" of an unknown population parameter. More formally, it is the application of a point estimator to the data to obtain a point estimate.
Statistical bias, in the mathematical field of statistics, is a systematic tendency in which the methods used to gather data and generate statistics present an inaccurate, skewed or biased depiction of reality. Statistical bias exists in numerous stages of the data collection and analysis process, including the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data. Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias have been argued to be closely linked to issues of statistical validity.
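A small simulation can make estimator bias concrete. The sketch below compares the variance estimator that divides by n (systematically too small) with the one that divides by n - 1 (Bessel's correction); all parameters are illustrative:

```python
import random

random.seed(0)
n, reps = 5, 100_000          # small samples, many repetitions
biased = unbiased = 0.0
for _ in range(reps):
    x = [random.gauss(0.0, 2.0) for _ in range(n)]  # true variance is 4.0
    m = sum(x) / n
    ss = sum((xi - m) ** 2 for xi in x)
    biased += ss / n          # dividing by n underestimates systematically
    unbiased += ss / (n - 1)  # Bessel's correction removes the bias
print(biased / reps, unbiased / reps)  # roughly 3.2 versus 4.0
```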
In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss. The fact that MSE is almost always strictly positive is because of randomness or because the estimator does not account for information that could produce a more accurate estimate. In machine learning, specifically empirical risk minimization, MSE may refer to the empirical risk, as an estimate of the true MSE.
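A short simulation illustrating the standard decomposition MSE = variance + bias², here for the sample mean (an unbiased estimator, so its MSE is essentially its variance); the numbers are illustrative:

```python
import random

random.seed(2)
theta, n, reps = 3.0, 20, 50_000
estimates = []
for _ in range(reps):
    x = [random.gauss(theta, 1.0) for _ in range(n)]
    estimates.append(sum(x) / n)       # sample mean as estimator of theta
mean_est = sum(estimates) / reps
bias = mean_est - theta
var = sum((e - mean_est) ** 2 for e in estimates) / reps
mse = sum((e - theta) ** 2 for e in estimates) / reps
print(mse, var + bias**2)  # the two quantities agree closely
```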
In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value. Examples of effect sizes include the correlation between two variables, the regression coefficient in a regression, the mean difference, or the risk of a particular event happening. Effect sizes complement statistical hypothesis testing, and play an important role in power analyses to assess the sample size required for new experiments. Effect sizes are fundamental in meta-analyses, which aim to provide a combined effect size based on data from multiple studies. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics.
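As an illustration, the sketch below computes one of the effect sizes named above, the Pearson correlation between two variables; the data are made up:

```python
def pearson_r(x, y):
    """Correlation coefficient: an effect size for the x-y relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4, 5], [2.0, 2.9, 4.2, 4.8, 6.1]))  # near 1
```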
In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results. The study of publication bias is an important topic in metascience.
Gene V Glass is an American statistician and researcher working in educational psychology and the social sciences. According to the science writer Morton Hunt, he coined the term "meta-analysis" and illustrated its first use in his presidential address to the American Educational Research Association in San Francisco in April 1976. The most extensive illustration of the technique was to the literature on psychotherapy outcome studies, published in 1980 by Johns Hopkins University Press under the title Benefits of Psychotherapy by Mary Lee Smith, Gene V Glass, and Thomas I. Miller. Gene V Glass is a Regents' Professor Emeritus at Arizona State University in both the educational leadership and policy studies and psychology in education divisions, having retired in 2010 from the Mary Lou Fulton Institute and Graduate School of Education. From 2011 to 2020, he was a senior researcher at the National Education Policy Center, a Research Professor in the School of Education at the University of Colorado Boulder, and a Lecturer in the Connie L. Lurie College of Education at San Jose State University. In 2003, he was elected to membership in the National Academy of Education.
In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. Since the sample does not include all members of the population, statistics of the sample, such as means and quartiles, generally differ from the statistics of the entire population. The difference between the sample statistic and population parameter is considered the sampling error. For example, if one measures the height of a thousand individuals from a population of one million, the average height of the thousand is typically not the same as the average height of all one million people in the population.
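The height example can be simulated directly. The sketch below draws a sample of one thousand from a synthetic population of one million and reports the discrepancy between the sample mean and the population mean, i.e. the sampling error; the distribution parameters are made up:

```python
import random

random.seed(3)
# Synthetic "population" of one million heights (cm).
population = [random.gauss(170.0, 10.0) for _ in range(1_000_000)]
pop_mean = sum(population) / len(population)
sample = random.sample(population, 1000)
sample_mean = sum(sample) / len(sample)
print(sample_mean - pop_mean)  # the sampling error of this sample
```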
In statistics, (between-) study heterogeneity is a phenomenon that commonly occurs when attempting to undertake a meta-analysis. In a simplistic scenario, studies whose results are to be combined in the meta-analysis would all be undertaken in the same way and to the same experimental protocols. Differences between outcomes would only be due to measurement error. Study heterogeneity denotes the variability in outcomes that goes beyond what would be expected due to measurement error alone.
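A minimal sketch of two common heterogeneity measures, Cochran's Q and the derived I² statistic, under a fixed-effect pooling of hypothetical study effects; real analyses typically use dedicated meta-analysis software:

```python
def q_and_i2(effects, variances):
    """Cochran's Q and I^2 for between-study heterogeneity."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # percent
    return q, i2

# Hypothetical study effects and variances.
print(q_and_i2([0.30, 0.45, 0.12, 0.60], [0.04, 0.09, 0.02, 0.05]))
```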
In statistics, shrinkage is the reduction in the effects of sampling variation. In regression analysis, a fitted relationship appears to perform less well on a new data set than on the data set used for fitting. In particular, the value of the coefficient of determination "shrinks". This idea is complementary to overfitting and, separately, to the standard adjustment made in the coefficient of determination to compensate for the chance effects of further sampling, such as the potential for new explanatory terms to improve the model by chance: that is, the adjustment formula itself provides "shrinkage". But the adjustment formula yields an artificial shrinkage.
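A small sketch of the adjustment mentioned above, the adjusted coefficient of determination; it shrinks R² more aggressively as explanatory terms are added relative to the sample size. The numbers are illustrative:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p explanatory variables."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# The same raw R^2 shrinks more when many predictors chase few points.
print(adjusted_r2(0.50, 30, 3))   # about 0.44
print(adjusted_r2(0.50, 30, 10))  # about 0.24
```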
Jacob Cohen was an American psychologist and statistician best known for his work on statistical power and effect size, which helped to lay foundations for current statistical meta-analysis and the methods of estimation statistics. He gave his name to such measures as Cohen's kappa, Cohen's d, and Cohen's h.
Ingram Olkin was a professor emeritus and chair of statistics and education at Stanford University and the Stanford Graduate School of Education. He is known for developing statistical analysis for evaluating policies, particularly in education, and for his contributions to meta-analysis, statistics education, multivariate analysis, and majorization theory.
Estimation statistics, or simply estimation, is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data and interpret results. It complements hypothesis testing approaches such as null hypothesis significance testing (NHST) by going beyond the question of whether an effect is present and providing information about how large the effect is. Estimation statistics is sometimes referred to as the new statistics.
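As a sketch of reporting an effect with its precision rather than only a significance verdict, the snippet below computes a large-sample 95% confidence interval for a raw mean difference; the formula assumes approximate normality and the inputs are hypothetical:

```python
import math

def ci_mean_diff(m1, m2, sd1, sd2, n1, n2, z=1.96):
    """Approximate 95% confidence interval for a raw mean difference."""
    diff = m1 - m2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)  # standard error of diff
    return diff - z * se, diff + z * se

print(ci_mean_diff(5.2, 4.6, 1.1, 1.0, 50, 50))  # interval, not a verdict
```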
The Society for Research Synthesis Methodology is an international, interdisciplinary learned society dedicated to promoting and fostering the study of research synthesis: the process whereby the results of multiple scientific studies are combined. It was founded in November 2005, with its organizational meeting held on November 11 and 12 of that year. It has 85 active members. Its official journal is Research Synthesis Methods, which has been published by Wiley since 2010. It holds annual meetings every summer for members to present their research. The president for the 2023–2024 term is Terri Pigott, and the president-elect is Ian Shrier.
In statistics, a sequence of random variables is homoscedastic if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance. The spellings homoskedasticity and heteroskedasticity are also frequently used. Skedasticity comes from the Ancient Greek word skedánnymi, meaning “to scatter”. Assuming a variable is homoscedastic when in reality it is heteroscedastic results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.
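A brief simulation contrasting the two situations: noise with constant variance at every level of a predictor (homoscedastic) versus variance that grows with the predictor (heteroscedastic); all parameters are made up:

```python
import random

random.seed(4)
xs = [1, 2, 3, 4, 5]
# Same variance at every x (homoscedastic) vs. variance growing with x.
homo = {x: [random.gauss(0, 1.0) for _ in range(10_000)] for x in xs}
hetero = {x: [random.gauss(0, 0.5 * x) for _ in range(10_000)] for x in xs}

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

print([round(var(homo[x]), 2) for x in xs])    # roughly constant
print([round(var(hetero[x]), 2) for x in xs])  # increases with x
```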