Phylogenetic autocorrelation

Phylogenetic autocorrelation, also known as Galton's problem after Sir Francis Galton, who first described it, is the problem of drawing inferences from cross-cultural data because of the statistical phenomenon now called autocorrelation. The problem is now recognized as a general one that applies to all nonexperimental studies, and to experimental design as well. It is most simply described as the problem of external dependencies when making statistical estimates from elements that are not statistically independent. Asking two people in the same household whether they watch TV, for example, does not give statistically independent answers; the sample size n of independent observations in this case is one, not two. Once proper adjustments are made for external dependencies, the axioms of probability theory concerning statistical independence apply. These axioms matter for deriving measures of variance, for example, or tests of statistical significance.

Origin

In 1888, Galton was present when Sir Edward Tylor presented a paper at the Royal Anthropological Institute. Tylor had compiled information on institutions of marriage and descent for 350 cultures and examined the associations between these institutions and measures of societal complexity. Tylor interpreted his results as indications of a general evolutionary sequence, in which institutions shift focus from the maternal line to the paternal line as societies become increasingly complex. Galton disagreed, pointing out that similarity between cultures could be due to borrowing, to common descent, or to evolutionary development; he maintained that without controlling for borrowing and common descent one cannot make valid inferences regarding evolutionary development. Galton's critique has become the eponymous Galton's Problem, [1]:175 as named by Raoul Naroll, [2] [3] who proposed the first statistical solutions.

By the early 20th century unilineal evolutionism was abandoned and along with it the drawing of direct inferences from correlations to evolutionary sequences. Galton's criticisms proved equally valid, however, for inferring functional relations from correlations. The problem of autocorrelation remained.

Solutions

In 1914 the statistician William S. Gosset developed methods for eliminating spurious correlation arising from the way position in time or space affects similarity. Today's election polls face a similar problem: the closer a poll is to the election, the less independently individuals make up their minds, and the less reliable the polling results become, especially the margin of error or confidence limits. The effective n of independent cases in the sample drops as the election nears, and statistical significance falls with the lower effective sample size.

The problem arises in sample surveys when sociologists, to reduce travel time for interviews, divide their population into local clusters, sample the clusters randomly, and then sample again within each cluster. If they interview n people in clusters of size m, then in the limiting case where everyone within a cluster gives identical answers, the effective sample size (efs) is not n but n / m, the number of clusters. When there are only partial similarities within clusters, the reduction is milder: the variance of a cluster sample is inflated by the design effect 1 + d (m − 1), where d is the intraclass correlation for the statistic in question, so the efs is n divided by that factor. [4] In general, estimation of the appropriate efs depends on the statistic being estimated: for example, a mean, chi-square, correlation, or regression coefficient, and their variances.
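A minimal sketch of this adjustment, using Kish's standard design effect deff = 1 + d (m − 1) (the function name and the example numbers are illustrative, not from the source):

```python
def effective_sample_size(n, m, d):
    """Effective sample size for n respondents interviewed in clusters of
    average size m, with intraclass correlation d, using Kish's design
    effect deff = 1 + d * (m - 1)."""
    deff = 1.0 + d * (m - 1)
    return n / deff

# 400 respondents interviewed in clusters of 10:
print(effective_sample_size(400, 10, 0.0))  # d = 0, fully independent: 400.0
print(effective_sample_size(400, 10, 1.0))  # d = 1, clusters identical: 40.0 (= number of clusters)
print(effective_sample_size(400, 10, 0.2))  # partial similarity: ~142.9
```

With even a modest intraclass correlation of 0.2, the 400 interviews carry only about as much independent information as 143 truly independent ones.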

For cross-cultural studies, Murdock and White [5] estimated the size of patches of similarities in their sample of 186 societies. The four variables they tested – language, economy, political integration, and descent – had patches of similarities that varied from size three to size ten. A very crude rule of thumb might be to divide n by the square root of the similarity-patch size, giving effective sample sizes of about 107 and 58 for these patches, respectively. Again, statistical significance falls with lower effective sample size.
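The crude rule of thumb works out as follows for the two extreme patch sizes reported:

```python
import math

n = 186  # societies in the Murdock-White Standard Cross-Cultural Sample
for patch_size in (3, 10):  # smallest and largest similarity-patch sizes
    efs = n / math.sqrt(patch_size)
    print(f"patch size {patch_size}: effective n ~ {efs:.0f}")
# patch size 3 gives an effective n of about 107; patch size 10, about 59
```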

In modern analyses, spatial lags have been modelled in order to estimate the influence of globalization on modern societies. [6]

Spatial dependency, or spatial autocorrelation, is a fundamental concept in geography. Methods developed by geographers that measure and control for spatial autocorrelation [7] [8] do far more than reduce the effective n for tests of significance of a correlation. One example is the complicated hypothesis that "the presence of gambling in a society is directly proportional to the presence of a commercial money and to the presence of considerable socioeconomic differences and is inversely related to whether or not the society is a nomadic herding society." [9] Tests of this hypothesis in a sample of 60 societies failed to reject the null hypothesis. Autocorrelation analysis, however, showed a significant effect of socioeconomic differences. [10]

How prevalent is autocorrelation among the variables studied in cross-cultural research? A test by Anthon Eff of 1700 variables in the cumulative database for the Standard Cross-Cultural Sample, published in World Cultures, measured Moran's I for spatial autocorrelation (distance), linguistic autocorrelation (common descent), and autocorrelation in cultural complexity (mainline evolution). "The results suggest that ... it would be prudent to test for spatial and phylogenetic autocorrelation when conducting regression analyses with the Standard Cross-Cultural Sample." [11] Eff illustrates the use of autocorrelation tests in exploratory data analysis, showing how all variables in a given study can be evaluated for nonindependence of cases with respect to distance, language, and cultural complexity. He then explains and illustrates methods for estimating these autocorrelation effects in ordinary least squares regression, again using the Moran I significance measure of autocorrelation.
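Moran's I itself is straightforward to compute: given a weights matrix w expressing how strongly each pair of cases is related (by distance, shared language family, etc.), it is a normalized cross-product of mean deviations. A self-contained sketch on toy data (the weights and values below are illustrative only):

```python
import numpy as np

def morans_i(x, w):
    """Moran's I: (n / sum(w)) * (z' W z) / (z' z), where z are deviations
    from the mean and w is a pairwise weights matrix with zero diagonal."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (x.size / w.sum()) * (z @ w @ z) / (z @ z)

# Toy example: four sites on a line, binary adjacency weights.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([1, 2, 3, 4], w))  # smooth gradient: positive autocorrelation
print(morans_i([1, 4, 1, 4], w))  # alternating values: negative autocorrelation
```

Values near zero indicate no autocorrelation; significantly positive values indicate that neighbouring (related) cases resemble each other, which is the situation Eff found for many SCCS variables.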

When autocorrelation is present, it can often be removed, yielding unbiased estimates of regression coefficients and their variances, by constructing a respecified dependent variable that is "lagged" through a weighted combination of the dependent variable at other locations, where the weights express degree of relationship. This lagged dependent variable is endogenous, so estimation requires either two-stage least squares or maximum likelihood methods. [12]
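The two-stage least squares route can be sketched on synthetic data. This is a simplified illustration, not the source's own procedure: the endogenous spatial lag Wy is instrumented with the exogenous regressors and their lag Wx (Anselin-style instruments), and all weights, parameters, and noise levels below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Row-normalised "ring" weights: each unit's spatial lag is the mean of
# its two neighbours on a circle.
w = np.zeros((n, n))
for i in range(n):
    w[i, (i - 1) % n] = w[i, (i + 1) % n] = 0.5

rho, beta = 0.4, 2.0                      # illustrative true parameters
x = rng.normal(size=n)
eps = rng.normal(scale=0.01, size=n)      # small noise, so recovery is tight
# Spatial lag model: y = rho*W*y + beta*x + eps  =>  y = (I - rho*W)^-1 (beta*x + eps)
y = np.linalg.solve(np.eye(n) - rho * w, beta * x + eps)

# Two-stage least squares: Wy (the lagged dependent variable) is endogenous,
# so instrument it with the exogenous [1, x, Wx].
Z = np.column_stack([np.ones(n), x, w @ y])       # regressors, Wy last
H = np.column_stack([np.ones(n), x, w @ x])       # instruments
Z_hat = H @ np.linalg.lstsq(H, Z, rcond=None)[0]  # first stage: project Z on H
coef, *_ = np.linalg.lstsq(Z_hat, y, rcond=None)  # second stage
print(coef)  # [intercept, beta, rho] -- should be close to [0, 2.0, 0.4]
```

Naive OLS of y on [1, x, Wy] would be biased here because Wy is correlated with the error; the projection in the first stage strips out that endogenous component.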

Resources

A public server at http://SocSciCompute.ss.uci.edu (Archived 2016-02-20 at the Wayback Machine) offers ethnographic data, variables, and tools for inference, with R scripts by Dow (2007) and Eff and Dow (2009), in an NSF-supported Galaxy (http://getgalaxy.org) framework (https://www.xsede.org). It allows instructors, students, and researchers to do "CoSSci Galaxy" cross-cultural research modeling with controls for Galton's problem using Standard Cross-Cultural Sample variables; the SCCS codebook is archived at https://web.archive.org/web/20160402201432/https://dl.dropboxusercontent.com/u/9256203/SCCScodebook.txt.

Opportunities

In anthropology, where Tylor's problem was first recognized by the statistician Galton in 1889, it is still not widely appreciated that there are standard statistical adjustments for the problem of patches of similarity among observed cases, and opportunities for new discoveries using autocorrelation methods. Some cross-cultural researchers (see, e.g., Korotayev and de Munck 2003) [13] have begun to argue that evidence of diffusion, historical origin, and other sources of similarity among related societies or individuals should be seen as Galton's Opportunity or Galton's Asset rather than Galton's Problem. Researchers now routinely use longitudinal, cross-cultural, and regional variation analyses to test competing hypotheses: functional relationships, diffusion, common historical origin, multilineal evolution, co-adaptation with environment, and complex social interaction dynamics. [14]

Controversies

Within anthropology, the problem of phylogenetic autocorrelation is often given as a reason to reject comparative studies altogether. Since the problem is a general one, common to the sciences and to statistical inference generally, this particular criticism of cross-cultural or comparative studies – and there are many – amounts, logically speaking, to a rejection of science and statistics altogether. Any data collected and analyzed by ethnographers, for example, are equally subject to autocorrelation, understood in its most general sense. Nor is a critique of the anticomparative critique limited to statistical comparison, since it applies as well to the analysis of text: the analysis and use of text in argumentation is subject to critique as to the evidential basis of inference, and reliance purely on rhetoric is no protection against critique as to the validity of an argument and its evidentiary basis.

There is little doubt, however, that the community of cross-cultural researchers has been remiss in ignoring autocorrelation. Expert investigation of this question shows results that "strongly suggest that the extensive reporting of naïve chi-square independence tests using cross-cultural data sets over the past several decades has led to incorrect rejection of null hypotheses at levels much higher than the expected 5% rate." [15] :247 The investigator concludes that "Incorrect theories that have been 'saved' by naïve chi-square tests with comparative data may yet be more rigorously tested another day." [15] :270 Once again, the adjusted variance of a cluster sample is the random-sample variance multiplied by the design effect 1 + d (k − 1), where k is the average size of a cluster, and a more complicated correction is given for the variance of contingency-table correlations with r rows and c columns. Since this critique was published in 1993, along with others like it, more authors have begun to adopt corrections for Galton's problem, but the majority in the cross-cultural field have not. Consequently, a large proportion of published results that rely on naive significance tests and adopt a P < 0.05 rather than a P < 0.005 standard are likely to be in error, because they are more susceptible to type I error: rejecting the null hypothesis when it is true.
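A first-order version of such a correction (a simplified sketch in the spirit of design-effect adjustments, not Dow's full contingency-table procedure) deflates the naive chi-square statistic by the design effect before comparing it to the critical value; the numbers below are illustrative:

```python
def adjusted_chi2(chi2_naive, d, k):
    """Deflate a naive chi-square statistic by the design effect
    1 + d * (k - 1), where d is the intraclass correlation and k the
    average cluster (similarity-patch) size."""
    deff = 1.0 + d * (k - 1)
    return chi2_naive / deff

# A naive chi-square of 5.0 with df = 1 clears the 5% critical value of
# 3.84, but with d = 0.3 and average patches of size 4 it no longer does:
print(adjusted_chi2(5.0, 0.3, 4))  # 5.0 / 1.9, about 2.63
```

This is exactly the mechanism by which naive tests "incorrectly reject null hypotheses at levels much higher than the expected 5% rate": the unadjusted statistic is inflated by the clustering.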

Some cross-cultural researchers reject the seriousness of the problem of autocorrelation because, they argue, estimates of correlations and means may be unbiased even when autocorrelation, weak or strong, is present. Without investigating autocorrelation, however, they may still mis-estimate statistics dealing with relationships among variables. First, in regression analysis, examining the patterns of autocorrelated residuals may give important clues to third factors that affect the relationships among variables but have not been included in the regression model. Second, if there are clusters of similar and related societies in the sample, measures of variance will be underestimated, leading to spurious statistical conclusions, such as exaggerated statistical significance of correlations. Third, the underestimated variance makes it difficult to test whether results from two different samples replicate, since the results will too often be judged to differ.


References

  1. Stocking, George W. Jr. (1968). "Edward Burnett Tylor." In David L. Sills (ed.), International Encyclopedia of the Social Sciences. New York: Macmillan Company, v. 16, pp. 170–177.
  2. Raoul Naroll (1961). "Two solutions to Galton's Problem". Philosophy of Science 28: 15–29. doi:10.1086/287778. S2CID 121671403.
  3. Raoul Naroll (1965). "Galton's problem: The logic of cross cultural research". Social Research 32: 428–451.
  4. "Sample Size and Design Effect" (PDF). Archived from the original (PDF) on 2006-04-14. Retrieved 2006-11-01.
  5. George P. Murdock and Douglas R. White (1969). "Standard cross-cultural sample". Ethnology 9: 329–369.
  6. Jahn, Detlef (2006). "Globalization as Galton's Problem: The Missing Link in the Analysis of the Diffusion Patterns in Welfare State Development" (PDF). International Organization 60 (2): 401–431. doi:10.1017/s0020818306060127. S2CID 154976704.
  7. Cliff, A.D., and J.K. Ord. 1973. Spatial Autocorrelation. London: Pion Press.
  8. Cliff, A.D., and J.K. Ord. 1981. Spatial Processes. London: Pion Press.
  9. Pryor, Frederick (1976). "The Diffusion Possibility Method: A More General and Simpler Solution to Galton's Problem". American Ethnologist 3 (4): 731–749. doi:10.1525/ae.1976.3.4.02a00100.
  10. Malcolm M. Dow, Michael L. Burton, Douglas R. White, and Karl P. Reitz (1984). "Galton's problem as network autocorrelation". American Ethnologist 11 (4): 754–770. doi:10.1525/ae.1984.11.4.02a00080. S2CID 143111431.
  11. E. Anthon Eff (2004). "Does Mr. Galton still have a Problem? Autocorrelation in the Standard Cross-Cultural Sample" (PDF). World Cultures 15 (2): 153–170.
  12. Anselin, Luc. 1988. Spatial Econometrics: Methods and Models. Dordrecht: Kluwer Academic Publishers.
  13. Andrey Korotayev and Victor de Munck (2003). "Galton's Asset and Flower's Problem: Cultural Networks and Cultural Units in Cross-Cultural Research". American Anthropologist 105 (2): 353–358. doi:10.1525/aa.2003.105.2.353.
  14. Mace, Ruth; Pagel, Mark (1994). "The Comparative Method in Anthropology". Current Anthropology 35 (5): 549–564. doi:10.1086/204317. S2CID 146297584.
  15. Malcolm M. Dow (1993). "Saving the theory: on chi-square tests with cross-cultural survey data". Cross-Cultural Research 27 (3–4): 247–276. doi:10.1177/106939719302700305. S2CID 122509821.
