Exploratory data analysis

Comparison of a quantitative assessment model for edge intelligence enhancement with actual test results. Both curves start at (0, 1) and still coincide at (80, 70); beyond that point they diverge, with the model consistently exceeding the measured comprehensive proportion. The gap between modeled prediction and empirical outcome is the kind of feature that exploratory data analysis is meant to surface.

In statistics, exploratory data analysis (EDA) is an approach to analyzing data sets in order to summarize their main characteristics, often using statistical graphics and other data visualization methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling task, and it thereby contrasts with traditional hypothesis testing. Exploratory data analysis has been promoted by John Tukey since 1970 to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different from initial data analysis (IDA), [1] [2] which focuses more narrowly on checking the assumptions required for model fitting and hypothesis testing, handling missing values, and making transformations of variables as needed. EDA encompasses IDA.


Overview

Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data." [3]

Exploratory data analysis is a technique for investigating a data set and summarizing its main characteristics. A main advantage of EDA is that the analysis yields visualizations of the data, making its structure directly visible.

Tukey's championing of EDA encouraged the development of statistical computing packages, especially S at Bell Labs. [4] The S programming language inspired the systems S-PLUS and R. This family of statistical-computing environments featured vastly improved dynamic visualization capabilities, which allowed statisticians to identify outliers, trends and patterns in data that merited further study.

Tukey's EDA was related to two other developments in statistical theory: robust statistics and nonparametric statistics, both of which tried to reduce the sensitivity of statistical inferences to errors in formulating statistical models. Tukey promoted the use of the five-number summary of numerical data, namely the two extremes (maximum and minimum), the median, and the quartiles, because the median and quartiles, being functions of the empirical distribution, are defined for all distributions, unlike the mean and standard deviation; moreover, the quartiles and median are more robust to skewed or heavy-tailed distributions than those traditional summaries. The packages S, S-PLUS, and R included routines using resampling statistics, such as Quenouille and Tukey's jackknife and Efron's bootstrap, which are nonparametric and robust (for many problems).
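To make the contrast concrete, here is a minimal sketch in Python with NumPy (an illustrative choice for this article; the lineage Tukey actually fostered ran through S, S-PLUS, and R) that computes a five-number summary and an Efron-style bootstrap interval for the median of a deliberately skewed sample:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # skewed, heavy right tail

    # Five-number summary: minimum, lower quartile, median, upper quartile, maximum.
    # These order statistics are defined for any distribution, unlike the mean and
    # standard deviation, which can be unstable for heavy-tailed data.
    print(np.percentile(data, [0, 25, 50, 75, 100]))

    # Efron-style bootstrap: resample with replacement to gauge the sampling
    # variability of the median without distributional assumptions.
    medians = [np.median(rng.choice(data, size=data.size, replace=True))
               for _ in range(2000)]
    lo, hi = np.percentile(medians, [2.5, 97.5])
    print(f"95% bootstrap interval for the median: ({lo:.2f}, {hi:.2f})")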

Exploratory data analysis, robust statistics, nonparametric statistics, and the development of statistical programming languages facilitated statisticians' work on scientific and engineering problems. Such problems included the fabrication of semiconductors and the understanding of communications networks, which concerned Bell Labs. These statistical developments, all championed by Tukey, were designed to complement the analytic theory of testing statistical hypotheses, particularly the Laplacian tradition's emphasis on exponential families. [5]

Development

Data science process flowchart

John W. Tukey wrote the book Exploratory Data Analysis in 1977. [6] Tukey held that too much emphasis in statistics was placed on statistical hypothesis testing (confirmatory data analysis); more emphasis needed to be placed on using data to suggest hypotheses to test. In particular, he held that confusing the two types of analyses and employing them on the same set of data can lead to systematic bias owing to the issues inherent in testing hypotheses suggested by the data.

The objectives of EDA are to:

  1. Enable unexpected discoveries in the data
  2. Suggest hypotheses about the causes of observed phenomena
  3. Assess assumptions on which statistical inference will be based
  4. Support the selection of appropriate statistical tools and techniques
  5. Provide a basis for further data collection through surveys or experiments [7]

Many EDA techniques have been adopted into data mining. They are also being taught to young students as a way to introduce them to statistical thinking. [8]

Techniques and tools

There are a number of tools that are useful for EDA, but EDA is characterized more by the attitude taken than by particular techniques. [9]

Typical graphical techniques used in EDA are:

  1. Box plot
  2. Histogram
  3. Multi-vari chart
  4. Run chart
  5. Pareto chart
  6. Scatter plot (2D or 3D)
  7. Stem-and-leaf plot
  8. Parallel coordinates
  9. Heat map
  10. Bar chart
  11. Glyph-based visualization methods such as PhenoPlot [10] and Chernoff faces
  12. Interactive versions of these plots

Dimensionality reduction:

  1. Principal component analysis (PCA)
  2. Multidimensional scaling
  3. Nonlinear dimensionality reduction (NLDR)

Typical quantitative techniques are:

  1. Median polish
  2. Trimean
  3. Ordination

History

Many EDA ideas can be traced back to earlier authors, for example:

  1. Francis Galton emphasized order statistics and quantiles.
  2. Arthur Lyon Bowley used precursors of the stemplot and five-number summary (Bowley actually used a "seven-figure summary", including the extremes, deciles, and quartiles, along with the median; see his Elementary Manual of Statistics (3rd edn., 1920)). [11]
  3. Andrew Ehrenberg articulated a philosophy of data reduction.

The Open University course Statistics in Society (MDST 242) took the above ideas and merged them with Gottfried Noether's work, which introduced statistical inference via coin-tossing and the median test.

Example

Findings from EDA are often orthogonal to the primary analysis task. To illustrate, consider an example from Cook et al. where the analysis task is to find the variables which best predict the tip that a dining party will give to the waiter. [12] The variables available in the data collected for this task are: the tip amount, total bill, payer gender, smoking/non-smoking section, time of day, day of the week, and size of the party. The primary analysis task is approached by fitting a regression model where the tip rate is the response variable. The fitted model is

(tip rate) = 0.18 - 0.01 × (party size)

which says that as the size of the dining party increases by one person (leading to a higher bill), the tip rate will decrease by one percentage point (0.01), on average.
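A minimal sketch of fitting such a model by ordinary least squares, using Python with NumPy on a small hypothetical stand-in for the tips data (the actual data set accompanies Cook and Swayne's book; the values below are illustrative only):

    import numpy as np

    # Hypothetical stand-in values, not the real tips data.
    party_size = np.array([1, 2, 2, 3, 3, 4, 4, 5, 6, 6])
    tip_rate = np.array([0.19, 0.17, 0.16, 0.15, 0.16, 0.14, 0.13, 0.13, 0.12, 0.11])

    # Least-squares fit of (tip rate) = a + b * (party size);
    # np.polyfit returns the slope first for a degree-1 polynomial.
    b, a = np.polyfit(party_size, tip_rate, deg=1)
    print(f"(tip rate) = {a:.2f} + ({b:.2f}) * (party size)")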

However, exploring the data reveals other interesting features not described by this model:

  1. The histogram of tip amounts shows peaks at whole-dollar and half-dollar values, reflecting customers' tendency to round their tips.
  2. The scatterplot of tips versus bill has more points in the lower right than in the upper left, suggesting that more customers are very cheap than very generous.
  3. Splitting the scatterplot by payer gender and smoking section shows that smoking parties tip far more variably than non-smoking parties.

What is learned from the plots is different from what is illustrated by the regression model, even though the experiment was not designed to investigate any of these other trends. The patterns found by exploring the data suggest hypotheses about tipping that may not have been anticipated in advance, and which could lead to interesting follow-up experiments where the hypotheses are formally stated and tested by collecting new data.

Software

A wide range of software supports EDA; examples include the R environment (a descendant of Tukey-era S), Python with libraries such as pandas and matplotlib, and interactive tools such as JMP, KNIME, Orange, and GGobi.

See also

Related Research Articles

Interquartile range

In descriptive statistics, the interquartile range (IQR) is a measure of statistical dispersion, which is the spread of the data. The IQR may also be called the midspread, middle 50%, fourth spread, or H-spread. It is defined as the difference between the 75th and 25th percentiles of the data. To calculate the IQR, the data set is divided into four rank-ordered, equal-sized parts, or quartiles, using linear interpolation where necessary. These quartiles are denoted by Q1 (also called the lower quartile), Q2 (the median), and Q3 (also called the upper quartile). The lower quartile corresponds to the 25th percentile and the upper quartile corresponds to the 75th percentile, so IQR = Q3 − Q1.
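A quick check of the definition in Python with NumPy, whose default "linear" percentile method performs exactly the interpolation between order statistics described above:

    import numpy as np

    data = [7, 15, 36, 39, 40, 41]
    q1, q3 = np.percentile(data, [25, 75])
    print("Q1 =", q1, "Q3 =", q3, "IQR =", q3 - q1)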

Box plot

In descriptive statistics, a box plot or boxplot is a method for graphically demonstrating the locality, spread, and skewness of groups of numerical data through their quartiles. In addition to the box, there can be lines extending from it that indicate variability outside the upper and lower quartiles; hence the plot is also called the box-and-whisker plot or box-and-whisker diagram. Outliers that differ significantly from the rest of the dataset may be plotted as individual points beyond the whiskers. Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions about the underlying statistical distribution. The spacings between the parts of the box plot indicate the degree of dispersion (spread) and skewness of the data, which are usually described using the five-number summary. In addition, the box plot allows one to visually estimate various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean. Box plots can be drawn either horizontally or vertically.
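A minimal sketch using Python's matplotlib (an illustrative choice) contrasting a symmetric and a skewed sample; the default whisker rule places whiskers at 1.5 times the IQR, the convention usually attributed to Tukey, with points beyond them drawn individually as outliers:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    samples = [rng.normal(0, 1, 200), rng.exponential(1, 200)]  # symmetric vs. skewed

    plt.boxplot(samples, whis=1.5)  # whiskers at 1.5 * IQR; outliers shown as points
    plt.xticks([1, 2], ["normal", "exponential"])
    plt.ylabel("value")
    plt.show()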

The five-number summary is a set of descriptive statistics that provides information about a dataset. It consists of the five most important sample percentiles:

  1. the sample minimum (smallest observation)
  2. the lower quartile or first quartile
  3. the median
  4. the upper quartile or third quartile
  5. the sample maximum

Nonparametric statistics is a type of statistical analysis that makes minimal assumptions about the underlying distribution of the data being studied. Often these models are infinite-dimensional, rather than finite-dimensional as in parametric statistics. Nonparametric statistics can be used for descriptive statistics or statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are evidently violated.

John Tukey

John Wilder Tukey was an American mathematician and statistician, best known for the development of the fast Fourier transform (FFT) algorithm and the box plot. The Tukey range test, the Tukey lambda distribution, the Tukey test of additivity, and the Teichmüller–Tukey lemma all bear his name. He is also credited with coining the term "bit" and with the first published use of the word "software".

Uncomfortable science, as identified by statistician John Tukey, comprises situations in which there is a need to draw an inference from a limited sample of data, where further samples influenced by the same cause system will not be available. More specifically, it involves the analysis of a finite natural phenomenon for which it is difficult to overcome the problem of using a common sample of data for both exploratory data analysis and confirmatory data analysis. This leads to the danger of systematic bias through testing hypotheses suggested by the data.

XLispStat is a statistical scientific package based on the XLISP language.

Geovisualization or geovisualisation, also known as cartographic visualization, refers to a set of tools and techniques supporting the analysis of geospatial data through the use of interactive visualization.

Data and information visualization

Data and information visualization is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certain domain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data. When intended for the general public to convey a concise version of known, specific information in a clear and engaging manner, it is typically called information graphics.

In statistics the trimean (TM), or Tukey's trimean, is a measure of a probability distribution's location defined as a weighted average of the distribution's median and its two quartiles: TM = (Q1 + 2Q2 + Q3) / 4.
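A one-function sketch in Python with NumPy:

    import numpy as np

    def trimean(x):
        # Tukey's trimean: TM = (Q1 + 2 * Q2 + Q3) / 4.
        q1, q2, q3 = np.percentile(x, [25, 50, 75])
        return (q1 + 2 * q2 + q3) / 4

    # The lone outlier shifts the mean far more than the trimean.
    print(trimean([1, 2, 3, 4, 5, 100]), np.mean([1, 2, 3, 4, 5, 100]))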

Anscombe's quartet

Anscombe's quartet comprises four data sets that have nearly identical simple descriptive statistics, yet have very different distributions and appear very different when graphed. Each dataset consists of eleven (x, y) points. They were constructed in 1973 by the statistician Francis Anscombe to demonstrate both the importance of graphing data when analyzing it, and the effect of outliers and other influential observations on statistical properties. He described the article as being intended to counter the impression among statisticians that "numerical calculations are exact, but graphs are rough".
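The quartet is easy to reproduce; for instance, the Python library seaborn ships a copy as its "anscombe" example dataset, so the near-identical summaries and the very different shapes can be seen side by side:

    import seaborn as sns
    import matplotlib.pyplot as plt

    df = sns.load_dataset("anscombe")  # columns: dataset, x, y

    # Nearly identical means and correlations across the four sets...
    for name, g in df.groupby("dataset"):
        print(name, round(g["x"].mean(), 2), round(g["y"].mean(), 2),
              round(g["x"].corr(g["y"]), 3))

    # ...yet plotting immediately reveals how different they are.
    sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, ci=None)
    plt.show()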

In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample.
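A direct implementation in Python with NumPy (the sample values are a common textbook illustration):

    import numpy as np

    def mad(x):
        # Median of absolute deviations from the sample median.
        x = np.asarray(x, dtype=float)
        return np.median(np.abs(x - np.median(x)))

    print(mad([1, 1, 2, 2, 4, 6, 9]))  # 1.0; the outlying value 9 barely affects it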

The median polish is a simple and robust exploratory data analysis procedure proposed by the statistician John Tukey. The purpose of median polish is to find an additively-fit model for data in a two-way layout table of the form row effect + column effect + overall median.
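A compact sketch of the procedure in Python with NumPy, alternately sweeping row and column medians out of the residuals until only the additive fit remains:

    import numpy as np

    def median_polish(table, n_iter=10):
        # Fit: cell = overall + row_effect + col_effect + residual.
        resid = np.array(table, dtype=float)
        overall = 0.0
        row = np.zeros(resid.shape[0])
        col = np.zeros(resid.shape[1])
        for _ in range(n_iter):
            r = np.median(resid, axis=1)   # sweep row medians into row effects
            row += r
            resid -= r[:, None]
            m = np.median(row)             # recenter row effects into the overall term
            overall += m
            row -= m
            c = np.median(resid, axis=0)   # sweep column medians into column effects
            col += c
            resid -= c[None, :]
            m = np.median(col)
            overall += m
            col -= m
        return overall, row, col, resid

    overall, row, col, resid = median_polish([[1, 2, 3], [4, 5, 6], [10, 8, 12]])
    print(overall, row, col, resid, sep="\n")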

GGobi is a free statistical software tool for interactive data visualization. GGobi allows extensive exploration of the data with interactive dynamic graphics and is also a tool for looking at multivariate data. R can be used in sync with GGobi. The GGobi software can be embedded as a library in other programs and program packages using an application programming interface (API) or as an add-on to existing languages and scripting environments, e.g., with the R command line or from Perl or Python scripts. A distinguishing feature of GGobi is its ability to link multiple graphs together.

L-estimator

In statistics, an L-estimator is an estimator which is a linear combination of order statistics of the measurements. This can be as little as a single point, as in the median, or as many as all points, as in the mean.
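A tiny sketch in Python with NumPy of one such estimator, the midhinge, which puts equal weight on the two quartiles and zero weight elsewhere:

    import numpy as np

    def midhinge(x):
        # L-estimator with weights 1/2 on Q1 and Q3, zero on all other order statistics.
        q1, q3 = np.percentile(x, [25, 75])
        return (q1 + q3) / 2

    print(midhinge([1, 2, 3, 4, 5, 100]))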

Visual analytics

Visual analytics is an outgrowth of the fields of information visualization and scientific visualization that focuses on analytical reasoning facilitated by interactive visual interfaces.

Data Desk is a software program for visual data analysis, visual data exploration, and statistics. It carries out exploratory data analysis (EDA) and standard statistical analyses by means of dynamically linked graphic data displays that update simultaneously whenever the data change.

Heike Hofmann is a statistician and Professor in the Department of Statistics at Iowa State University.

Dianne Helen Cook is an Australian statistician, the editor of the Journal of Computational and Graphical Statistics, and an expert on the visualization of high-dimensional data. She is Professor of Business Analytics in the Department of Econometrics and Business Statistics at Monash University and professor emeritus of statistics at Iowa State University. The emeritus status was chosen so that she could continue to supervise graduate students at Iowa State after moving to Australia.

Andreas Buja is a Swiss statistician and professor of statistics. He is the Liem Sioe Liong/First Pacific Company Professor in the Statistics Department of The Wharton School at the University of Pennsylvania in Philadelphia, United States. Buja joined the Center for Computational Mathematics (CCM) as a Senior Research Scientist in January 2020.

References

  1. Chatfield, C. (1995). Problem Solving: A Statistician's Guide (2nd ed.). Chapman and Hall. ISBN   978-0412606304.
  2. Baillie, Mark; Le Cessie, Saskia; Schmidt, Carsten Oliver; Lusa, Lara; Huebner, Marianne; Topic Group "Initial Data Analysis" of the STRATOS Initiative (2022). "Ten simple rules for initial data analysis". PLOS Computational Biology. 18 (2): e1009819. Bibcode:2022PLSCB..18E9819B. doi: 10.1371/journal.pcbi.1009819 . PMC   8870512 . PMID   35202399.
  3. Tukey, John W. "The Future of Data Analysis" (July 1961).
  4. Becker, Richard A., A Brief History of S, Murray Hill, New Jersey: AT&T Bell Laboratories, archived from the original (PS) on 2015-07-23, retrieved 2015-07-23, ... we wanted to be able to interact with our data, using Exploratory Data Analysis (Tukey, 1971) techniques.
  5. Morgenthaler, Stephan; Fernholz, Luisa T. (2000). "Conversation with John W. Tukey and Elizabeth Tukey, Luisa T. Fernholz and Stephan Morgenthaler". Statistical Science. 15 (1): 79–94. doi: 10.1214/ss/1009212675 .
  6. Tukey, John W. (1977). Exploratory Data Analysis. Pearson. ISBN   978-0201076165.
  7. Behrens, John T. (1997). "Principles and Procedures of Exploratory Data Analysis". American Psychological Association.
  8. Konold, C. (1999). "Statistics goes to school". Contemporary Psychology. 44 (1): 81–82. doi:10.1037/001949.
  9. Tukey, John W. (1980). "We need both exploratory and confirmatory". The American Statistician. 34 (1): 23–25. doi:10.1080/00031305.1980.10482706.
  10. Sailem, Heba Z.; Sero, Julia E.; Bakal, Chris (2015-01-08). "Visualizing cellular imaging data using PhenoPlot". Nature Communications. 6 (1): 5825. Bibcode:2015NatCo...6.5825S. doi:10.1038/ncomms6825. ISSN   2041-1723. PMC   4354266 . PMID   25569359.
  11. Bowley, A. L. Elementary Manual of Statistics (3rd edn., 1920). https://archive.org/details/cu31924013702968/page/n5
  12. Cook, D.; Swayne, D. F. (with Buja, A.; Temple Lang, D.; Hofmann, H.; Wickham, H.; Lawrence, M.) (2007). Interactive and Dynamic Graphics for Data Analysis: With R and GGobi. Springer. ISBN 978-0387717616.

Bibliography

Andrienko, N. & Andrienko, G. (2005). Exploratory Analysis of Spatial and Temporal Data: A Systematic Approach. Springer. ISBN 3-540-25994-5.

Cook, D. and Swayne, D. F. (with A. Buja, D. Temple Lang, H. Hofmann, H. Wickham, M. Lawrence) (2007). Interactive and Dynamic Graphics for Data Analysis: With R and GGobi. Springer. ISBN 9780387717616.

Hoaglin, D. C.; Mosteller, F. & Tukey, John Wilder (eds.) (1985). Exploring Data Tables, Trends and Shapes. ISBN 978-0-471-09776-1.

Hoaglin, D. C.; Mosteller, F. & Tukey, John Wilder (eds.) (1983). Understanding Robust and Exploratory Data Analysis. ISBN 978-0-471-09777-8.

Young, F. W.; Valero-Mora, P. and Friendly, M. (2006). Visual Statistics: Seeing Your Data with Dynamic Interactive Graphics. Wiley. ISBN 978-0-471-68160-1.

Jambu, M. (1991). Exploratory and Multivariate Data Analysis. Academic Press. ISBN 0123800900.

du Toit, S. H. C.; Steyn, A. G. W.; Stumpf, R. H. (1986). Graphical Exploratory Data Analysis. Springer. ISBN 978-1-4612-9371-2.