Iris flower data set

Scatterplot of the data set

The Iris flower data set or Fisher's Iris data set is a multivariate data set used and made famous by the British statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems as an example of linear discriminant analysis.[1] It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species.[2] Two of the three species were collected in the Gaspé Peninsula "all from the same pasture, and picked on the same day and measured at the same time by the same person with the same apparatus".[3]


The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish each species. Fisher's paper was published in the Annals of Eugenics (today the Annals of Human Genetics).[1]
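As a brief illustration of the kind of model Fisher described, a linear discriminant analysis can be fitted to the four measurements in a few lines of Python. This is a minimal sketch, assuming scikit-learn and its bundled copy of the data rather than Fisher's original calculations:

from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Load the 150 samples (4 features each) and their species labels (0, 1, 2)
X, y = load_iris(return_X_y=True)

# Fit linear discriminants and project onto the two discriminant axes
lda = LinearDiscriminantAnalysis(n_components=2)
X_projected = lda.fit_transform(X, y)

# Training accuracy of the resulting classifier (typically around 0.98 on this data)
print(lda.score(X, y))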

Use of the data set

Unsatisfactory k-means clustering (the data cannot be clustered into the known classes) and actual species visualized using ELKI
An example of the so-called "metro map" for the Iris data set. Only a small fraction of Iris virginica is mixed with Iris versicolor. All other samples of the different Iris species belong to the different nodes.

Originally used as an example for Fisher's linear discriminant analysis, the data set became a typical test case for many statistical classification techniques in machine learning, such as support vector machines.[5]
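As a brief, hedged sketch of such a use (assuming scikit-learn and its bundled copy of the data), a support vector machine can be trained and evaluated on a held-out split:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Split the 150 labelled samples into training and test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Train a default RBF-kernel support vector classifier and report held-out accuracy
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # usually well above 0.9 on this kind of split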

The use of this data set in cluster analysis, however, is not common, since the data set contains only two clusters with rather obvious separation. One of the clusters contains Iris setosa, while the other contains both Iris virginica and Iris versicolor and is not separable without the species information Fisher used. This makes the data set a good example of the difference between supervised and unsupervised techniques in data mining: Fisher's linear discriminant model can be obtained only when the object species are known, and class labels and clusters are not necessarily the same.[6]
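This behaviour is easy to reproduce. The sketch below (assuming scikit-learn; it is not part of the original studies) runs k-means with three clusters and compares the result with the species labels; the agreement is clearly imperfect, because Iris versicolor and Iris virginica are not separated without the labels:

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

X, y = load_iris(return_X_y=True)

# Ask k-means for three clusters, even though only two are well separated
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# An adjusted Rand index of 1.0 would mean perfect agreement with the species labels;
# here it is noticeably lower, since versicolor and virginica overlap
print(adjusted_rand_score(y, clusters))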

Nevertheless, all three species of Iris are separable in the projection onto the nonlinear and branching principal component.[7] The data set is approximated by the closest tree with some penalty for an excessive number of nodes, bending and stretching. The so-called "metro map" is then constructed.[4] The data points are projected onto the closest node. For each node a pie diagram of the projected points is prepared, with the area of the pie proportional to the number of projected points. It is clear from the diagram that the vast majority of samples of the different Iris species belong to different nodes. Only a small fraction of Iris virginica is mixed with Iris versicolor (the mixed blue-green nodes in the diagram). Therefore, the three species of Iris (Iris setosa, Iris virginica and Iris versicolor) are separable by the unsupervised procedures of nonlinear principal component analysis. To discriminate them, it is sufficient to select the corresponding nodes on the principal tree.

Data set

Iris setosa

The dataset contains 150 records under five attributes: sepal length, sepal width, petal length, petal width and species.

Iris versicolor
Iris virginica
Spectramap biplot of Fisher's iris data set
Fisher's Iris data
Dataset order | Sepal length (cm) | Sepal width (cm) | Petal length (cm) | Petal width (cm) | Species
1 | 5.1 | 3.5 | 1.4 | 0.2 | I. setosa
2 | 4.9 | 3.0 | 1.4 | 0.2 | I. setosa
3 | 4.7 | 3.2 | 1.3 | 0.2 | I. setosa
4 | 4.6 | 3.1 | 1.5 | 0.2 | I. setosa
5 | 5.0 | 3.6 | 1.4 | 0.3 | I. setosa
6 | 5.4 | 3.9 | 1.7 | 0.4 | I. setosa
7 | 4.6 | 3.4 | 1.4 | 0.3 | I. setosa
8 | 5.0 | 3.4 | 1.5 | 0.2 | I. setosa
9 | 4.4 | 2.9 | 1.4 | 0.2 | I. setosa
10 | 4.9 | 3.1 | 1.5 | 0.1 | I. setosa
11 | 5.4 | 3.7 | 1.5 | 0.2 | I. setosa
12 | 4.8 | 3.4 | 1.6 | 0.2 | I. setosa
13 | 4.8 | 3.0 | 1.4 | 0.1 | I. setosa
14 | 4.3 | 3.0 | 1.1 | 0.1 | I. setosa
15 | 5.8 | 4.0 | 1.2 | 0.2 | I. setosa
16 | 5.7 | 4.4 | 1.5 | 0.4 | I. setosa
17 | 5.4 | 3.9 | 1.3 | 0.4 | I. setosa
18 | 5.1 | 3.5 | 1.4 | 0.3 | I. setosa
19 | 5.7 | 3.8 | 1.7 | 0.3 | I. setosa
20 | 5.1 | 3.8 | 1.5 | 0.3 | I. setosa
21 | 5.4 | 3.4 | 1.7 | 0.2 | I. setosa
22 | 5.1 | 3.7 | 1.5 | 0.4 | I. setosa
23 | 4.6 | 3.6 | 1.0 | 0.2 | I. setosa
24 | 5.1 | 3.3 | 1.7 | 0.5 | I. setosa
25 | 4.8 | 3.4 | 1.9 | 0.2 | I. setosa
26 | 5.0 | 3.0 | 1.6 | 0.2 | I. setosa
27 | 5.0 | 3.4 | 1.6 | 0.4 | I. setosa
28 | 5.2 | 3.5 | 1.5 | 0.2 | I. setosa
29 | 5.2 | 3.4 | 1.4 | 0.2 | I. setosa
30 | 4.7 | 3.2 | 1.6 | 0.2 | I. setosa
31 | 4.8 | 3.1 | 1.6 | 0.2 | I. setosa
32 | 5.4 | 3.4 | 1.5 | 0.4 | I. setosa
33 | 5.2 | 4.1 | 1.5 | 0.1 | I. setosa
34 | 5.5 | 4.2 | 1.4 | 0.2 | I. setosa
35 | 4.9 | 3.1 | 1.5 | 0.2 | I. setosa
36 | 5.0 | 3.2 | 1.2 | 0.2 | I. setosa
37 | 5.5 | 3.5 | 1.3 | 0.2 | I. setosa
38 | 4.9 | 3.6 | 1.4 | 0.1 | I. setosa
39 | 4.4 | 3.0 | 1.3 | 0.2 | I. setosa
40 | 5.1 | 3.4 | 1.5 | 0.2 | I. setosa
41 | 5.0 | 3.5 | 1.3 | 0.3 | I. setosa
42 | 4.5 | 2.3 | 1.3 | 0.3 | I. setosa
43 | 4.4 | 3.2 | 1.3 | 0.2 | I. setosa
44 | 5.0 | 3.5 | 1.6 | 0.6 | I. setosa
45 | 5.1 | 3.8 | 1.9 | 0.4 | I. setosa
46 | 4.8 | 3.0 | 1.4 | 0.3 | I. setosa
47 | 5.1 | 3.8 | 1.6 | 0.2 | I. setosa
48 | 4.6 | 3.2 | 1.4 | 0.2 | I. setosa
49 | 5.3 | 3.7 | 1.5 | 0.2 | I. setosa
50 | 5.0 | 3.3 | 1.4 | 0.2 | I. setosa
51 | 7.0 | 3.2 | 4.7 | 1.4 | I. versicolor
52 | 6.4 | 3.2 | 4.5 | 1.5 | I. versicolor
53 | 6.9 | 3.1 | 4.9 | 1.5 | I. versicolor
54 | 5.5 | 2.3 | 4.0 | 1.3 | I. versicolor
55 | 6.5 | 2.8 | 4.6 | 1.5 | I. versicolor
56 | 5.7 | 2.8 | 4.5 | 1.3 | I. versicolor
57 | 6.3 | 3.3 | 4.7 | 1.6 | I. versicolor
58 | 4.9 | 2.4 | 3.3 | 1.0 | I. versicolor
59 | 6.6 | 2.9 | 4.6 | 1.3 | I. versicolor
60 | 5.2 | 2.7 | 3.9 | 1.4 | I. versicolor
61 | 5.0 | 2.0 | 3.5 | 1.0 | I. versicolor
62 | 5.9 | 3.0 | 4.2 | 1.5 | I. versicolor
63 | 6.0 | 2.2 | 4.0 | 1.0 | I. versicolor
64 | 6.1 | 2.9 | 4.7 | 1.4 | I. versicolor
65 | 5.6 | 2.9 | 3.6 | 1.3 | I. versicolor
66 | 6.7 | 3.1 | 4.4 | 1.4 | I. versicolor
67 | 5.6 | 3.0 | 4.5 | 1.5 | I. versicolor
68 | 5.8 | 2.7 | 4.1 | 1.0 | I. versicolor
69 | 6.2 | 2.2 | 4.5 | 1.5 | I. versicolor
70 | 5.6 | 2.5 | 3.9 | 1.1 | I. versicolor
71 | 5.9 | 3.2 | 4.8 | 1.8 | I. versicolor
72 | 6.1 | 2.8 | 4.0 | 1.3 | I. versicolor
73 | 6.3 | 2.5 | 4.9 | 1.5 | I. versicolor
74 | 6.1 | 2.8 | 4.7 | 1.2 | I. versicolor
75 | 6.4 | 2.9 | 4.3 | 1.3 | I. versicolor
76 | 6.6 | 3.0 | 4.4 | 1.4 | I. versicolor
77 | 6.8 | 2.8 | 4.8 | 1.4 | I. versicolor
78 | 6.7 | 3.0 | 5.0 | 1.7 | I. versicolor
79 | 6.0 | 2.9 | 4.5 | 1.5 | I. versicolor
80 | 5.7 | 2.6 | 3.5 | 1.0 | I. versicolor
81 | 5.5 | 2.4 | 3.8 | 1.1 | I. versicolor
82 | 5.5 | 2.4 | 3.7 | 1.0 | I. versicolor
83 | 5.8 | 2.7 | 3.9 | 1.2 | I. versicolor
84 | 6.0 | 2.7 | 5.1 | 1.6 | I. versicolor
85 | 5.4 | 3.0 | 4.5 | 1.5 | I. versicolor
86 | 6.0 | 3.4 | 4.5 | 1.6 | I. versicolor
87 | 6.7 | 3.1 | 4.7 | 1.5 | I. versicolor
88 | 6.3 | 2.3 | 4.4 | 1.3 | I. versicolor
89 | 5.6 | 3.0 | 4.1 | 1.3 | I. versicolor
90 | 5.5 | 2.5 | 4.0 | 1.3 | I. versicolor
91 | 5.5 | 2.6 | 4.4 | 1.2 | I. versicolor
92 | 6.1 | 3.0 | 4.6 | 1.4 | I. versicolor
93 | 5.8 | 2.6 | 4.0 | 1.2 | I. versicolor
94 | 5.0 | 2.3 | 3.3 | 1.0 | I. versicolor
95 | 5.6 | 2.7 | 4.2 | 1.3 | I. versicolor
96 | 5.7 | 3.0 | 4.2 | 1.2 | I. versicolor
97 | 5.7 | 2.9 | 4.2 | 1.3 | I. versicolor
98 | 6.2 | 2.9 | 4.3 | 1.3 | I. versicolor
99 | 5.1 | 2.5 | 3.0 | 1.1 | I. versicolor
100 | 5.7 | 2.8 | 4.1 | 1.3 | I. versicolor
101 | 6.3 | 3.3 | 6.0 | 2.5 | I. virginica
102 | 5.8 | 2.7 | 5.1 | 1.9 | I. virginica
103 | 7.1 | 3.0 | 5.9 | 2.1 | I. virginica
104 | 6.3 | 2.9 | 5.6 | 1.8 | I. virginica
105 | 6.5 | 3.0 | 5.8 | 2.2 | I. virginica
106 | 7.6 | 3.0 | 6.6 | 2.1 | I. virginica
107 | 4.9 | 2.5 | 4.5 | 1.7 | I. virginica
108 | 7.3 | 2.9 | 6.3 | 1.8 | I. virginica
109 | 6.7 | 2.5 | 5.8 | 1.8 | I. virginica
110 | 7.2 | 3.6 | 6.1 | 2.5 | I. virginica
111 | 6.5 | 3.2 | 5.1 | 2.0 | I. virginica
112 | 6.4 | 2.7 | 5.3 | 1.9 | I. virginica
113 | 6.8 | 3.0 | 5.5 | 2.1 | I. virginica
114 | 5.7 | 2.5 | 5.0 | 2.0 | I. virginica
115 | 5.8 | 2.8 | 5.1 | 2.4 | I. virginica
116 | 6.4 | 3.2 | 5.3 | 2.3 | I. virginica
117 | 6.5 | 3.0 | 5.5 | 1.8 | I. virginica
118 | 7.7 | 3.8 | 6.7 | 2.2 | I. virginica
119 | 7.7 | 2.6 | 6.9 | 2.3 | I. virginica
120 | 6.0 | 2.2 | 5.0 | 1.5 | I. virginica
121 | 6.9 | 3.2 | 5.7 | 2.3 | I. virginica
122 | 5.6 | 2.8 | 4.9 | 2.0 | I. virginica
123 | 7.7 | 2.8 | 6.7 | 2.0 | I. virginica
124 | 6.3 | 2.7 | 4.9 | 1.8 | I. virginica
125 | 6.7 | 3.3 | 5.7 | 2.1 | I. virginica
126 | 7.2 | 3.2 | 6.0 | 1.8 | I. virginica
127 | 6.2 | 2.8 | 4.8 | 1.8 | I. virginica
128 | 6.1 | 3.0 | 4.9 | 1.8 | I. virginica
129 | 6.4 | 2.8 | 5.6 | 2.1 | I. virginica
130 | 7.2 | 3.0 | 5.8 | 1.6 | I. virginica
131 | 7.4 | 2.8 | 6.1 | 1.9 | I. virginica
132 | 7.9 | 3.8 | 6.4 | 2.0 | I. virginica
133 | 6.4 | 2.8 | 5.6 | 2.2 | I. virginica
134 | 6.3 | 2.8 | 5.1 | 1.5 | I. virginica
135 | 6.1 | 2.6 | 5.6 | 1.4 | I. virginica
136 | 7.7 | 3.0 | 6.1 | 2.3 | I. virginica
137 | 6.3 | 3.4 | 5.6 | 2.4 | I. virginica
138 | 6.4 | 3.1 | 5.5 | 1.8 | I. virginica
139 | 6.0 | 3.0 | 4.8 | 1.8 | I. virginica
140 | 6.9 | 3.1 | 5.4 | 2.1 | I. virginica
141 | 6.7 | 3.1 | 5.6 | 2.4 | I. virginica
142 | 6.9 | 3.1 | 5.1 | 2.3 | I. virginica
143 | 5.8 | 2.7 | 5.1 | 1.9 | I. virginica
144 | 6.8 | 3.2 | 5.9 | 2.3 | I. virginica
145 | 6.7 | 3.3 | 5.7 | 2.5 | I. virginica
146 | 6.7 | 3.0 | 5.2 | 2.3 | I. virginica
147 | 6.3 | 2.5 | 5.0 | 1.9 | I. virginica
148 | 6.5 | 3.0 | 5.2 | 2.0 | I. virginica
149 | 6.2 | 3.4 | 5.4 | 2.3 | I. virginica
150 | 5.9 | 3.0 | 5.1 | 1.8 | I. virginica

The Iris data set is widely used as a beginner's dataset for machine learning purposes. It is included in the R base distribution and in the Python machine learning library scikit-learn, so users can access it without having to find a source for it.

Several versions of the dataset have been published. [8]

R code illustrating usage

The example R code shown below reproduces the scatterplot displayed at the top of this article:

# Show the dataset
iris

# Show the help page, with information about the dataset
?iris

# Create scatterplots of all pairwise combination of the 4 variables in the dataset
pairs(iris[1:4],
      main = "Iris Data (red=setosa, green=versicolor, blue=virginica)",
      pch = 21,
      bg = c("red", "green3", "blue")[unclass(iris$Species)])

Python code illustrating usage

from sklearn.datasets import load_iris

iris = load_iris()
iris

This code gives:

{'data': array([[5.1, 3.5, 1.4, 0.2],
                [4.9, 3. , 1.4, 0.2],
                [4.7, 3.2, 1.3, 0.2],
                [4.6, 3.1, 1.5, 0.2],
                ...
 'target': array([0, 0, 0, ... 1, 1, 1, ... 2, 2, 2, ...
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 ...}
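For tabular work, newer versions of scikit-learn can also return the data as a pandas DataFrame. A short sketch, assuming a version that supports the as_frame option:

from sklearn.datasets import load_iris

# One DataFrame with the four measurements plus a numeric 'target' column
iris_df = load_iris(as_frame=True).frame
print(iris_df.shape)   # (150, 5)
print(iris_df.head())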

See also

Related Research Articles

<span class="mw-page-title-main">Data set</span> Collection of data

A data set is a collection of data. In the case of tabular data, a data set corresponds to one or more database tables, where every column of a table represents a particular variable, and each row corresponds to a given record of the data set in question. The data set lists values for each of the variables, such as for example height and weight of an object, for each member of the data set. Data sets can also consist of a collection of documents or files.

Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable, i.e., multivariate random variables. Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied.

Iris (plant) – Genus of flowering plants in the family Iridaceae

Iris is a flowering plant genus of 310 accepted species with showy flowers. As well as being the scientific name, iris is also widely used as a common name for all Iris species, as well as some belonging to other closely related genera. A common name for some species is flags, while the plants of the subgenus Scorpiris are widely known as junos, particularly in horticulture. It is a popular garden flower.

<span class="mw-page-title-main">Principal component analysis</span> Method of data analysis

Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.

<span class="mw-page-title-main">Self-organizing map</span> Machine learning technique useful for dimensionality reduction

A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze.

In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.

<span class="mw-page-title-main">Nonlinear dimensionality reduction</span> Projection of data onto lower-dimensional manifolds

Nonlinear dimensionality reduction, also known as manifold learning, is any of various related techniques that aim to project high-dimensional data, potentially existing across non-linear manifolds which cannot be adequately captured by linear decomposition methods, onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping itself. The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.

Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension. Working in high-dimensional spaces can be undesirable for many reasons; raw data are often sparse as a consequence of the curse of dimensionality, and analyzing the data is usually computationally intractable. Dimensionality reduction is common in fields that deal with large numbers of observations and/or large numbers of variables, such as signal processing, speech recognition, neuroinformatics, and bioinformatics.

<span class="mw-page-title-main">Linear discriminant analysis</span> Method used in statistics, pattern recognition, and other fields

Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

When classification is performed by a computer, statistical methods are normally used to develop the algorithm.

<span class="mw-page-title-main">Tab-separated values</span> Text file format

Tab-separated values (TSV) is a simple, text-based file format for storing tabular data. Records are separated by newlines, and values within a record are separated by tab characters. The TSV format is thus a delimiter-separated values format, similar to comma-separated values.

Iris versicolor – Species of plant

Iris versicolor is also commonly known as the blue flag, harlequin blueflag, larger blue flag, northern blue flag, and poison flag, plus other variations of these names, and in Great Britain and Ireland as purple iris.

Optimal Discriminant Analysis (ODA) and the related classification tree analysis (CTA) are exact statistical methods that maximize predictive accuracy. For any specific sample and exploratory or confirmatory hypothesis, optimal discriminant analysis (ODA) identifies the statistical model that yields maximum predictive accuracy, assesses the exact Type I error rate, and evaluates potential cross-generalizability. Optimal discriminant analysis may be applied to > 0 dimensions, with the one-dimensional case being referred to as UniODA and the multidimensional case being referred to as MultiODA. Optimal discriminant analysis is an alternative to ANOVA and regression analysis.

<span class="mw-page-title-main">Elastic map</span>

Elastic maps provide a tool for nonlinear dimensionality reduction. By their construction, they are a system of elastic springs embedded in the data space. This system approximates a low-dimensional manifold. The elastic coefficients of this system allow the switch from completely unstructured k-means clustering to the estimators located closely to linear PCA manifolds. With some intermediate values of the elasticity coefficients, this system effectively approximates non-linear principal manifolds. This approach is based on a mechanical analogy between principal manifolds, that are passing through "the middle" of the data distribution, and elastic membranes and plates. The method was developed by A.N. Gorban, A.Y. Zinovyev and A.A. Pitenko in 1996–1998.

Iris virginica – Species of flowering plant

Iris virginica, with the common name Virginia blueflag, Virginia iris, great blue flag, or southern blue flag, is a perennial species of flowering plant in the Iridaceae (iris) family, native to central and eastern North America.

mlpy is an open-source Python machine learning library built on top of NumPy/SciPy and the GNU Scientific Library, and it makes extensive use of the Cython language. mlpy provides a wide range of state-of-the-art machine learning methods for supervised and unsupervised problems and is aimed at finding a reasonable compromise among modularity, maintainability, reproducibility, usability and efficiency. mlpy is multiplatform, works with Python 2 and 3, and is distributed under GPL3.

<span class="mw-page-title-main">Alexander Gorban</span> Russian-British scientist (born 1952)

Alexander Nikolaevich Gorban is a scientist of Russian origin, working in the United Kingdom. He is a professor at the University of Leicester, and director of its Mathematical Modeling Centre. Gorban has contributed to many areas of fundamental and applied science, including statistical physics, non-equilibrium thermodynamics, machine learning and mathematical biology.

The multi-surface method (MSM) is a form of decision making using the concept of piecewise-linear separability of datasets to categorize data.

Iris setosa – Species of flowering plant

Iris setosa, the bristle-pointed iris, is a species of flowering plant in the genus Iris of the family Iridaceae; it belongs to the subgenus Limniris and the series Tripetalae. It is a rhizomatous perennial from a wide range across the Arctic sea, including Alaska, Maine, Canada, Russia, northeastern Asia, China, Korea and southwards to Japan. The plant has tall branching stems, mid green leaves and violet, purple-blue, violet-blue, blue, to lavender flowers. There are also plants with pink and white flowers.

The following outline is provided as an overview of and topical guide to machine learning:

References

  1. R. A. Fisher (1936). "The use of multiple measurements in taxonomic problems". Annals of Eugenics. 7 (2): 179–188. doi:10.1111/j.1469-1809.1936.tb02137.x. hdl:2440/15227.
  2. Edgar Anderson (1936). "The species problem in Iris". Annals of the Missouri Botanical Garden. 23 (3): 457–509. doi:10.2307/2394164. JSTOR 2394164.
  3. Edgar Anderson (1935). "The irises of the Gaspé Peninsula". Bulletin of the American Iris Society. 59: 2–5.
  4. Gorban, A. N.; Zinovyev, A. (2010). "Principal manifolds and graphs in practice: from molecular biology to dynamical systems". International Journal of Neural Systems. 20 (3): 219–232. arXiv:1001.1122.
  5. "UCI Machine Learning Repository: Iris Data Set". archive.ics.uci.edu. Retrieved 2017-12-01.
  6. Ines Färber; Stephan Günnemann; Hans-Peter Kriegel; Peer Kröger; Emmanuel Müller; Erich Schubert; Thomas Seidl; Arthur Zimek (2010). "On Using Class-Labels in Evaluation of Clusterings" (PDF). In Xiaoli Z. Fern; Ian Davidson; Jennifer Dy (eds.). MultiClust: Discovering, Summarizing, and Using Multiple Clusterings. ACM SIGKDD.
  7. Gorban, A.N.; Sumner, N.R.; Zinovyev, A.Y. (2007). "Topological grammars for data approximation". Applied Mathematics Letters. 20 (4): 382–386.
  8. Bezdek, J.C.; Keller, J.M.; Krishnapuram, R.; Kuncheva, L.I.; Pal, N.R. (1999). "Will the real iris data please stand up?". IEEE Transactions on Fuzzy Systems. 7 (3): 368–369. doi:10.1109/91.771092.