The Iris flower data set, or Fisher's Iris data set, is a multivariate data set made famous by the British statistician and biologist Ronald Fisher in his 1936 paper "The use of multiple measurements in taxonomic problems" as an example of linear discriminant analysis.[1] It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species.[2] Two of the three species were collected in the Gaspé Peninsula, "all from the same pasture, and picked on the same day and measured at the same time by the same person with the same apparatus".[3]
The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other. Fisher's paper was published in the Annals of Eugenics (today the Annals of Human Genetics).[1]
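Fisher's idea can be sketched with scikit-learn's `LinearDiscriminantAnalysis`, which generalizes his original two-class discriminant; this is a minimal illustration, not a reproduction of Fisher's 1936 computation:

```python
# Fit a linear discriminant model on the four Iris measurements.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis().fit(X, y)

# In-sample accuracy: the three species are almost perfectly separated
# by a linear combination of the four features.
print(lda.score(X, y))
```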
Originally used as an example data set on which Fisher's linear discriminant analysis was applied, it became a typical test case for many statistical classification techniques in machine learning such as support vector machines.[5]
The use of this data set in cluster analysis, however, is not common, since the data set contains only two clusters with rather obvious separation. One cluster contains Iris setosa, while the other contains both Iris virginica and Iris versicolor and is not separable without the species information Fisher used. This makes the data set a good example of the difference between supervised and unsupervised techniques in data mining: Fisher's linear discriminant model can be obtained only when the object species are known; class labels and clusters are not necessarily the same.[6]
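The supervised/unsupervised contrast above can be sketched with k-means clustering (a scikit-learn example; the choice of two clusters follows the discussion above):

```python
# Unsupervised clustering on the same four measurements. With two
# clusters, k-means recovers the obvious split (setosa vs. the rest);
# the versicolor/virginica boundary is not recovered without labels.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# All 50 setosa samples (y == 0) fall into a single cluster.
print(set(labels[y == 0]))
```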
Nevertheless, all three species of Iris are separable in the projection onto the nonlinear and branching principal component.[7] The data set is approximated by the closest tree, with some penalty for an excessive number of nodes and for bending and stretching; from this tree the so-called "metro map" is constructed.[4] The data points are projected onto the closest node, and for each node a pie diagram of the projected points is prepared, with the area of the pie proportional to the number of projected points. It is clear from the diagram that the large majority of the samples of the different Iris species belong to different nodes; only a small fraction of Iris virginica is mixed with Iris versicolor (the mixed blue-green nodes in the diagram). Therefore, the three species of Iris (Iris setosa, Iris virginica and Iris versicolor) are separable by the unsupervised procedure of nonlinear principal component analysis: to discriminate them, it is sufficient to select the corresponding nodes on the principal tree.
The dataset contains a set of 150 records under five attributes: sepal length, sepal width, petal length, petal width and species.
Dataset order | Sepal length | Sepal width | Petal length | Petal width | Species |
---|---|---|---|---|---|
1 | 5.1 | 3.5 | 1.4 | 0.2 | I. setosa |
2 | 4.9 | 3.0 | 1.4 | 0.2 | I. setosa |
3 | 4.7 | 3.2 | 1.3 | 0.2 | I. setosa |
4 | 4.6 | 3.1 | 1.5 | 0.2 | I. setosa |
5 | 5.0 | 3.6 | 1.4 | 0.3 | I. setosa |
6 | 5.4 | 3.9 | 1.7 | 0.4 | I. setosa |
7 | 4.6 | 3.4 | 1.4 | 0.3 | I. setosa |
8 | 5.0 | 3.4 | 1.5 | 0.2 | I. setosa |
9 | 4.4 | 2.9 | 1.4 | 0.2 | I. setosa |
10 | 4.9 | 3.1 | 1.5 | 0.1 | I. setosa |
11 | 5.4 | 3.7 | 1.5 | 0.2 | I. setosa |
12 | 4.8 | 3.4 | 1.6 | 0.2 | I. setosa |
13 | 4.8 | 3.0 | 1.4 | 0.1 | I. setosa |
14 | 4.3 | 3.0 | 1.1 | 0.1 | I. setosa |
15 | 5.8 | 4.0 | 1.2 | 0.2 | I. setosa |
16 | 5.7 | 4.4 | 1.5 | 0.4 | I. setosa |
17 | 5.4 | 3.9 | 1.3 | 0.4 | I. setosa |
18 | 5.1 | 3.5 | 1.4 | 0.3 | I. setosa |
19 | 5.7 | 3.8 | 1.7 | 0.3 | I. setosa |
20 | 5.1 | 3.8 | 1.5 | 0.3 | I. setosa |
21 | 5.4 | 3.4 | 1.7 | 0.2 | I. setosa |
22 | 5.1 | 3.7 | 1.5 | 0.4 | I. setosa |
23 | 4.6 | 3.6 | 1.0 | 0.2 | I. setosa |
24 | 5.1 | 3.3 | 1.7 | 0.5 | I. setosa |
25 | 4.8 | 3.4 | 1.9 | 0.2 | I. setosa |
26 | 5.0 | 3.0 | 1.6 | 0.2 | I. setosa |
27 | 5.0 | 3.4 | 1.6 | 0.4 | I. setosa |
28 | 5.2 | 3.5 | 1.5 | 0.2 | I. setosa |
29 | 5.2 | 3.4 | 1.4 | 0.2 | I. setosa |
30 | 4.7 | 3.2 | 1.6 | 0.2 | I. setosa |
31 | 4.8 | 3.1 | 1.6 | 0.2 | I. setosa |
32 | 5.4 | 3.4 | 1.5 | 0.4 | I. setosa |
33 | 5.2 | 4.1 | 1.5 | 0.1 | I. setosa |
34 | 5.5 | 4.2 | 1.4 | 0.2 | I. setosa |
35 | 4.9 | 3.1 | 1.5 | 0.2 | I. setosa |
36 | 5.0 | 3.2 | 1.2 | 0.2 | I. setosa |
37 | 5.5 | 3.5 | 1.3 | 0.2 | I. setosa |
38 | 4.9 | 3.6 | 1.4 | 0.1 | I. setosa |
39 | 4.4 | 3.0 | 1.3 | 0.2 | I. setosa |
40 | 5.1 | 3.4 | 1.5 | 0.2 | I. setosa |
41 | 5.0 | 3.5 | 1.3 | 0.3 | I. setosa |
42 | 4.5 | 2.3 | 1.3 | 0.3 | I. setosa |
43 | 4.4 | 3.2 | 1.3 | 0.2 | I. setosa |
44 | 5.0 | 3.5 | 1.6 | 0.6 | I. setosa |
45 | 5.1 | 3.8 | 1.9 | 0.4 | I. setosa |
46 | 4.8 | 3.0 | 1.4 | 0.3 | I. setosa |
47 | 5.1 | 3.8 | 1.6 | 0.2 | I. setosa |
48 | 4.6 | 3.2 | 1.4 | 0.2 | I. setosa |
49 | 5.3 | 3.7 | 1.5 | 0.2 | I. setosa |
50 | 5.0 | 3.3 | 1.4 | 0.2 | I. setosa |
51 | 7.0 | 3.2 | 4.7 | 1.4 | I. versicolor |
52 | 6.4 | 3.2 | 4.5 | 1.5 | I. versicolor |
53 | 6.9 | 3.1 | 4.9 | 1.5 | I. versicolor |
54 | 5.5 | 2.3 | 4.0 | 1.3 | I. versicolor |
55 | 6.5 | 2.8 | 4.6 | 1.5 | I. versicolor |
56 | 5.7 | 2.8 | 4.5 | 1.3 | I. versicolor |
57 | 6.3 | 3.3 | 4.7 | 1.6 | I. versicolor |
58 | 4.9 | 2.4 | 3.3 | 1.0 | I. versicolor |
59 | 6.6 | 2.9 | 4.6 | 1.3 | I. versicolor |
60 | 5.2 | 2.7 | 3.9 | 1.4 | I. versicolor |
61 | 5.0 | 2.0 | 3.5 | 1.0 | I. versicolor |
62 | 5.9 | 3.0 | 4.2 | 1.5 | I. versicolor |
63 | 6.0 | 2.2 | 4.0 | 1.0 | I. versicolor |
64 | 6.1 | 2.9 | 4.7 | 1.4 | I. versicolor |
65 | 5.6 | 2.9 | 3.6 | 1.3 | I. versicolor |
66 | 6.7 | 3.1 | 4.4 | 1.4 | I. versicolor |
67 | 5.6 | 3.0 | 4.5 | 1.5 | I. versicolor |
68 | 5.8 | 2.7 | 4.1 | 1.0 | I. versicolor |
69 | 6.2 | 2.2 | 4.5 | 1.5 | I. versicolor |
70 | 5.6 | 2.5 | 3.9 | 1.1 | I. versicolor |
71 | 5.9 | 3.2 | 4.8 | 1.8 | I. versicolor |
72 | 6.1 | 2.8 | 4.0 | 1.3 | I. versicolor |
73 | 6.3 | 2.5 | 4.9 | 1.5 | I. versicolor |
74 | 6.1 | 2.8 | 4.7 | 1.2 | I. versicolor |
75 | 6.4 | 2.9 | 4.3 | 1.3 | I. versicolor |
76 | 6.6 | 3.0 | 4.4 | 1.4 | I. versicolor |
77 | 6.8 | 2.8 | 4.8 | 1.4 | I. versicolor |
78 | 6.7 | 3.0 | 5.0 | 1.7 | I. versicolor |
79 | 6.0 | 2.9 | 4.5 | 1.5 | I. versicolor |
80 | 5.7 | 2.6 | 3.5 | 1.0 | I. versicolor |
81 | 5.5 | 2.4 | 3.8 | 1.1 | I. versicolor |
82 | 5.5 | 2.4 | 3.7 | 1.0 | I. versicolor |
83 | 5.8 | 2.7 | 3.9 | 1.2 | I. versicolor |
84 | 6.0 | 2.7 | 5.1 | 1.6 | I. versicolor |
85 | 5.4 | 3.0 | 4.5 | 1.5 | I. versicolor |
86 | 6.0 | 3.4 | 4.5 | 1.6 | I. versicolor |
87 | 6.7 | 3.1 | 4.7 | 1.5 | I. versicolor |
88 | 6.3 | 2.3 | 4.4 | 1.3 | I. versicolor |
89 | 5.6 | 3.0 | 4.1 | 1.3 | I. versicolor |
90 | 5.5 | 2.5 | 4.0 | 1.3 | I. versicolor |
91 | 5.5 | 2.6 | 4.4 | 1.2 | I. versicolor |
92 | 6.1 | 3.0 | 4.6 | 1.4 | I. versicolor |
93 | 5.8 | 2.6 | 4.0 | 1.2 | I. versicolor |
94 | 5.0 | 2.3 | 3.3 | 1.0 | I. versicolor |
95 | 5.6 | 2.7 | 4.2 | 1.3 | I. versicolor |
96 | 5.7 | 3.0 | 4.2 | 1.2 | I. versicolor |
97 | 5.7 | 2.9 | 4.2 | 1.3 | I. versicolor |
98 | 6.2 | 2.9 | 4.3 | 1.3 | I. versicolor |
99 | 5.1 | 2.5 | 3.0 | 1.1 | I. versicolor |
100 | 5.7 | 2.8 | 4.1 | 1.3 | I. versicolor |
101 | 6.3 | 3.3 | 6.0 | 2.5 | I. virginica |
102 | 5.8 | 2.7 | 5.1 | 1.9 | I. virginica |
103 | 7.1 | 3.0 | 5.9 | 2.1 | I. virginica |
104 | 6.3 | 2.9 | 5.6 | 1.8 | I. virginica |
105 | 6.5 | 3.0 | 5.8 | 2.2 | I. virginica |
106 | 7.6 | 3.0 | 6.6 | 2.1 | I. virginica |
107 | 4.9 | 2.5 | 4.5 | 1.7 | I. virginica |
108 | 7.3 | 2.9 | 6.3 | 1.8 | I. virginica |
109 | 6.7 | 2.5 | 5.8 | 1.8 | I. virginica |
110 | 7.2 | 3.6 | 6.1 | 2.5 | I. virginica |
111 | 6.5 | 3.2 | 5.1 | 2.0 | I. virginica |
112 | 6.4 | 2.7 | 5.3 | 1.9 | I. virginica |
113 | 6.8 | 3.0 | 5.5 | 2.1 | I. virginica |
114 | 5.7 | 2.5 | 5.0 | 2.0 | I. virginica |
115 | 5.8 | 2.8 | 5.1 | 2.4 | I. virginica |
116 | 6.4 | 3.2 | 5.3 | 2.3 | I. virginica |
117 | 6.5 | 3.0 | 5.5 | 1.8 | I. virginica |
118 | 7.7 | 3.8 | 6.7 | 2.2 | I. virginica |
119 | 7.7 | 2.6 | 6.9 | 2.3 | I. virginica |
120 | 6.0 | 2.2 | 5.0 | 1.5 | I. virginica |
121 | 6.9 | 3.2 | 5.7 | 2.3 | I. virginica |
122 | 5.6 | 2.8 | 4.9 | 2.0 | I. virginica |
123 | 7.7 | 2.8 | 6.7 | 2.0 | I. virginica |
124 | 6.3 | 2.7 | 4.9 | 1.8 | I. virginica |
125 | 6.7 | 3.3 | 5.7 | 2.1 | I. virginica |
126 | 7.2 | 3.2 | 6.0 | 1.8 | I. virginica |
127 | 6.2 | 2.8 | 4.8 | 1.8 | I. virginica |
128 | 6.1 | 3.0 | 4.9 | 1.8 | I. virginica |
129 | 6.4 | 2.8 | 5.6 | 2.1 | I. virginica |
130 | 7.2 | 3.0 | 5.8 | 1.6 | I. virginica |
131 | 7.4 | 2.8 | 6.1 | 1.9 | I. virginica |
132 | 7.9 | 3.8 | 6.4 | 2.0 | I. virginica |
133 | 6.4 | 2.8 | 5.6 | 2.2 | I. virginica |
134 | 6.3 | 2.8 | 5.1 | 1.5 | I. virginica |
135 | 6.1 | 2.6 | 5.6 | 1.4 | I. virginica |
136 | 7.7 | 3.0 | 6.1 | 2.3 | I. virginica |
137 | 6.3 | 3.4 | 5.6 | 2.4 | I. virginica |
138 | 6.4 | 3.1 | 5.5 | 1.8 | I. virginica |
139 | 6.0 | 3.0 | 4.8 | 1.8 | I. virginica |
140 | 6.9 | 3.1 | 5.4 | 2.1 | I. virginica |
141 | 6.7 | 3.1 | 5.6 | 2.4 | I. virginica |
142 | 6.9 | 3.1 | 5.1 | 2.3 | I. virginica |
143 | 5.8 | 2.7 | 5.1 | 1.9 | I. virginica |
144 | 6.8 | 3.2 | 5.9 | 2.3 | I. virginica |
145 | 6.7 | 3.3 | 5.7 | 2.5 | I. virginica |
146 | 6.7 | 3.0 | 5.2 | 2.3 | I. virginica |
147 | 6.3 | 2.5 | 5.0 | 1.9 | I. virginica |
148 | 6.5 | 3.0 | 5.2 | 2.0 | I. virginica |
149 | 6.2 | 3.4 | 5.4 | 2.3 | I. virginica |
150 | 5.9 | 3.0 | 5.1 | 1.8 | I. virginica |
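As a quick summary of the table above, the per-species means of the four measurements can be computed from the scikit-learn copy of the data set (which, per the note below on published versions, may differ from Fisher's original in a few entries):

```python
# Per-species means of sepal length, sepal width, petal length and
# petal width, computed from scikit-learn's copy of the data set.
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target
for idx, name in enumerate(iris.target_names):
    print(name, np.round(X[y == idx].mean(axis=0), 2))
```

The printed means make the petal-length gap between setosa and the other two species immediately visible.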
The Iris data set is widely used as a beginner's data set for machine learning purposes. It is included in base R and, for Python, in the machine-learning library scikit-learn, so that users can access it without having to find an external source for it.
Several versions of the dataset have been published.[8]
The example R code shown below reproduces the scatterplot displayed at the top of this article:
```r
# Show the dataset
iris

# Show the help page, with information about the dataset
?iris

# Create scatterplots of all pairwise combinations of the 4 variables in the dataset
pairs(iris[1:4],
      main = "Iris Data (red=setosa, green=versicolor, blue=virginica)",
      pch = 21,
      bg = c("red", "green3", "blue")[unclass(iris$Species)])
```
The Python code below loads the same data set via scikit-learn:

```python
from sklearn.datasets import load_iris
iris = load_iris()
iris
```
This code gives:
```python
{'data': array([[5.1, 3.5, 1.4, 0.2],
                [4.9, 3. , 1.4, 0.2],
                [4.7, 3.2, 1.3, 0.2],
                [4.6, 3.1, 1.5, 0.2],
                ...
 'target': array([0, 0, 0, ... 1, 1, 1, ... 2, 2, 2, ...
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 ...}
```
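A Python analogue of the R `pairs()` plot can be sketched with matplotlib; the color assignment mirrors the R example, and the output filename is an arbitrary choice:

```python
# Pairwise scatterplots of the four Iris measurements, colored by species.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so this runs headless
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target
colors = ["red", "green", "blue"]
n = X.shape[1]

fig, axes = plt.subplots(n, n, figsize=(8, 8))
for i in range(n):
    for j in range(n):
        ax = axes[i, j]
        if i == j:
            # Put the variable name on the diagonal, as pairs() does.
            ax.text(0.5, 0.5, iris.feature_names[i],
                    ha="center", va="center", fontsize=8)
        else:
            ax.scatter(X[:, j], X[:, i],
                       c=[colors[t] for t in y], s=10)
fig.savefig("iris_pairs.png")
```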