The MNIST database (Modified National Institute of Standards and Technology database[1]) is a large database of handwritten digits that is commonly used for training various image processing systems. [2] [3] The database is also widely used for training and testing in the field of machine learning. [4] [5] It was created by "re-mixing" the samples from NIST's original datasets. [6] The creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments. [7] Furthermore, the black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced grayscale levels. [7]
The MNIST database contains 60,000 training images and 10,000 testing images. [8] Half of the training set and half of the test set were taken from NIST's training dataset, while the other half of the training set and the other half of the test set were taken from NIST's testing dataset. [9] The original creators of the database keep a list of some of the methods tested on it. [7] In their original paper, they used a support-vector machine to achieve an error rate of 0.8%. [10]
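The MNIST files themselves are distributed in a simple big-endian binary format (IDX): a 16-byte header (magic number 2051 for image files, then image count, row count, and column count as 32-bit integers) followed by one unsigned byte per pixel. A minimal parser can be sketched as follows; it is run here on a small synthetic buffer standing in for an actual MNIST image file:

```python
import struct

def parse_idx_images(data: bytes):
    """Parse an IDX3 image file (the format used by MNIST) from raw bytes.

    Layout (big-endian): magic 2051, image count, rows, cols, then one
    unsigned byte per pixel (0 = background, 255 = full foreground).
    """
    magic, count, rows, cols = struct.unpack(">IIII", data[:16])
    assert magic == 2051, "not an IDX3 image file"
    images, offset, size = [], 16, rows * cols
    for _ in range(count):
        images.append(list(data[offset:offset + size]))
        offset += size
    return images, rows, cols

# Demo on a synthetic buffer: two flat 28x28 "images".
header = struct.pack(">IIII", 2051, 2, 28, 28)
pixels = bytes([128] * (28 * 28)) + bytes([255] * (28 * 28))
imgs, r, c = parse_idx_images(header + pixels)
print(len(imgs), r, c)  # 2 28 28
```

Real MNIST label files use the same scheme with magic number 2049 and one byte per label.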
Extended MNIST (EMNIST) is a newer dataset developed and released by NIST to be the (final) successor to MNIST. [11] [12] MNIST included images only of handwritten digits, whereas EMNIST includes all the images from NIST Special Database 19, a large database of handwritten uppercase and lowercase letters as well as digits. [13] [14] The images in EMNIST were converted into the same 28x28 pixel format, by the same process, as the MNIST images. Accordingly, tools that work with the older, smaller MNIST dataset will likely work unmodified with EMNIST.
The set of images in the MNIST database was created in 1994 [15] as a combination of two of NIST's databases: Special Database 1 and Special Database 3. Special Database 1 and Special Database 3 consist of digits written by high school students and employees of the United States Census Bureau, respectively. [7]
The original dataset was a set of 128x128 binary images, processed into 28x28 grayscale images. There were originally 60,000 samples in each of the training and testing sets, but 50,000 of the testing samples were discarded. Refer to [16] for a detailed history and a reconstruction of the discarded testing set.
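The grayscale levels arise from anti-aliasing during downsampling: averaging blocks of binary pixels yields intermediate values wherever a block straddles a stroke edge. A minimal sketch of that effect (block averaging only; the real pipeline also normalized the digit's size and position, which this illustration omits):

```python
def block_average(img, k):
    """Downsample a 2D image by averaging non-overlapping k x k blocks.

    Averaging binary (0/255) pixels is a crude form of anti-aliasing:
    blocks straddling a stroke edge come out as intermediate grays.
    """
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h - h % k, k):
        row = []
        for bx in range(0, w - w % k, k):
            total = sum(img[by + dy][bx + dx]
                        for dy in range(k) for dx in range(k))
            row.append(total // (k * k))
        out.append(row)
    return out

# A 4x4 binary image with a vertical stroke down the middle two columns.
binary = [[255 if 1 <= x <= 2 else 0 for x in range(4)] for _ in range(4)]
gray = block_average(binary, 2)
print(gray)  # [[127, 127], [127, 127]] - pure black/white became gray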
Some researchers have achieved "near-human performance" on the MNIST database, using a committee of neural networks; in the same paper, the authors achieve performance double that of humans on other recognition tasks. [17] The highest error rate listed [7] on the original website of the database is 12 percent, which is achieved using a simple linear classifier with no preprocessing. [10]
In 2004, a best-case error rate of 0.42 percent was achieved on the database by researchers using a new classifier called the LIRA, which is a neural classifier with three neuron layers based on Rosenblatt's perceptron principles. [18]
Some researchers have tested artificial intelligence systems using the database put under random distortions. The systems in these cases are usually neural networks and the distortions used tend to be either affine distortions or elastic distortions. [7] Sometimes, these systems can be very successful; one such system achieved an error rate on the database of 0.39 percent. [19]
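An affine distortion of the kind mentioned above combines a small random rotation, scaling, and translation; elastic distortions additionally smooth a random per-pixel displacement field, which this illustration omits. The following is a hedged sketch, not any paper's actual implementation, and the function name and parameter ranges are hypothetical:

```python
import math, random

def affine_distort(img, max_rot=0.15, max_scale=0.1, max_shift=1.5, seed=None):
    """Apply a small random affine distortion (rotation, scaling, shift)
    to a 2D grayscale image, using inverse mapping with nearest-neighbor
    sampling. Samples falling outside the image become background (0).
    """
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    theta = rng.uniform(-max_rot, max_rot)     # rotation angle (radians)
    s = 1.0 + rng.uniform(-max_scale, max_scale)
    tx = rng.uniform(-max_shift, max_shift)
    ty = rng.uniform(-max_shift, max_shift)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Inverse of rotation-by-theta followed by scaling-by-s.
    cos_t, sin_t = math.cos(theta) / s, math.sin(theta) / s
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Map each output pixel back into the source image.
            dx, dy = x - cx - tx, y - cy - ty
            sx = cos_t * dx + sin_t * dy + cx
            sy = -sin_t * dx + cos_t * dy + cy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = img[iy][ix]
    return out

digit = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    digit[y][4] = 255  # a short vertical stroke
warped = affine_distort(digit, seed=0)
print(len(warped), len(warped[0]))  # 8 8 - dimensions are preserved
```

Applying such distortions to each training image effectively enlarges the training set, which is how the systems above obtain more varied examples of each digit.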
In 2011, an error rate of 0.27 percent, improving on the previous best result, was reported by researchers using a similar system of neural networks. [20] In 2013, an approach based on regularization of neural networks using DropConnect was claimed to achieve a 0.21 percent error rate. [21] In 2016, the best single convolutional neural network performance was a 0.25 percent error rate. [22] As of August 2018, the best performance of a single convolutional neural network trained on MNIST data with no data augmentation remained 0.25 percent. [22] [23] The Parallel Computing Center (Khmelnytskyi, Ukraine) obtained an ensemble of only 5 convolutional neural networks that performs on MNIST at a 0.21 percent error rate. [24] [25] Some images in the testing dataset are barely readable and may prevent reaching a test error rate of 0%. [26] In 2018, researchers from the Department of Systems and Information Engineering at the University of Virginia announced a 0.18% error rate using simultaneous stacking of three kinds of neural networks (fully connected, recurrent, and convolutional neural networks). [27]
This is a table of some of the machine learning methods used on the dataset and their error rates, by type of classifier:
Type | Classifier | Distortion | Preprocessing | Error rate (%) |
---|---|---|---|---|
Linear classifier | Pairwise linear classifier | None | Deskewing | 7.6 [10] |
K-Nearest Neighbors | K-NN with rigid transformations | None | None | 0.96 [28] |
K-Nearest Neighbors | K-NN with non-linear deformation (P2DHMDM) | None | Shiftable edges | 0.52 [29] |
Boosted Stumps | Product of stumps on Haar features | None | Haar features | 0.87 [30] |
Non-linear classifier | 40 PCA + quadratic classifier | None | None | 3.3 [10] |
Random Forest | Fast Unified Random Forests for Survival, Regression, and Classification (RF-SRC) [31] | None | Simple statistical pixel importance | 2.8 [32] |
Support-vector machine (SVM) | Virtual SVM, deg-9 poly, 2-pixel jittered | None | Deskewing | 0.56 [33] |
Neural network | 2-layer 784-800-10 | None | None | 1.6 [34] |
Neural network | 2-layer 784-800-10 | Elastic distortions | None | 0.7 [34] |
Deep neural network (DNN) | 6-layer 784-2500-2000-1500-1000-500-10 | Elastic distortions | None | 0.35 [35] |
Convolutional neural network (CNN) | 6-layer 784-40-80-500-1000-2000-10 | None | Expansion of the training data | 0.31 [36] |
Convolutional neural network | 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.27 [37] |
Convolutional neural network (CNN) | 13-layer 64-128(5x)-256(3x)-512-2048-256-256-10 | None | None | 0.25 [22] |
Convolutional neural network | Committee of 35 CNNs, 1-20-P-40-P-150-10 | Elastic distortions | Width normalizations | 0.23 [17] |
Convolutional neural network | Committee of 5 CNNs, 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.21 [24] [25] |
Random Multimodel Deep Learning (RMDL) | 10 NN - 10 RNN - 10 CNN | None | None | 0.18 [27] |
Convolutional neural network | Committee of 20 CNNs with Squeeze-and-Excitation Networks [38] | None | Data augmentation | 0.17 [39] |
Convolutional neural network | Ensemble of 3 CNNs with varying kernel sizes | None | Data augmentation consisting of rotation and translation | 0.09 [40] |
In machine learning, a neural network is a model inspired by the structure and function of biological neural networks in animal brains.
Jürgen Schmidhuber is a German computer scientist noted for his work in the field of artificial intelligence, specifically artificial neural networks. He is a scientific director of the Dalle Molle Institute for Artificial Intelligence Research in Switzerland. He is also director of the Artificial Intelligence Initiative and professor of the Computer Science program in the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) division at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.
An optical neural network (ONN) is a physical implementation of an artificial neural network with optical components.
Transfer learning (TL) is a technique in machine learning (ML) in which knowledge learned from a task is re-used in order to boost performance on a related task. For example, for image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks. This topic is related to the psychological literature on transfer of learning, although practical ties between the two fields are limited. Reusing/transferring information from previously learned tasks to new tasks has the potential to significantly improve learning efficiency.
There are many types of artificial neural networks (ANN).
Deep learning is the subset of machine learning methods based on artificial neural networks (ANNs) with representation learning. The adjective "deep" refers to the use of multiple layers in the network. Methods used can be either supervised, semi-supervised or unsupervised.
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features on its own through filter optimization. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, each neuron in a fully connected layer would require 10,000 weights to process an image sized 100 × 100 pixels, whereas a convolutional layer applying a shared 5x5 kernel to tiles of the image requires only 25 learnable weights. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
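The weight counts in the example above can be checked directly:

```python
# Fully connected: every neuron sees every pixel of a 100 x 100 image,
# so each neuron needs one weight per pixel.
pixels = 100 * 100
fc_weights_per_neuron = pixels      # 10,000 weights per neuron

# Convolutional: one shared 5 x 5 kernel slides over the image, so the
# layer learns just 25 weights regardless of image size.
conv_weights = 5 * 5

print(fc_weights_per_neuron, conv_weights)  # 10000 25
```

The shared kernel is what makes the comparison dramatic: the convolutional weight count is independent of the input resolution.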
Quantum machine learning is the integration of quantum algorithms within machine learning programs.
This page is a timeline of machine learning. Major discoveries, achievements, milestones and other major events in machine learning are included.
The ImageNet project is a large visual database designed for use in visual object recognition software research. More than 14 million images have been hand-annotated by the project to indicate what objects are pictured and in at least one million of the images, bounding boxes are also provided. ImageNet contains more than 20,000 categories, with a typical category, such as "balloon" or "strawberry", consisting of several hundred images. The database of annotations of third-party image URLs is freely available directly from ImageNet, though the actual images are not owned by ImageNet. Since 2010, the ImageNet project runs an annual software contest, the ImageNet Large Scale Visual Recognition Challenge, where software programs compete to correctly classify and detect objects and scenes. The challenge uses a "trimmed" list of one thousand non-overlapping classes.
Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data. Data augmentation has important applications in Bayesian analysis, and the technique is widely used in machine learning to reduce overfitting when training machine learning models, achieved by training models on several slightly-modified copies of existing data.
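For image data, the "slightly-modified copies" can be as simple as one-pixel translations of each training example. A minimal sketch of that idea (the function names here are illustrative, not from any particular library):

```python
def shift(img, dy, dx):
    """Translate a 2D image by (dy, dx), filling vacated pixels with 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out

def augment(img, shifts=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """Return the original image plus one-pixel-shifted copies of it."""
    return [img] + [shift(img, dy, dx) for dy, dx in shifts]

sample = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
copies = augment(sample)
print(len(copies))      # 5 - the original plus four shifted variants
print(copies[1][1][2])  # 255 - the center pixel moved one step right
```

Training on all five copies teaches the model that a digit's identity does not depend on its exact position, which is one way augmentation reduces overfitting.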
AlexNet is the name of a convolutional neural network (CNN) architecture, designed by Alex Krizhevsky in collaboration with Ilya Sutskever and Geoffrey Hinton, who was Krizhevsky's Ph.D. advisor at the University of Toronto.
A residual neural network is a seminal deep learning model in which the weight layers learn residual functions with reference to the layer inputs. It was developed in 2015 for image recognition and won that year's ImageNet Large Scale Visual Recognition Challenge.
The CIFAR-10 dataset is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. The CIFAR-10 dataset contains 60,000 32x32 color images in 10 different classes. The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 6,000 images of each class.
Neural architecture search (NAS) is a technique for automating the design of artificial neural networks (ANN), a widely used model in the field of machine learning. NAS has been used to design networks that are on par with or outperform hand-designed architectures. Methods for NAS can be categorized according to the search space, search strategy, and performance estimation strategy used.
Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by neural circuitry. While some of the computational implementations of ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling that period an "AI winter".
LeNet is a convolutional neural network structure proposed by LeCun et al. in 1998. In general, LeNet refers to LeNet-5, a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons respond to parts of the surrounding cells within their coverage range, and they perform well in large-scale image processing.
Isabelle Guyon is a French-born researcher in machine learning known for her work on support-vector machines, artificial neural networks and bioinformatics. She is a Chair Professor at the University of Paris-Saclay.
The Fashion MNIST dataset is a large freely available database of fashion images that is commonly used for training and testing various machine learning systems. Fashion-MNIST was intended to serve as a replacement for the original MNIST database for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits.