The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. [1] [2] The CIFAR-10 dataset contains 60,000 32x32 color images in 10 different classes. [3] The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 6,000 images of each class. [4]
Computer algorithms for recognizing objects in photos often learn by example. CIFAR-10 is a set of images that can be used to teach a computer how to recognize objects. Because the images in CIFAR-10 are low-resolution (32x32), the dataset allows researchers to quickly try different algorithms to see what works.
CIFAR-10 is a labeled subset of the 80 Million Tiny Images dataset, which was released in 2008; CIFAR-10 itself was published in 2009. When the dataset was created, students were paid to label all of the images. [5]
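The dataset is bundled with several popular machine-learning libraries. As an illustration only, the following sketch loads CIFAR-10 with the torchvision package and prints the basic statistics described above; the library choice and the local "data" directory are assumptions made for the example, not part of the dataset release.

```python
# Illustrative sketch (assumes the torchvision package is installed).
# CIFAR-10 is also distributed with other libraries such as Keras.
from torchvision.datasets import CIFAR10

# Download the standard training and test splits to a local "data" directory (path is arbitrary).
train_set = CIFAR10(root="data", train=True, download=True)
test_set = CIFAR10(root="data", train=False, download=True)

print(len(train_set), len(test_set))   # 50000 training images, 10000 test images
print(train_set.classes)               # ['airplane', 'automobile', 'bird', 'cat', ...]

image, label = train_set[0]            # image is a 32x32 RGB PIL image
print(image.size, train_set.classes[label])
```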
Various kinds of convolutional neural networks tend to be the best at recognizing the images in CIFAR-10.
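None of the architectures from the table below are reproduced here, but as a rough illustration of what a convolutional network for 32x32 CIFAR-10 images looks like, the following is a minimal PyTorch sketch; the layer sizes are arbitrary choices for the example and are not taken from any of the cited papers.

```python
# Minimal illustrative CNN for CIFAR-10 (PyTorch); the layer sizes are arbitrary
# and do not correspond to any architecture cited in the table below.
import torch
import torch.nn as nn

class SmallCifarNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 32x32x3 -> 32x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 16x16x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),   # -> 16x16x64
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 8x8x64
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a batch of four random 32x32 RGB images.
logits = SmallCifarNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10]): one score per class
```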
The table below lists some of the research papers that claim to have achieved state-of-the-art results on the CIFAR-10 dataset. The papers do not all use the same pre-processing techniques, such as image flipping or image shifting (see the sketch after the table). For that reason, a newer state-of-the-art claim may report a higher error rate than an older one and still be valid.
| Paper title | Error rate (%) | Publication date |
|---|---|---|
| Convolutional Deep Belief Networks on CIFAR-10 [6] | 21.1 | August 2010 |
| Maxout Networks [7] | 9.38 | February 13, 2013 |
| Wide Residual Networks [8] | 4.0 | May 23, 2016 |
| Neural Architecture Search with Reinforcement Learning [9] | 3.65 | November 4, 2016 |
| Fractional Max-Pooling [10] | 3.47 | December 18, 2014 |
| Densely Connected Convolutional Networks [11] | 3.46 | August 24, 2016 |
| Shake-Shake regularization [12] | 2.86 | May 21, 2017 |
| Coupled Ensembles of Neural Networks [13] | 2.68 | September 18, 2017 |
| ShakeDrop regularization [14] | 2.67 | February 7, 2018 |
| Improved Regularization of Convolutional Neural Networks with Cutout [15] | 2.56 | August 15, 2017 |
| Regularized Evolution for Image Classifier Architecture Search [16] | 2.13 | February 6, 2018 |
| Rethinking Recurrent Neural Networks and other Improvements for Image Classification [17] | 1.64 | July 31, 2020 |
| AutoAugment: Learning Augmentation Policies from Data [18] | 1.48 | May 24, 2018 |
| A Survey on Neural Architecture Search [19] | 1.33 | May 4, 2019 |
| GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism [20] | 1.00 | November 16, 2018 |
| Reduction of Class Activation Uncertainty with Background Information [21] | 0.95 | May 5, 2023 |
| An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [22] | 0.5 | 2021 |
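As an example of the pre-processing differences mentioned above, the sketch below applies two augmentations that are common in CIFAR-10 papers: random shifting via padded random crops, and random horizontal flips. The exact padding and crop values are conventional choices used here for illustration, not something mandated by the dataset.

```python
# Common CIFAR-10 training-time augmentations (illustrative; the specific
# values are conventional choices, not prescribed by the dataset itself).
from torchvision import transforms
from torchvision.datasets import CIFAR10

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # random shift: pad to 40x40, crop back to 32x32
    transforms.RandomHorizontalFlip(),      # random left-right flip
    transforms.ToTensor(),
])

# Results from papers that train with and without such augmentation are not directly comparable.
train_set = CIFAR10(root="data", train=True, download=True, transform=train_transform)
```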
CIFAR-10 is also used as a performance benchmark for teams competing to run neural networks faster and more cheaply. DAWNBench publishes benchmark data on its website.
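Benchmarks of this kind typically measure the time (or cost) needed to reach a fixed accuracy target rather than the final accuracy itself. A minimal sketch of such a measurement is shown below; the training and evaluation callables are hypothetical placeholders supplied by the caller, and the 94% target is an illustrative fixed-accuracy threshold, not an official value quoted here.

```python
# Sketch of a time-to-accuracy measurement in the spirit of competitive
# training benchmarks. The callables are hypothetical placeholders; the
# default 94% target is illustrative of a fixed-accuracy threshold.
import time

def time_to_accuracy(train_one_epoch, evaluate, target=0.94, max_epochs=200):
    """Return wall-clock seconds of training needed to reach the accuracy target."""
    start = time.time()
    for epoch in range(max_epochs):
        train_one_epoch()               # caller-supplied: one pass over the training data
        if evaluate() >= target:        # caller-supplied: test-set accuracy in [0, 1]
            return time.time() - start
    return None                         # target never reached within max_epochs
```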