SqueezeNet

Original author(s): Forrest Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, Bill Dally, Kurt Keutzer
Initial release: 22 February 2016
Stable release: v1.1 (June 6, 2016)
Repository: github.com/DeepScale/SqueezeNet
Type: Deep neural network
License: BSD license

In computer vision, SqueezeNet is a deep neural network for image classification that was released in 2016. SqueezeNet was developed by researchers at DeepScale, the University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters while achieving competitive accuracy. [1]


Framework support for SqueezeNet

SqueezeNet was originally released on February 22, 2016. [2] This original version of SqueezeNet was implemented on top of the Caffe deep learning software framework. Shortly thereafter, the open-source research community ported SqueezeNet to a number of other deep learning frameworks. On February 26, 2016, Eddie Bell released a port of SqueezeNet for the Chainer deep learning framework. [3] On March 2, 2016, Guo Haria released a port of SqueezeNet for the Apache MXNet framework. [4] On June 3, 2016, Tammy Yang released a port of SqueezeNet for the Keras framework. [5] In 2017, companies including Baidu, Xilinx, Imagination Technologies, and Synopsys demonstrated SqueezeNet running on low-power processing platforms such as smartphones, FPGAs, and custom processors. [6] [7] [8] [9]

As of 2018, SqueezeNet ships "natively" as part of the source code of a number of deep learning frameworks such as PyTorch, Apache MXNet, and Apple CoreML. [10] [11] [12] In addition, third-party developers have created implementations of SqueezeNet that are compatible with frameworks such as TensorFlow. [13] Below is a summary of frameworks that support SqueezeNet, followed by a short loading sketch.

Framework | SqueezeNet support | References
Apache MXNet | Native | [11]
Apple CoreML | Native | [12]
Caffe2 | Native | [14]
Keras | Third party | [5]
MATLAB Deep Learning Toolbox | Native | [15]
ONNX | Native | [16]
PyTorch | Native | [10]
TensorFlow | Third party | [13]
Wolfram Mathematica | Native | [17]
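
As a concrete illustration of "native" support, the sketch below loads the pretrained model through torchvision in PyTorch. This is a usage sketch, not part of the original release, and it assumes torchvision 0.13 or later (which introduced the weights argument).

```python
import torch
from torchvision import models

# Load SqueezeNet v1.1 with its ImageNet-pretrained weights
# (the model ships natively with torchvision).
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
model.eval()

# Classify a dummy 224x224 RGB image; real inputs need the usual
# ImageNet resizing and normalization first.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.argmax(dim=1))  # predicted ImageNet class index
```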

Relationship to AlexNet

SqueezeNet was originally described in a paper entitled "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size." [18] AlexNet is a deep neural network that has 240 MB of parameters, while SqueezeNet has just 5 MB. Such a small model fits more easily into computer memory and can be transmitted more easily over a computer network. However, SqueezeNet is not a "squeezed version of AlexNet"; rather, it is an entirely different DNN architecture. [19] What SqueezeNet and AlexNet have in common is that both achieve approximately the same accuracy when evaluated on the ImageNet image classification validation dataset.
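
As a rough check on these figures, the parameter storage of the torchvision reference implementations can be tallied directly at 4 bytes per 32-bit parameter. This is an illustrative sketch assuming torchvision is installed; the printed sizes are approximate.

```python
from torchvision import models

def param_size_mb(model):
    # Total float32 parameter storage: 4 bytes per parameter.
    return sum(p.numel() for p in model.parameters()) * 4 / 1e6

# Architecture definitions only; no pretrained weights are downloaded.
print(f"AlexNet:    {param_size_mb(models.alexnet()):.0f} MB")        # ~244 MB
print(f"SqueezeNet: {param_size_mb(models.squeezenet1_0()):.0f} MB")  # ~5 MB
```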

Relationship to Deep Compression

Model compression (e.g. quantization and pruning of model parameters) can be applied to a deep neural network after it has been trained. [20] In the SqueezeNet paper, the authors demonstrated that a model compression technique called Deep Compression can be applied to SqueezeNet to further reduce the size of the parameter file from 5 MB to 500 KB. [18] Deep Compression has also been applied to other DNNs, such as AlexNet and VGG. [21]
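
Deep Compression combines pruning, trained quantization (weight sharing), and Huffman coding. The sketch below illustrates only the first two stages on a raw NumPy weight array; the sparsity and cluster-count settings are made-up examples, and simple uniform binning stands in for the k-means clustering used in the paper. It is not the authors' implementation.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    # Pruning stage: zero out the smallest-magnitude weights.
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def share_weights(w, n_clusters=16):
    # Weight-sharing stage: snap each surviving weight to the nearest of
    # n_clusters shared values (uniform bins here; the paper uses k-means).
    nonzero = w[w != 0]
    bins = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    nearest = np.abs(nonzero[:, None] - bins[None, :]).argmin(axis=1)
    out = w.copy()
    out[w != 0] = bins[nearest]
    return out

w = np.random.randn(10_000).astype(np.float32)
compressed = share_weights(magnitude_prune(w))
# ~1,000 nonzero weights remain, drawn from at most 16 shared values.
print(np.count_nonzero(compressed), np.unique(compressed[compressed != 0]).size)
```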

Offshoots of SqueezeNet

Some of the members of the original SqueezeNet team have continued to develop resource-efficient deep neural networks for a variety of applications. A few of these works are noted in the following table. As with the original SqueezeNet model, the open-source research community has ported and adapted these newer "squeeze"-family models for compatibility with multiple deep learning frameworks.

DNN model | Application | Original implementation | Other implementations
SqueezeDet [22] [23] | Object detection on images | TensorFlow [24] | Caffe, [25] Keras [26] [27] [28]
SqueezeSeg [29] | Semantic segmentation of LIDAR | TensorFlow [30] |
SqueezeNext [31] | Image classification | Caffe [32] | TensorFlow, [33] Keras, [34] PyTorch [35]
SqueezeNAS [36] [37] | Neural architecture search for semantic segmentation | PyTorch [38] |

In addition, the open-source research community has extended SqueezeNet to other applications, including semantic segmentation of images and style transfer. [39] [40] [41]

Related Research Articles

MNIST database: Database of handwritten digits

The MNIST database is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning. It was created by "re-mixing" the samples from NIST's original datasets. The creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments. Furthermore, the black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced grayscale levels.

A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter optimization. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, a single neuron in a fully connected layer would require 10,000 weights to process an image sized 100 × 100 pixels, whereas a cascaded 5 × 5 convolution kernel needs only 25 shared weights to process the image in 5 × 5 tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
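
To make the weight-count comparison concrete, the following sketch (illustrative, using PyTorch layers sized to the example above) counts the parameters of one fully connected neuron versus one 5 × 5 convolution kernel:

```python
import torch.nn as nn

# One fully connected output neuron over a 100x100 image: one weight per pixel.
fc = nn.Linear(in_features=100 * 100, out_features=1)
print(sum(p.numel() for p in fc.parameters()))    # 10001 = 10,000 weights + 1 bias

# One 5x5 convolution kernel, shared across every 5x5 tile of the image.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=5)
print(sum(p.numel() for p in conv.parameters()))  # 26 = 25 weights + 1 bias
```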

Eclipse Deeplearning4j is a programming library written in Java for the Java virtual machine (JVM). It is a framework with wide support for deep learning algorithms. Deeplearning4j includes implementations of the restricted Boltzmann machine, deep belief net, deep autoencoder, stacked denoising autoencoder and recursive neural tensor network, word2vec, doc2vec, and GloVe. These algorithms all include distributed parallel versions that integrate with Apache Hadoop and Spark.

TensorFlow: Machine learning software library

TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.

The following table compares notable software frameworks, libraries and computer programs for deep learning.

A neural Turing machine (NTM) is a recurrent neural network model of a Turing machine. The approach was published by Alex Graves et al. in 2014. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers.

Keras: Neural network library

Keras is an open-source library that provides a Python interface for artificial neural networks. Keras acts as an interface for the TensorFlow library.

Apache MXNet is an open-source deep learning software framework that trains and deploys deep neural networks. It is scalable, allows fast model training, and supports a flexible programming model and multiple programming languages. The MXNet library is portable and can scale to multiple GPUs and machines. It was co-developed by Carlos Guestrin at the University of Washington.

spaCy: Software library

spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. The library is published under the MIT license and its main developers are Matthew Honnibal and Ines Montani, the founders of the software company Explosion.

Caffe is a deep learning framework, originally developed at the University of California, Berkeley. It is open source, under a BSD license, and written in C++ with a Python interface.

PyTorch is a machine learning framework based on the Torch library, used for applications such as computer vision and natural language processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. It is free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.

Deep image prior is a type of convolutional neural network used to enhance a given image with no prior training data other than the image itself. A neural network is randomly initialized and used as prior to solve inverse problems such as noise reduction, super-resolution, and inpainting. Image statistics are captured by the structure of a convolutional image generator rather than by any previously learned capabilities.

Neural architecture search (NAS) is a technique for automating the design of artificial neural networks (ANN), a widely used model in the field of machine learning. NAS has been used to design networks that are on par with or outperform hand-designed architectures. Methods for NAS can be categorized according to the search space, search strategy, and performance estimation strategy used.

U-Net is a convolutional neural network that was developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg. The network is based on a fully convolutional neural network whose architecture was modified and extended to work with fewer training images and to yield more precise segmentation. Segmentation of a 512 × 512 image takes less than a second on a modern GPU.

DeepScale, Inc. was an American technology company headquartered in Mountain View, California, that developed perceptual system technologies for automated vehicles. On October 1, 2019, the company was acquired by Tesla, Inc.

PlaidML is a portable tensor compiler. Tensor compilers bridge the gap between the universal mathematical descriptions of deep learning operations, such as convolution, and the platform and chip-specific code needed to perform those operations with good performance. Internally, PlaidML makes use of the Tile eDSL to generate OpenCL, OpenGL, LLVM, or CUDA code. It enables deep learning on devices where the available computing hardware is either not well supported or the available software stack contains only proprietary components. For example, it does not require the usage of CUDA or cuDNN on Nvidia hardware, while achieving comparable performance.

Forrest N. Iandola is an American computer scientist specializing in efficient AI.

Horovod (machine learning)

Horovod is a free and open-source software framework for distributed deep learning training using TensorFlow, Keras, PyTorch, and Apache MXNet. Horovod is hosted under the Linux Foundation AI. Horovod has the goal of improving the speed, scale, and resource allocation when training a machine learning model.

Neural Network Intelligence: Microsoft open source library

NNI is a free and open-source AutoML toolkit developed by Microsoft. It is used to automate feature engineering, model compression, neural architecture search, and hyper-parameter tuning.

A vision transformer (ViT) is a transformer designed for computer vision. A ViT breaks down an input image into a series of patches, serialises each patch into a vector, and maps it to a smaller dimension with a single matrix multiplication. These vector embeddings are then processed by a transformer encoder as if they were token embeddings.
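
The patch-serialisation step can be sketched in a few lines of PyTorch; the 16-pixel patch size and 384-dimensional embedding below are assumed example values, not part of the description above:

```python
import torch

def patchify(images, patch=16):
    # Split (B, C, H, W) images into a sequence of flattened patches:
    # result shape (B, num_patches, C * patch * patch).
    B, C, H, W = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)
    return x

images = torch.randn(2, 3, 224, 224)
tokens = patchify(images)          # (2, 196, 768): 14x14 patches per image
embed = torch.nn.Linear(768, 384)  # the single matrix multiplication
print(embed(tokens).shape)         # (2, 196, 384) token embeddings
```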

References

  1. Ganesh, Abhinav. "Deep Learning Reading Group: SqueezeNet". KDnuggets. Retrieved 2018-04-07.
  2. "SqueezeNet". GitHub. 2016-02-22. Retrieved 2018-05-12.
  3. Bell, Eddie (2016-02-26). "An implementation of SqueezeNet in Chainer". GitHub. Retrieved 2018-05-12.
  4. Haria, Guo (2016-03-02). "SqueezeNet for MXNet". GitHub. Retrieved 2018-05-12.
  5. Yang, Tammy (2016-06-03). "SqueezeNet Keras Implementation". GitHub. Retrieved 2018-05-12.
  6. Chirgwin, Richard (2017-09-26). "Baidu puts open source deep learning into smartphones". The Register. Retrieved 2018-04-07.
  7. Bush, Steve (2018-01-25). "Neural network SDK for PowerVR GPUs". Electronics Weekly. Retrieved 2018-04-07.
  8. Yoshida, Junko (2017-03-13). "Xilinx AI Engine Steers New Course". EE Times. Retrieved 2018-05-13.
  9. Boughton, Paul (2017-08-28). "Deep learning computer vision algorithms ported to processor IP". Engineer Live. Retrieved 2018-04-07.
  10. "squeezenet.py". GitHub: PyTorch. Retrieved 2018-05-12.
  11. "squeezenet.py". GitHub: Apache MXNet. Retrieved 2018-04-07.
  12. "CoreML". Apple. Retrieved 2018-04-10.
  13. Poster, Domenick. "Tensorflow implementation of SqueezeNet". GitHub. Retrieved 2018-05-12.
  14. Inkawhich, Nathan. "SqueezeNet Model Quickload Tutorial". GitHub: Caffe2. Retrieved 2018-04-07.
  15. "SqueezeNet for MATLAB Deep Learning Toolbox". Mathworks. Retrieved 2018-10-03.
  16. Fang, Lu. "SqueezeNet for ONNX". Open Neural Network eXchange.
  17. "SqueezeNet V1.1 Trained on ImageNet Competition Data". Wolfram Neural Net Repository. Retrieved 2018-05-12.
  18. Iandola, Forrest N; Han, Song; Moskewicz, Matthew W; Ashraf, Khalid; Dally, William J; Keutzer, Kurt (2016). "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size". arXiv: 1602.07360 [cs.CV].
  19. "SqueezeNet". Short Science. Retrieved 2018-05-13.
  20. Gude, Alex (2016-08-09). "Lab41 Reading Group: Deep Compression". Retrieved 2018-05-08.
  21. Han, Song (2016-11-06). "Compressing and regularizing deep neural networks". O'Reilly. Retrieved 2018-05-08.
  22. Wu, Bichen; Wan, Alvin; Iandola, Forrest; Jin, Peter H.; Keutzer, Kurt (2016). "SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving". arXiv: 1612.01051 [cs.CV].
  23. Nunes Fernandes, Edgar (2017-03-02). "Introducing SqueezeDet: low power fully convolutional neural network framework for autonomous driving". The Intelligence of Information. Retrieved 2019-03-31.
  24. Wu, Bichen (2016-12-08). "SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving". GitHub. Retrieved 2018-12-26.
  25. Kuan, Xu (2017-12-20). "Caffe SqueezeDet". GitHub. Retrieved 2018-12-26.
  26. Padmanabha, Nischal (2017-03-20). "SqueezeDet on Keras". GitHub. Retrieved 2018-12-26.
  27. Ehmann, Christopher (2018-05-29). "Fast object detection with SqueezeDet on Keras". Medium. Retrieved 2019-03-31.
  28. Ehmann, Christopher (2018-05-02). "A deeper look into SqueezeDet on Keras". Medium. Retrieved 2019-03-31.
  29. Wu, Bichen; Wan, Alvin; Yue, Xiangyu; Keutzer, Kurt (2017). "SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud". arXiv: 1710.07368 [cs.CV].
  30. Wu, Bichen (2017-12-06). "SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud". GitHub. Retrieved 2018-12-26.
  31. Gholami, Amir; Kwon, Kiseok; Wu, Bichen; Tai, Zizheng; Yue, Xiangyu; Jin, Peter; Zhao, Sicheng; Keutzer, Kurt (2018). "SqueezeNext: Hardware-Aware Neural Network Design". arXiv: 1803.10615 [cs.CV].
  32. Gholami, Amir (2018-04-18). "SqueezeNext". GitHub. Retrieved 2018-12-29.
  33. Verhulsdonck, Tijmen (2018-07-09). "SqueezeNext Tensorflow: A tensorflow Implementation of SqueezeNext". GitHub. Retrieved 2018-12-29.
  34. Sémery, Oleg (2018-09-24). "SqueezeNext, implemented in Keras". GitHub. Retrieved 2018-12-29.
  35. Lu, Yi (2018-06-21). "SqueezeNext.PyTorch". GitHub. Retrieved 2018-12-29.
  36. Shaw, Albert; Hunter, Daniel; Iandola, Forrest; Sidhu, Sammy (2019). "SqueezeNAS: Fast neural architecture search for faster semantic segmentation". arXiv: 1908.01748 [cs.LG].
  37. Yoshida, Junko (2019-08-25). "Does Your AI Chip Have Its Own DNN?". EE Times. Retrieved 2019-09-12.
  38. Shaw, Albert (2019-08-27). "SqueezeNAS". GitHub. Retrieved 2019-09-12.
  39. Treml, Michael; et al. (2016). "Speeding up Semantic Segmentation for Autonomous Driving". NIPS MLITS Workshop. Retrieved 2019-07-21.
  40. Zeng, Li (2017-03-22). "SqueezeNet Neural Style on PyTorch". GitHub. Retrieved 2019-07-21.
  41. Wu, Bichen; Keutzer, Kurt (2017). "The Impact of SqueezeNet" (PDF). UC Berkeley. Retrieved 2019-07-21.