Chainer

Original author(s): Seiya Tokui
Developer(s): Community, Preferred Networks, Inc.
Initial release: June 9, 2015 (2015-06-09) [1] [2]
Stable release: 7.7.0 [3] / July 30, 2020
Written in: Python
Platform: cross-platform
Available in: Python
Type: Deep learning library
License: MIT
Website: chainer.org

Chainer is an open source deep learning framework written purely in Python on top of the NumPy and CuPy libraries. Its development is led by the Japanese venture company Preferred Networks in partnership with IBM, Intel, Microsoft, and Nvidia. [4] [5] [6] [7]

Chainer is notable for its early adoption of the "define-by-run" scheme, as well as its performance on large-scale systems. [1] The first version was released in June 2015, and the framework has since gained wide popularity in Japan. [1] [2] In 2017, KDnuggets listed it among the top 10 open source machine learning Python projects. [8]

In December 2019, Preferred Networks announced that it was moving its development effort from Chainer to PyTorch and would provide only maintenance patches after the release of v7. [9]

Define-by-run

Chainer was the first deep learning framework to introduce the define-by-run approach. [10] [11] Traditionally, training a network proceeded in two phases: first define the fixed connections between mathematical operations (such as matrix multiplication and nonlinear activations) in the network, then run the actual training calculation. This is called the define-and-run or static-graph approach. Theano and TensorFlow are among the notable frameworks that took this approach. In the define-by-run or dynamic-graph approach, by contrast, the connections in the network are not fixed before training starts; the network is determined during training, as the actual calculation is performed.
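
The idea can be illustrated with a minimal pure-Python sketch (this is not Chainer's actual API): each arithmetic operation records its inputs as it executes, so the backward graph exists only because the forward code ran.

```python
# Minimal define-by-run sketch: the graph is built while the forward
# computation executes, then walked backwards for gradients.

class Var:
    def __init__(self, value, parents=(), grad_fn=None):
        self.value = value        # forward result
        self.parents = parents    # upstream nodes, recorded at run time
        self.grad_fn = grad_fn    # local gradients w.r.t. each parent
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value, (self, other),
                   lambda g: (g * other.value, g * self.value))

    def __add__(self, other):
        return Var(self.value + other.value, (self, other),
                   lambda g: (g, g))

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for parent, pg in zip(self.parents, self.grad_fn(g)):
                parent.backward(pg)

x = Var(3.0)
y = x * x + x      # running this line is what creates the graph
y.backward()
print(x.grad)      # dy/dx = 2*3 + 1 = 7.0
```

Note that no graph was declared up front: `y` and its ancestry came into existence as side effects of evaluating the expression.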

One advantage of this approach is that it is intuitive and flexible. [12] If a network has complicated control flow, such as conditionals and loops, the define-and-run approach requires specially designed graph operations for those constructs. In the define-by-run approach, the host language's native constructs, such as if statements and for loops, can describe the same flow. This flexibility is especially useful for implementing recurrent neural networks. [13] [14]
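
As a hedged sketch of this point (toy code, not a real Chainer model), the forward pass below is ordinary Python, so its structure can differ for every input: the loop runs once per sequence element and a plain if statement skips padding, with no dedicated scan or cond graph operators.

```python
# A toy recurrent step applied over a variable-length sequence using
# native Python control flow -- exactly what define-by-run permits.

def forward(sequence, w=0.5):
    h = 0.0                  # hidden state
    for x in sequence:       # graph depth = sequence length
        if x is None:        # native conditional handles padding
            continue
        h = w * h + x        # one recurrent step, recorded as it runs
    return h

print(forward([1.0, 2.0, None, 3.0]))  # 0.5*(0.5*1+2) + 3 = 4.25
```

In a static-graph framework, the loop and the conditional would each need a framework-specific operator so that the fixed graph could express them.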

Another advantage is ease of debugging. [12] In the define-and-run approach, if an error (such as a numeric error) occurs during the training calculation, it is often difficult to locate the fault, because the code that defines the network is separated from the place where the error actually occurs. In the define-by-run approach, the calculation can simply be suspended with the language's built-in debugger, and the data flowing through the network code can be inspected directly.

Define-by-run has gained popularity since its introduction by Chainer and is now implemented in many other frameworks, including PyTorch [15] and TensorFlow. [12]

Extension libraries

Chainer has four extension libraries: ChainerMN, ChainerRL, ChainerCV and ChainerUI. ChainerMN enables Chainer to run on multiple GPUs with performance significantly faster than other deep learning frameworks. [1] A supercomputer running Chainer on 1,024 GPUs processed 90 epochs of the ImageNet dataset on a ResNet-50 network in 15 minutes, four times faster than the previous record, held by Facebook. [16] [17] ChainerRL adds state-of-the-art deep reinforcement learning algorithms, ChainerCV provides components for computer vision tasks, and ChainerUI is a management and visualization tool.

Applications

Chainer is used as the framework for PaintsChainer, a service that automatically colorizes black-and-white line drawings with minimal user input. [18] [19]

Related Research Articles

Theano is a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones. In Theano, computations are expressed using a NumPy-esque syntax and compiled to run efficiently on either CPU or GPU architectures.

Torch (machine learning)

Torch is an open-source machine learning library, a scientific computing framework, and a scripting language based on the Lua programming language. It provides a wide range of algorithms for deep learning, using the LuaJIT scripting language and an underlying C implementation. It was created at the IDIAP Research Institute at EPFL. As of 2018, Torch is no longer in active development; however, PyTorch, which is based on the Torch library, remains actively developed as of August 2022.

Deeplearning4j

Eclipse Deeplearning4j is a programming library written in Java for the Java virtual machine (JVM). It is a framework with wide support for deep learning algorithms. Deeplearning4j includes implementations of the restricted Boltzmann machine, deep belief net, deep autoencoder, stacked denoising autoencoder and recursive neural tensor network, word2vec, doc2vec, and GloVe. These algorithms all include distributed parallel versions that integrate with Apache Hadoop and Spark.

TensorFlow

TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.

The following table compares notable software frameworks, libraries and computer programs for deep learning.

An AI accelerator is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, internet of things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors. A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Apache MXNet is an open-source deep learning software framework used to train and deploy deep neural networks. It is scalable, allowing for fast model training, and supports a flexible programming model and multiple programming languages. The MXNet library is portable and can scale to multiple GPUs as well as multiple machines. It was co-developed by Carlos Guestrin at the University of Washington.

spaCy

spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. The library is published under the MIT license and its main developers are Matthew Honnibal and Ines Montani, the founders of the software company Explosion.

Caffe (software)

Caffe is a deep learning framework, originally developed at the University of California, Berkeley. It is open source, under a BSD license, and is written in C++ with a Python interface.

PyTorch

PyTorch is an open source machine learning framework based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Meta AI. It is free and open-source software released under the Modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters are learned.

The Open Neural Network Exchange (ONNX) is an open-source artificial intelligence ecosystem of technology companies and research organizations that establish open standards for representing machine learning algorithms and software tools, to promote innovation and collaboration in the AI sector. ONNX is available on GitHub.

SqueezeNet is the name of a deep neural network for computer vision that was released in 2016. SqueezeNet was developed by researchers at DeepScale, University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters that can more easily fit into computer memory and can more easily be transmitted over a computer network.

Differentiable programming is a programming paradigm in which a numeric computer program can be differentiated throughout via automatic differentiation. This allows for gradient-based optimization of parameters in the program, often via gradient descent. Differentiable programming has found use in a wide variety of areas, particularly scientific computing and artificial intelligence.

OpenVINO is a free toolkit that facilitates the optimization of deep learning models from a framework and their deployment onto Intel hardware using an inference engine. The toolkit has two versions: the OpenVINO toolkit, supported by the open-source community, and the Intel Distribution of OpenVINO toolkit, supported by Intel. OpenVINO was developed by Intel. The toolkit is cross-platform and free for use under the Apache License version 2.0. It enables a write-once, deploy-anywhere approach to deep learning deployments on Intel platforms, including CPUs, integrated GPUs, Intel Movidius VPUs, and FPGAs.

fast.ai is a non-profit research group focused on deep learning and artificial intelligence. It was founded in 2016 by Jeremy Howard and Rachel Thomas with the goal of democratising deep learning. They do this by providing a massive open online course (MOOC) named "Practical Deep Learning for Coders," whose only prerequisite is knowledge of the programming language Python.

Neural Network Intelligence

NNI is a free and open source AutoML toolkit developed by Microsoft. It is used to automate feature engineering, model compression, neural architecture search, and hyper-parameter tuning.

Owl Scientific Computing

Owl Scientific Computing is a software system for scientific and engineering computing developed in the Department of Computer Science and Technology, University of Cambridge. The System Research Group (SRG) in the department recognises Owl as one of the representative systems developed in SRG in the 2010s. The source code is licensed under the MIT License and can be accessed from the GitHub repository.

CuPy is an open source library for GPU-accelerated computing with the Python programming language, providing support for multi-dimensional arrays, sparse matrices, and a variety of numerical algorithms implemented on top of them. CuPy shares the same API set as NumPy and SciPy, allowing it to serve as a drop-in replacement for running NumPy/SciPy code on a GPU. CuPy supports the NVIDIA CUDA platform and, starting with v9.0, the AMD ROCm platform.
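
The drop-in-replacement claim can be sketched with NumPy code that would run unchanged on a GPU if the import were swapped for CuPy (assuming CuPy is installed and a compatible GPU is available):

```python
import numpy as np  # swap for `import cupy as np` to execute on a GPU

# The array API is identical for both libraries: same constructors,
# same reshape, same matrix-multiply operator, same reductions.
x = np.arange(6, dtype=np.float32).reshape(2, 3)
result = float((x @ x.T).sum())
print(result)  # 83.0
```

Only the import line decides whether the arrays live in host memory (NumPy) or GPU memory (CuPy); the computation code itself does not change.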

Google JAX

Google JAX is a machine learning framework for transforming numerical functions. It is described as bringing together a modified version of autograd and TensorFlow's XLA. It is designed to follow the structure and workflow of NumPy as closely as possible and works with various existing frameworks such as TensorFlow and PyTorch. The primary functions of JAX are:

  1. grad: automatic differentiation
  2. jit: compilation
  3. vmap: auto-vectorization
  4. pmap: SPMD programming

References

  1. "Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs". The Register. 2017-04-07. Retrieved 2017-12-24.
  2. "Deep Learning のフレームワーク Chainer を公開しました" [We have released the deep learning framework Chainer] (in Japanese). 2015-06-09. Retrieved 2017-12-24.
  3. "Release 7.7.0". 30 July 2020. Retrieved 31 July 2020.
  4. "Chainer Homepage". Retrieved 2017-12-24.
  5. "IBM Wants to be "Red Hat" of Deep Learning". HPCwire. 2017-01-26. Retrieved 2017-09-08.
  6. "Intel Collaborating with Preferred Networks in Japan on Deep Learning". 2017-04-06. Retrieved 2017-12-24.
  7. "Microsoft partners with Preferred Networks to bring Chainer deep learning technology to Azure - MSPoweruser". MSPoweruser. 2017-05-23. Retrieved 2017-09-08.
  8. "Top 20 Python Machine Learning Open Source Projects". KDnuggets. 2017-11-24.
  9. "Preferred Networks Migrates its Deep Learning Research Platform to PyTorch". Preferred Networks, Inc. 2019-12-05. Retrieved 2019-12-27.
  10. Tokui, Seiya; et al. (2015). "Chainer: a next-generation open source framework for deep learning". 29th Annual Conference on Neural Information Processing Systems (NIPS). 5.
  11. Shimada, Naoki (September 14, 2017). Deep Learning with Chainer. Gijutsu-Hyohron. p. 61. ISBN   4774191868.
  12. "Eager Execution: An imperative, define-by-run interface to TensorFlow". Google Research Blog.
  13. "Deep Learning With Dynamic Computation Graphs (ICLR 2017)". Metadata.
  14. Hido, Shohei (8 November 2016). "Complex neural networks made easy by Chainer". O'Reilly Media. Retrieved 26 June 2018.
  15. Perez, Carlos E. (20 January 2017). "PyTorch, Dynamic Computational Graphs and Modular Deep Learning". Medium.
  16. "Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes" (PDF). Retrieved 2017-12-24.
  17. Greene, Tristan (20 November 2017). "Facebook's nerds bested by Japan's in the race to train AI". The Next Web. Retrieved 24 November 2017.
  18. Know, Now You (2017-02-15). "This neural network-based software will add colour to your drawings for free". Techly. Retrieved 2017-09-08.
  19. "Drawing app "pixiv Sketch" and automatic coloring service "PaintsChainer" collaborate to provide a new function for automatic coloring of illustrations!". 2017-05-24. Retrieved 2017-12-24.