| Original author(s) | Ronan Collobert, Samy Bengio, Johnny Mariéthoz [1] |
|---|---|
| Initial release | October 2002 [1] |
| Final release | 7.0 / February 27, 2017 [2] |
| Repository | |
| Written in | Lua, C, C++ |
| Operating system | Linux, Android, Mac OS X, iOS |
| Type | Library for machine learning and deep learning |
| License | BSD License |
| Website | torch.ch |
Torch is an open-source machine learning library, a scientific computing framework, and a scripting language based on Lua. [3] It provides LuaJIT interfaces to deep learning algorithms implemented in C. It was created at the Idiap Research Institute at EPFL. In 2017, Torch development moved to PyTorch, a port of the library to Python. [4] [5] [6]
The core package of Torch is `torch`. It provides a flexible N-dimensional array or Tensor, which supports basic routines for indexing, slicing, transposing, type-casting, resizing, sharing storage, and cloning. This object is used by most other packages and thus forms the core object of the library. The Tensor also supports mathematical operations like `max`, `min` and `sum`, statistical distributions like uniform, normal and multinomial, and BLAS operations like dot product, matrix–vector multiplication, matrix–matrix multiplication and matrix product.
The following exemplifies using torch via its REPL interpreter:
```
> a = torch.randn(3, 4)
> = a
-0.2381 -0.3401 -1.7844 -0.2615
 0.1411  1.6249  0.1708  0.8299
-1.0434  2.2291  1.0525  0.8465
[torch.DoubleTensor of dimension 3x4]
> a[1][2]
-0.34010116549482
> a:narrow(1, 1, 2)
-0.2381 -0.3401 -1.7844 -0.2615
 0.1411  1.6249  0.1708  0.8299
[torch.DoubleTensor of dimension 2x4]
> a:index(1, torch.LongTensor{1, 2})
-0.2381 -0.3401 -1.7844 -0.2615
 0.1411  1.6249  0.1708  0.8299
[torch.DoubleTensor of dimension 2x4]
> a:min()
-1.7844365427828
```
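The mathematical, statistical, and BLAS routines mentioned above can be sketched as follows. This is an illustrative fragment, not from the original article, using only standard `torch` calls:

```lua
-- Illustrative only: reductions, distributions, and BLAS routines.
a = torch.randn(3, 4)          -- samples from the standard normal distribution
b = torch.rand(4, 5)           -- samples from the uniform distribution on [0, 1)

print(a:max(), a:min(), a:sum())   -- reductions over all elements
c = torch.mm(a, b)                 -- 3x5 matrix-matrix multiplication
v = torch.mv(a, torch.randn(4))    -- matrix-vector multiplication
print(torch.dot(v, v))             -- dot product
```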
The `torch` package also simplifies object-oriented programming and serialization by providing various convenience functions which are used throughout its packages. The `torch.class(classname, parentclass)` function can be used to create object factories (classes). When the constructor is called, torch initializes and sets a Lua table with the user-defined metatable, which makes the table an object.
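For illustration, a minimal sketch of defining and instantiating a class this way; the class name `Foo` and its methods are hypothetical, following the `torch.class` convention of defining methods on the returned metatable:

```lua
do
   local Foo = torch.class('Foo')

   -- Constructor: runs when Foo(...) is called.
   function Foo:__init(value)
      self.value = value
   end

   function Foo:double()
      return 2 * self.value
   end
end

-- torch.class registered Foo in the global namespace, so the
-- factory can be called directly:
foo = Foo(21)
print(foo:double())  -- 42
```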
Objects created with the torch factory can also be serialized, as long as they do not contain references to objects that cannot be serialized, such as Lua coroutines and Lua userdata. However, userdata can be serialized if it is wrapped by a table (or metatable) that provides `read()` and `write()` methods.
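Continuing the sketch above, such an object can be written to and read from disk with `torch.save` and `torch.load`; the file name here is a placeholder:

```lua
foo = Foo(21)
torch.save('foo.t7', foo)    -- serialize the object to disk
bar = torch.load('foo.t7')   -- deserialize it back into an object
print(bar:double())          -- 42
```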
The `nn` package is used for building neural networks. It is divided into modular objects that share a common `Module` interface. Modules have a `forward()` and `backward()` method that allow them to feedforward and backpropagate, respectively. Modules can be joined using module composites, like `Sequential`, `Parallel` and `Concat`, to create complex task-tailored graphs. Simpler modules like `Linear`, `Tanh` and `Max` make up the basic component modules. This modular interface provides first-order automatic gradient differentiation. What follows is an example use-case for building a multilayer perceptron using Modules:
```
> mlp = nn.Sequential()
> mlp:add(nn.Linear(10, 25)) -- 10 input, 25 hidden units
> mlp:add(nn.Tanh()) -- some hyperbolic tangent transfer function
> mlp:add(nn.Linear(25, 1)) -- 1 output
> = mlp:forward(torch.randn(10))
-0.1815
[torch.Tensor of dimension 1]
```
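For the composite containers mentioned above, a small illustrative sketch, not from the original article: `Concat` applies each child module to the same input and concatenates the outputs along a given dimension:

```lua
-- Two branches over the same 10-dimensional input; outputs are
-- concatenated along dimension 1, giving 3 + 7 = 10 output units.
branches = nn.Concat(1)
branches:add(nn.Linear(10, 3))
branches:add(nn.Linear(10, 7))
out = branches:forward(torch.randn(10))
print(out:size(1))  -- 10
```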
Loss functions are implemented as sub-classes of `Criterion`, which has a similar interface to `Module`. It also has `forward()` and `backward()` methods for computing the loss and backpropagating gradients, respectively. Criteria are helpful for training a neural network on classical tasks. Common criteria are the mean squared error criterion, implemented in `MSECriterion`, and the cross-entropy criterion, implemented in `ClassNLLCriterion`. What follows is an example of a Lua function that can be iteratively called to train an `mlp` Module on input Tensor `x` and target Tensor `y` with a scalar `learningRate`:
```lua
function gradUpdate(mlp, x, y, learningRate)
   local criterion = nn.ClassNLLCriterion()
   local pred = mlp:forward(x)
   local err = criterion:forward(pred, y)
   mlp:zeroGradParameters()
   local t = criterion:backward(pred, y)
   mlp:backward(x, t)
   mlp:updateParameters(learningRate)
end
```
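For instance, a hypothetical training loop calling it repeatedly; the iteration count and learning rate are placeholder values:

```lua
for i = 1, 100 do
   gradUpdate(mlp, x, y, 0.01)
end
```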
The `nn` package also has a `StochasticGradient` class for training a neural network using stochastic gradient descent, although the `optim` package provides many more options in this respect, like momentum and weight-decay regularization.
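As a sketch of how a single step of training with `optim` might look; this is illustrative only, the hyperparameter values are placeholders, and `mlp`, `x`, and `y` are assumed to be defined as in the examples above:

```lua
require 'optim'

local params, gradParams = mlp:getParameters()
local criterion = nn.ClassNLLCriterion()
local config = {learningRate = 0.01, momentum = 0.9, weightDecay = 1e-4}

-- optim.sgd expects a closure returning the loss and the gradient
-- of the loss with respect to the flattened parameters.
local function feval(p)
   if p ~= params then params:copy(p) end
   gradParams:zero()
   local pred = mlp:forward(x)
   local loss = criterion:forward(pred, y)
   mlp:backward(x, criterion:backward(pred, y))
   return loss, gradParams
end

optim.sgd(feval, params, config)  -- one SGD step with momentum and weight decay
```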
Beyond the official packages above, many third-party packages are used with Torch; these are listed in the Torch cheatsheet. [7] These extra packages provide a wide range of utilities, such as parallelism, asynchronous input/output, and image processing. They can be installed with LuaRocks, the Lua package manager, which is also included with the Torch distribution.
Torch is used by the Facebook AI Research Group, [8] IBM, [9] Yandex [10] and the Idiap Research Institute. [11] Torch has been extended for use on Android [12] [ better source needed ] and iOS. [13] [ better source needed ] It has been used to build hardware implementations for data flows like those found in neural networks. [14]
Facebook has released a set of extension modules as open source software. [15]
See also

- OpenCV – programming library for real-time computer vision
- Tensor software – mathematical software for manipulating and calculating with tensors
- Probabilistic programming
- scikit-learn – machine learning library for Python
- Theano – Python library and optimizing compiler for mathematical expressions
- LuaRocks – package manager for Lua modules
- TensorFlow – software library for machine learning and artificial intelligence
- Comparison of deep learning software
- Keras – Python interface for artificial neural networks
- spaCy – natural language processing library for Python and Cython
- Caffe – deep learning framework
- PyTorch – machine learning library based on the Torch library
- ROCm – AMD software stack for GPU programming
- SqueezeNet – deep neural network for image classification
- Kubeflow – machine learning and MLOps platform on Kubernetes
- NNI – AutoML toolkit developed by Microsoft
- LightGBM – distributed gradient-boosting framework
- Owl Scientific Computing – scientific computing system developed at the University of Cambridge
- CuPy – GPU-accelerated, NumPy-compatible array library for Python
- Latent diffusion model – diffusion model architecture developed at LMU Munich