| PyTorch | |
|---|---|
| Developer(s) | Meta AI |
| Initial release | September 2016 [1] |
| Repository | github.com/pytorch/pytorch |
| Written in | Python, C++, CUDA |
| Platform | IA-32, x86-64, ARM64 |
| Available in | English |
| Type | Library for machine learning and deep learning |
| License | BSD-3 [3] |
| Website | pytorch.org |
PyTorch is a machine learning library based on the Torch library, [4][5][6] used for applications such as computer vision and natural language processing, [7] originally developed by Meta AI and now part of the Linux Foundation umbrella. [8][9][10][11] It is one of the most popular deep learning frameworks, alongside others such as TensorFlow and PaddlePaddle, [12][13] and is free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface. [14]
A number of pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot, [15] Uber's Pyro, [16] Hugging Face's Transformers, [17] PyTorch Lightning, [18] [19] and Catalyst. [20] [21]
PyTorch provides two high-level features: [22]

- Tensor computing (like NumPy) with strong acceleration via graphics processing units (GPUs)
- Deep neural networks built on a tape-based automatic differentiation system
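A minimal sketch of both features in a few lines (standard torch APIs; the printed gradient follows from y = w·x, so dy/dw = x):

```python
import torch

# Feature 1: tensor computing -- NumPy-like arrays, optionally GPU-accelerated
x = torch.tensor([2.0, 3.0])

# Feature 2: tape-based automatic differentiation
w = torch.tensor([1.5, -0.5], requires_grad=True)
y = (w * x).sum()  # y = w1*x1 + w2*x2
y.backward()       # autograd replays the recorded "tape" to compute dy/dw
print(w.grad)      # tensor([2., 3.]) -- the gradient of y w.r.t. w equals x
```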
Meta (formerly known as Facebook) operated both PyTorch and Convolutional Architecture for Fast Feature Embedding (Caffe2), but models defined by the two frameworks were mutually incompatible. The Open Neural Network Exchange (ONNX) project was created by Meta and Microsoft in September 2017 for converting models between frameworks. Caffe2 was merged into PyTorch at the end of March 2018. [23] In September 2022, Meta announced that PyTorch would be governed by the independent PyTorch Foundation, a newly created subsidiary of the Linux Foundation. [24]
PyTorch 2.0 was released on 15 March 2023, introducing TorchDynamo, a Python-level compiler that makes code run up to 2x faster, along with significant improvements in training and inference performance across major cloud platforms. [25] [26]
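A minimal sketch of the user-facing compilation API introduced in 2.0 (torch.compile is the entry point to TorchDynamo; actual speedups depend on the model and hardware):

```python
import torch

def f(x):
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

compiled_f = torch.compile(f)  # TorchDynamo captures and optimizes the Python code
x = torch.randn(1000)
print(torch.allclose(f(x), compiled_f(x)))  # True: same results, potentially faster
```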
PyTorch defines a class called Tensor (torch.Tensor) to store and operate on homogeneous multidimensional rectangular arrays of numbers. PyTorch Tensors are similar to NumPy arrays, but can also be operated on by a CUDA-capable NVIDIA GPU. PyTorch has also been developing support for other GPU platforms, for example, AMD's ROCm [27] and Apple's Metal Framework. [28]
PyTorch supports various sub-types of Tensors. [29]
Note that the term "tensor" here does not carry the same meaning as tensor in mathematics or physics. The meaning of the word in machine learning is only superficially related to its original meaning as a certain kind of object in linear algebra. Tensors in PyTorch are simply multi-dimensional arrays.
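A short sketch of device placement and a few element types (the CUDA branch only takes effect on a machine with an NVIDIA GPU and a CUDA-enabled build):

```python
import torch

# Choose a device at runtime; tensors can be moved between devices with .to()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.ones(2, 2)  # defaults to torch.float32 on the CPU
x = x.to(device)      # no-op on the CPU, copies to GPU memory under CUDA

# A few of the supported element types ("sub-types" of Tensor)
i = torch.zeros(3, dtype=torch.int64)    # integer tensor
b = torch.tensor([True, False])          # boolean tensor
h = torch.ones(3, dtype=torch.float16)   # half-precision floating point
```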
PyTorch defines a module called nn (torch.nn) to describe neural networks and to support training. This module offers a comprehensive collection of building blocks for neural networks, including various layers and activation functions, enabling the construction of complex models. Networks are built by subclassing torch.nn.Module and defining the sequence of operations in the forward() method.
The following program shows the low-level functionality of the library with a simple example.
```python
import torch

dtype = torch.float
device = torch.device("cpu")  # Execute all calculations on the CPU
# device = torch.device("cuda:0")  # Executes all calculations on the GPU

# Create a tensor and fill it with random numbers
a = torch.randn(2, 3, device=device, dtype=dtype)
print(a)
# Output: tensor([[-1.1884,  0.8498, -1.7129],
#                 [-0.8816,  0.1944,  0.5847]])

b = torch.randn(2, 3, device=device, dtype=dtype)
print(b)
# Output: tensor([[ 0.7178, -0.8453, -1.3403],
#                 [ 1.3262,  1.1512, -1.7070]])

print(a * b)
# Output: tensor([[-0.8530, -0.7183,  2.58],
#                 [-1.1692,  0.2238, -0.9981]])

print(a.sum())
# Output: tensor(-2.1540)

print(a[1, 2])
# Output of the element in the third column of the second row (zero based)
# Output: tensor(0.5847)

print(a.max())
# Output: tensor(0.8498)
```
The following code block defines a neural network with linear layers using the nn module.
```python
import torch
from torch import nn  # Import the nn sub-module from PyTorch


class NeuralNetwork(nn.Module):  # Neural networks are defined as classes
    def __init__(self):  # Layers and variables are defined in the __init__ method
        super().__init__()  # Must be in every network.
        self.flatten = nn.Flatten()  # Construct a flattening layer.
        self.linear_relu_stack = nn.Sequential(  # Construct a stack of layers.
            nn.Linear(28 * 28, 512),  # Linear layers have an input and output shape
            nn.ReLU(),  # ReLU is one of many activation functions provided by nn
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):  # This function defines the forward pass.
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits
```
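A usage sketch for the class above (the 28×28 input size is an assumption implied by the nn.Linear(28*28, 512) layer, matching MNIST-sized images):

```python
model = NeuralNetwork()
x = torch.rand(1, 28, 28)  # a batch containing one 28x28 "image" (assumed shape)
logits = model(x)          # calling the module invokes forward()
print(logits.shape)        # torch.Size([1, 10])
```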
In computing, CUDA is a proprietary parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism and GPU-device-specific operations. CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels. In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries, and developer tools to help programmers accelerate their applications.
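From Python, PyTorch reaches CUDA through the torch.cuda APIs rather than hand-written kernels; a minimal sketch (requires an NVIDIA GPU and a CUDA-enabled PyTorch build):

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the installed NVIDIA GPU
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    c = a @ b                  # matrix multiply dispatched to CUDA kernels
    torch.cuda.synchronize()   # kernels launch asynchronously; wait for results
```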
Torch is an open-source machine learning library, a scientific computing framework, and a scripting language based on Lua. It provides LuaJIT interfaces to deep learning algorithms implemented in C. It was created by the Idiap Research Institute at EPFL. Torch development moved in 2017 to PyTorch, a port of the library to Python.
Eclipse Deeplearning4j is a programming library written in Java for the Java virtual machine (JVM). It is a framework with wide support for deep learning algorithms. Deeplearning4j includes implementations of the restricted Boltzmann machine, deep belief net, deep autoencoder, stacked denoising autoencoder and recursive neural tensor network, word2vec, doc2vec, and GloVe. These algorithms all include distributed parallel versions that integrate with Apache Hadoop and Spark.
TensorFlow is a software library for machine learning and artificial intelligence. It can be used across a range of tasks, but is used mainly for training and inference of neural networks. It is one of the most popular deep learning frameworks, alongside others such as PyTorch and PaddlePaddle. It is free and open-source software released under the Apache License 2.0.
[Comparison tables of notable software frameworks, libraries, and computer programs for deep learning applications omitted.]
An AI accelerator, deep learning processor or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and computer vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.
Keras is an open-source library that provides a Python interface for artificial neural networks. Keras was first released as independent software, was then integrated into the TensorFlow library, and later added support for more frameworks. "Keras 3 is a full rewrite of Keras [and can be used] as a low-level cross-framework language to develop custom components such as layers, models, or metrics that can be used in native workflows in JAX, TensorFlow, or PyTorch — with one codebase." Keras 3 will be the default Keras version for TensorFlow 2.16 onwards, but Keras 2 can still be used.
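A sketch of the cross-framework usage the quote describes (in Keras 3 the KERAS_BACKEND environment variable selects the backend and must be set before the first keras import):

```python
import os
os.environ["KERAS_BACKEND"] = "torch"  # or "jax", "tensorflow"

import keras  # Keras 3 now runs on the selected backend

# The same model definition works unchanged on any of the three backends
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])
```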
Apache MXNet is an open-source deep learning software framework that trains and deploys deep neural networks. It aims to be scalable, allows fast model training, and supports a flexible programming model and multiple programming languages. The MXNet library is portable and can scale to multiple GPUs and machines. It was co-developed by Carlos Guestrin at the University of Washington, along with GraphLab.
AMD Instinct is AMD's brand of data center GPUs. It replaced AMD's FirePro S brand in 2016. Compared to the Radeon brand of mainstream consumer/gamer products, the Instinct product line is intended to accelerate deep learning, artificial neural network, and high-performance computing/GPGPU applications.
Chainer is an open source deep learning framework written purely in Python on top of NumPy and CuPy Python libraries. The development is led by Japanese venture company Preferred Networks in partnership with IBM, Intel, Microsoft, and Nvidia.
Caffe is a deep learning framework, originally developed at the University of California, Berkeley. It is open source, under a BSD license. It is written in C++, with a Python interface.
The Open Neural Network Exchange (ONNX) is an open-source artificial intelligence ecosystem of technology companies and research organizations that establish open standards for representing machine learning algorithms and software tools to promote innovation and collaboration in the AI sector. ONNX is available on GitHub.
ROCm is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains: general-purpose computing on graphics processing units (GPGPU), high performance computing (HPC), heterogeneous computing. It offers several programming models: HIP, OpenMP, and OpenCL.
OpenVINO is an open-source software toolkit for optimizing and deploying deep learning models. It enables programmers to develop scalable and efficient AI solutions with relatively few lines of code. It supports several popular model formats and categories, such as large language models, computer vision, and generative AI.
Owl Scientific Computing is a software system for scientific and engineering computing developed in the Department of Computer Science and Technology, University of Cambridge. The System Research Group (SRG) in the department recognises Owl as one of the representative systems developed in SRG in the 2010s. The source code is licensed under the MIT License and can be accessed from the GitHub repository.
A graph neural network (GNN) belongs to a class of artificial neural networks for processing data that can be represented as graphs.
CuPy is an open-source library for GPU-accelerated computing with the Python programming language, providing support for multi-dimensional arrays, sparse matrices, and a variety of numerical algorithms implemented on top of them. CuPy shares the same API set as NumPy and SciPy, allowing it to serve as a drop-in replacement for running NumPy/SciPy code on the GPU. CuPy supports the Nvidia CUDA GPU platform and, starting in v9.0, the AMD ROCm GPU platform.
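A minimal sketch of the NumPy-mirroring API (assumes a CuPy install and a supported GPU; the printed values follow from this toy input):

```python
import cupy as cp  # same call signatures as numpy

x = cp.arange(6, dtype=cp.float32).reshape(2, 3)  # array lives in GPU memory
y = x.sum(axis=0)      # reduction computed on the GPU
print(cp.asnumpy(y))   # [3. 5. 7.] -- copy the result back to host memory
```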
Google JAX is a machine learning framework for transforming numerical functions. It is described as bringing together a modified version of autograd and TensorFlow's XLA. It is designed to follow the structure and workflow of NumPy as closely as possible and works with various existing frameworks such as TensorFlow and PyTorch. The primary functions of JAX are:

- grad: automatic differentiation
- jit: just-in-time compilation
- vmap: automatic vectorization
- pmap: SPMD (single program, multiple data) parallelization across devices
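A minimal sketch of the first two transformations listed above (grad and jit are top-level jax functions):

```python
import jax
import jax.numpy as jnp

f = lambda x: jnp.sum(x ** 2)  # a scalar-valued function
grad_f = jax.grad(f)           # transform: automatic differentiation
fast_f = jax.jit(f)            # transform: XLA just-in-time compilation

x = jnp.array([1.0, 2.0, 3.0])
print(grad_f(x))               # [2. 4. 6.] -- the gradient 2x
print(fast_f(x))               # 14.0, computed by the compiled function
```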
In machine learning, the term tensor informally refers to two different concepts for organizing and representing data. Data may be organized in a multidimensional array (M-way array), informally referred to as a "data tensor"; however, in the strict mathematical sense, a tensor is a multilinear mapping over a set of domain vector spaces to a range vector space. Observations, such as images, movies, volumes, sounds, and relationships among words and concepts, stored in an M-way array ("data tensor"), may be analyzed either by artificial neural networks or tensor methods.
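For instance, a mini-batch of RGB images is commonly stored as a 4-way "data tensor"; a sketch using PyTorch's conventional (N, C, H, W) layout:

```python
import torch

# 32 RGB images of size 224x224: a 4-way array ("data tensor")
images = torch.rand(32, 3, 224, 224)  # (batch, channels, height, width)
print(images.ndim)    # 4
print(images.shape)   # torch.Size([32, 3, 224, 224])
```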