Theano (software)

Theano
Original author(s): Montreal Institute for Learning Algorithms (MILA), University of Montreal
Developer(s): PyMC Development Team
Initial release: 2007
Final release: 2.31.3 [1] / 2 June 2025
Written in: Python, CUDA
Platform: Linux, macOS, Windows
Type: Machine learning library
License: 3-Clause BSD License
Website: pytensor.readthedocs.io/en/latest/

Theano is a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones. [2] In Theano, computations are expressed using a NumPy-esque syntax and compiled to run efficiently on either CPU or GPU architectures.
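
Because expressions are built symbolically and only then compiled, the graph that a compiled function actually executes can be inspected. The following is a minimal sketch (assuming a standard Theano installation; the printed output varies by version and backend):

import theano
from theano import tensor

# Build a trivial symbolic expression and compile it
a = tensor.dscalar()
b = tensor.dscalar()
f = theano.function([a, b], a + b)

# Print the optimized computation graph that the function executes
theano.printing.debugprint(f)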

History

Theano is an open source project [3] primarily developed by the Montreal Institute for Learning Algorithms (MILA) at the Université de Montréal. [4]

The name of the software references the ancient philosopher Theano, long associated with the development of the golden mean.

On 28 September 2017, Pascal Lamblin posted a message from Yoshua Bengio, Head of MILA: major development would cease after the 1.0 release due to competing offerings by strong industrial players. [5] Theano 1.0.0 was then released on 15 November 2017. [6]

On 17 May 2018, Chris Fonnesbeck wrote on behalf of the PyMC development team [7] that the PyMC developers would officially assume control of Theano maintenance once the MILA development team stepped down. On 29 January 2021, they started using the name Aesara for their fork of Theano. [8]

On 29 November 2022, the PyMC development team announced that it would fork the Aesara project under the name PyTensor. [9]

Sample code

The following code is the original example from Theano's documentation. It defines a computational graph with two scalars a and b of type double and an operation between them (addition), and then creates a Python function f that performs the actual computation. [10]

import theano
from theano import tensor

# Declare two symbolic floating-point scalars
a = tensor.dscalar()
b = tensor.dscalar()

# Create a simple expression
c = a + b

# Convert the expression into a callable object that takes (a, b)
# values as input and computes a value for c
f = theano.function([a, b], c)

# Bind 1.5 to 'a', 2.5 to 'b', and evaluate 'c'
assert 4.0 == f(1.5, 2.5)

Examples

Matrix Multiplication (Dot Product)

The following code demonstrates matrix multiplication in Theano, a core linear algebra operation in many machine learning tasks.

import theano
from theano import tensor

# Declare two symbolic 2D arrays (matrices)
A = tensor.dmatrix("A")
B = tensor.dmatrix("B")

# Define a matrix multiplication (dot product) operation
C = tensor.dot(A, B)

# Create a function that computes the result of the matrix multiplication
f = theano.function([A, B], C)

# Sample matrices
A_val = [[1, 2], [3, 4]]
B_val = [[5, 6], [7, 8]]

# Evaluate the matrix multiplication
result = f(A_val, B_val)
print(result)
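
As a sanity check, the compiled function can be compared against NumPy's own dot product; a minimal sketch reusing f, A_val, and B_val from above:

import numpy as np

# The compiled Theano function should agree with NumPy's dot product
assert np.allclose(f(A_val, B_val), np.dot(A_val, B_val))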

Gradient Calculation

The following code uses Theano to compute the gradient of a simple expression (a linear function resembling a single neuron) with respect to one of its inputs. This is the basic operation behind backpropagation when training machine learning models.

import theano
from theano import tensor

# Define symbolic variables
x = tensor.dscalar("x")  # Input scalar
y = tensor.dscalar("y")  # Weight scalar

# Define a simple function (y * x, a simple linear function)
z = y * x

# Compute the gradient of z with respect to x
# (partial derivative of z with respect to x)
dz_dx = tensor.grad(z, x)

# Create a function to compute the value of z and dz/dx
f = theano.function([x, y], [z, dz_dx])

# Sample values
x_val = 2.0
y_val = 3.0

# Compute z and its gradient
result = f(x_val, y_val)
print("z:", result[0])      # z = y * x = 3 * 2 = 6
print("dz/dx:", result[1])  # dz/dx = y = 3
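
Because tensor.grad returns another symbolic expression, it can itself be differentiated. The following is a minimal sketch, with illustrative values, that computes a second derivative:

import theano
from theano import tensor

x = tensor.dscalar("x")
z = x ** 3  # z = x^3

dz_dx = tensor.grad(z, x)        # symbolic first derivative, 3x^2
d2z_dx2 = tensor.grad(dz_dx, x)  # symbolic second derivative, 6x

f = theano.function([x], [dz_dx, d2z_dx2])
print(f(2.0))  # [12.0, 6.0], since 3 * 2^2 = 12 and 6 * 2 = 6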

Building a Simple Neural Network

The following code shows how to start building a simple neural network with one hidden layer: it defines the forward pass, a cross-entropy cost, and the gradients of the cost with respect to the weights.

import theano
from theano import tensor as T
import numpy as np

# Define symbolic variables for input and output
X = T.matrix("X")   # Input features
y = T.ivector("y")  # Target labels (integer vector)

# Define the size of the layers
input_size = 2   # Number of input features
hidden_size = 3  # Number of neurons in the hidden layer
output_size = 2  # Number of output classes

# Initialize weights for input to hidden layer (2x3 matrix)
# and hidden to output layer (3x2 matrix)
W1 = theano.shared(np.random.randn(input_size, hidden_size), name="W1")
b1 = theano.shared(np.zeros(hidden_size), name="b1")
W2 = theano.shared(np.random.randn(hidden_size, output_size), name="W2")
b2 = theano.shared(np.zeros(output_size), name="b2")

# Define the forward pass (hidden layer and output layer)
hidden_output = T.nnet.sigmoid(T.dot(X, W1) + b1)        # Sigmoid activation
output = T.nnet.softmax(T.dot(hidden_output, W2) + b2)   # Softmax output

# Define the cost function (cross-entropy)
cost = T.nnet.categorical_crossentropy(output, y).mean()

# Compute gradients
grad_W1, grad_b1, grad_W2, grad_b2 = T.grad(cost, [W1, b1, W2, b2])

# Create a function to compute the cost and gradients
train = theano.function(inputs=[X, y],
                        outputs=[cost, grad_W1, grad_b1, grad_W2, grad_b2])

# Sample input data and labels (2 samples with 2 features each)
X_val = np.array([[0.1, 0.2], [0.3, 0.4]])
y_val = np.array([0, 1], dtype="int32")  # int32 to match the ivector type

# Train the network for a single step (you would iterate in practice)
cost_val, grad_W1_val, grad_b1_val, grad_W2_val, grad_b2_val = train(X_val, y_val)
print("Cost:", cost_val)
print("Gradients for W1:", grad_W1_val)
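
In practice the gradients would be applied to the weights repeatedly; Theano's shared variables make this convenient through the updates argument of theano.function. The following is a minimal sketch of such a training loop, reusing the variables defined above (the learning rate lr = 0.1 is an illustrative choice):

# Gradient-descent update rule for every parameter
lr = 0.1
params = [W1, b1, W2, b2]
grads = [grad_W1, grad_b1, grad_W2, grad_b2]
updates = [(p, p - lr * g) for p, g in zip(params, grads)]

# Each call now also updates the shared weights in place
train_step = theano.function(inputs=[X, y], outputs=cost, updates=updates)

for _ in range(100):  # iterate on the same toy batch
    cost_val = train_step(X_val, y_val)
print("Cost after training:", cost_val)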

Broadcasting in Theano

The following code demonstrates how broadcasting works in Theano. Broadcasting allows operations between arrays of different shapes without needing to explicitly reshape them.

import theano
from theano import tensor as T
import numpy as np

# Declare symbolic arrays
A = T.dmatrix("A")
B = T.dvector("B")

# Add them; B is broadcast to match the shape of A
C = A + B

# Create a function to evaluate the operation
f = theano.function([A, B], C)

# Sample data (A is a 3x2 matrix, B is a 2-element vector)
A_val = np.array([[1, 2], [3, 4], [5, 6]])
B_val = np.array([10, 20])

# Evaluate the addition with broadcasting
result = f(A_val, B_val)
print(result)
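
Broadcasting follows NumPy's rules: the vector is aligned with the trailing dimension, so it is added to each row. To broadcast along the other axis instead, the vector can be turned into a column with dimshuffle; a minimal sketch:

import theano
from theano import tensor as T
import numpy as np

A = T.dmatrix("A")
v = T.dvector("v")

# dimshuffle(0, 'x') turns the length-3 vector into a 3x1 column,
# which then broadcasts across the columns of A
C = A + v.dimshuffle(0, "x")

f = theano.function([A, v], C)

A_val = np.array([[1, 2], [3, 4], [5, 6]])
v_val = np.array([100, 200, 300])  # one value per row
print(f(A_val, v_val))
# [[101. 102.]
#  [203. 204.]
#  [305. 306.]]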

References

  1. "Release 2.31.3". 2 June 2025. Retrieved 19 June 2025.
  2. Bergstra, J.; O. Breuleux; F. Bastien; P. Lamblin; R. Pascanu; G. Desjardins; J. Turian; D. Warde-Farley; Y. Bengio (30 June 2010). "Theano: A CPU and GPU Math Expression Compiler" (PDF). Proceedings of the Python for Scientific Computing Conference (SciPy) 2010.
  3. "Github Repository". GitHub .
  4. "deeplearning.net".
  5. Lamblin, Pascal (28 September 2017). "MILA and the future of Theano". theano-users (Mailing list). Retrieved 28 September 2017.
  6. "Release Notes – Theano 1.0.0 documentation".
  7. Developers, PyMC (1 June 2019). "Theano, TensorFlow and the Future of PyMC". Medium. Retrieved 27 August 2019.
  8. "Theano-2.0.0". GitHub .
  9. Developers, PyMC (20 November 2022). "PyMC forked Aesara to PyTensor". pymc.io. Retrieved 19 July 2023.
  10. "Theano Documentation Release 1.0.0" (PDF). LISA lab, University of Montreal. 21 November 2017. p. 22. Retrieved 31 August 2018.