Yorick (programming language)

Yorick
Designed by: David H. Munro
First appeared: 1996
Stable release: 2.2.04 / May 2015
OS: Unix-like systems (including macOS), Microsoft Windows
License: BSD
Filename extensions: .i
Website: github.com/LLNL/yorick

Yorick is an interpreted programming language designed for numerics, graph plotting, and steering large scientific simulation codes. Its array syntax lets whole arrays be manipulated at once without explicit loops, which makes interpreted code fast, and it can be extended with compiled C or Fortran routines. It was created in 1996 by David H. Munro of Lawrence Livermore National Laboratory.
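As a minimal sketch of that array style (using the built-in span function, which generates equally spaced points; the printed formatting may differ slightly between versions):

> x = span(0, 1, 5)      // 5 equally spaced points from 0 to 1
> x
[0,0.25,0.5,0.75,1]
> x^2 + 1                // applied elementwise to the whole array, no loop
[1,1.0625,1.25,1.5625,2]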


Features

Indexing

Yorick's indexing syntax makes it easy to select and manipulate elements of N-dimensional arrays.

Several elements can be accessed all at once:

> x=[1,2,3,4,5,6]
> x
[1,2,3,4,5,6]
> x(3:6)
[3,4,5,6]
> x(3:6:2)
[3,5]
> x(6:3:-2)
[6,4]
Arbitrary elements
> x=[[1,2,3],[4,5,6]]
> x
[[1,2,3],[4,5,6]]
> x([2,1],[1,2])
[[2,1],[5,4]]
> list=where(1<x)
> list
[2,3,4,5,6]
> y=x(list)
> y
[2,3,4,5,6]
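An index list returned by where can also appear on the left side of an assignment, so the selected elements can be modified in place. A brief sketch continuing the session above (results written out by hand, not verified against an interpreter):

> x(where(x>4)) = 0      // zero out every element greater than 4
> x
[[1,2,3],[4,0,0]]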
Pseudo-index

Like "theading" in PDL and "broadcasting" in Numpy, Yorick has a mechanism to do this:

> x=[1,2,3]
> x
[1,2,3]
> y=[[1,2,3],[4,5,6]]
> y
[[1,2,3],[4,5,6]]
> y(-,)
[[[1],[2],[3]],[[4],[5],[6]]]
> x(-,)
[[1],[2],[3]]
> x(,-)
[[1,2,3]]
> x(,-)/y
[[1,1,1],[0,0,0]]
> y=[[1.,2,3],[4,5,6]]
> x(,-)/y
[[1,1,1],[0.25,0.4,0.5]]
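A typical use of the pseudo-index is an outer product: inserting "-" in each operand gives shapes 3x1 and 1x3, which broadcast to 3x3. A sketch, again with the result written out by hand:

> x(,-)*x(-,)            // outer product: element (i,j) is x(i)*x(j)
[[1,2,3],[2,4,6],[3,6,9]]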
Rubber index

".." is a rubber-index to represent zero or more dimensions of the array.

> x=[[1,2,3],[4,5,6]]
> x
[[1,2,3],[4,5,6]]
> x(..,1)
[1,2,3]
> x(1,..)
[1,4]
> x(2,..,2)
5
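Because ".." absorbs however many dimensions are present, the same expression works for arrays of any rank. The sketch below uses the standard array and dimsof functions (dimsof returns the rank followed by each dimension length); the outputs are what one would expect rather than verified transcripts:

> z = array(1.0, 2, 3, 4)    // a 2x3x4 array filled with 1.0
> dimsof(z(..,1))            // ".." stands for the first two dimensions here
[2,2,3]
> dimsof(z(1,..))            // and for the last two dimensions here
[2,3,4]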

"*" is a kind of rubber-index to reshape a slice(sub-array) of array to a vector.

> x(*)
[1,2,3,4,5,6]
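The flattened result is an ordinary vector, so it can be indexed or passed to reduction functions directly; a small sketch using the values above:

> v = x(*)               // copy of x flattened with the first index varying fastest
> v(4)
4
> sum(v)                 // sum of all elements of x
21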
Tensor multiplication

Tensor multiplication in Yorick uses the "+" pseudo-index, which marks the dimension of each operand to be summed over. For example,

P(,+,)*Q(,+)

means ∑_j P(i,j,k) Q(l,j), contracting the second index of P with the second index of Q. Ordinary matrix products follow the same pattern:

> x=[[1,2,3],[4,5,6]]
> x
[[1,2,3],[4,5,6]]
> y=[[7,8],[9,10],[11,12]]
> x(,+)*y(+,)
[[39,54,69],[49,68,87],[59,82,105]]
> x(+,)*y(,+)
[[58,139],[64,154]]
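The same "+" marker handles matrix-vector products; a short sketch continuing the session above (result worked out by hand):

> v=[1,2]
> x(,+)*v(+)             // result(i) = x(i,1)*v(1) + x(i,2)*v(2)
[9,12,15]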
