In computing, row-major order and column-major order are methods for storing multidimensional arrays in linear storage such as random access memory.
The difference between the orders lies in which elements of an array are contiguous in memory. In row-major order, the consecutive elements of a row reside next to each other, whereas the same holds true for consecutive elements of a column in column-major order. While the terms allude to the rows and columns of a two-dimensional array, i.e. a matrix, the orders can be generalized to arrays of any dimension by noting that the terms row-major and column-major are equivalent to lexicographic and colexicographic orders, respectively. Since matrices are commonly represented as collections of row or column vectors, these orderings effectively store a matrix either as consecutive vectors or as consecutive vector components; such layouts are referred to as array-of-structures (AoS) and structure-of-arrays (SoA), respectively.
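For instance, a rough C sketch of the two layouts for a list of 3-component vectors (type names illustrative):

```c
/* Array of structures (AoS): each vector's components are contiguous,
 * analogous to storing a matrix as consecutive row vectors. */
struct Vec3 { float x, y, z; };
struct Vec3 vectors_aos[100];           /* x0 y0 z0 x1 y1 z1 ... */

/* Structure of arrays (SoA): each component's values are contiguous,
 * analogous to storing consecutive vector components. */
struct Vec3SoA {
    float x[100];                       /* x0 x1 x2 ... */
    float y[100];                       /* y0 y1 y2 ... */
    float z[100];                       /* z0 z1 z2 ... */
};
```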
Data layout is critical for correctly passing arrays between programs written in different programming languages. It is also important for performance when traversing an array, because modern CPUs process sequential data more efficiently than nonsequential data. This is primarily due to CPU caching, which exploits spatial locality of reference. [1] In addition, contiguous access makes it possible to use SIMD instructions that operate on vectors of data. In some media, such as magnetic-tape data storage, sequential access is orders of magnitude faster than nonsequential access.[citation needed]
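As an illustration, consider this minimal C sketch (function names illustrative). Both functions compute the same sum over a row-major array, but the first visits memory in storage order and typically benefits far more from caching on large matrices:

```c
#include <stddef.h>

#define N 1024

/* sum_row_wise visits elements in storage order (addresses increase
 * by one element per step), so it exploits spatial locality;
 * sum_column_wise jumps N elements per step and caches poorly. */
double sum_row_wise(const double a[N][N]) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)      /* row index varies slowest    */
        for (size_t j = 0; j < N; j++)  /* column index varies fastest */
            s += a[i][j];               /* sequential memory accesses  */
    return s;
}

double sum_column_wise(const double a[N][N]) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];               /* stride of N doubles per access */
    return s;
}
```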
The terms row-major and column-major stem from the terminology related to ordering objects. A general way to order objects with many attributes is to first group and order them by one attribute, and then, within each such group, group and order them by another attribute, etc. If more than one attribute participates in ordering, the first would be called major and the last minor. If two attributes participate in ordering, it is sufficient to name only the major attribute.
In the case of arrays, the attributes are the indices along each dimension. For matrices in mathematical notation, the first index indicates the row, and the second indicates the column, e.g., given a matrix $A$, the entry $a_{1,2}$ is in its first row and second column. This convention is carried over to the syntax in programming languages, [2] although often with indexes starting at 0 instead of 1. [3]
Even though the row is indicated by the first index and the column by the second index, no grouping order between the dimensions is implied by this. The choice of how to group and order the indices, either by row-major or column-major methods, is thus a matter of convention. The same terminology can be applied to even higher dimensional arrays. Row-major grouping starts from the leftmost index and column-major from the rightmost index, leading to lexicographic and colexicographic (or colex) orders, respectively.
For example, the 2-by-3 array

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$

could be stored in two possible ways:

Address | Row-major order | Column-major order |
---|---|---|
0 | $a_{11}$ | $a_{11}$ |
1 | $a_{12}$ | $a_{21}$ |
2 | $a_{13}$ | $a_{12}$ |
3 | $a_{21}$ | $a_{22}$ |
4 | $a_{22}$ | $a_{13}$ |
5 | $a_{23}$ | $a_{23}$ |
Programming languages handle this in different ways. In C, multidimensional arrays are stored in row-major order, and the array indexes are written row-first (lexicographical access order):
Address x + N_x*y | Access A[y][x] | Value |
---|---|---|
0 | A[0][0] | $a_{11}$ |
1 | A[0][1] | $a_{12}$ |
2 | A[0][2] | $a_{13}$ |
3 | A[1][0] | $a_{21}$ |
4 | A[1][1] | $a_{22}$ |
5 | A[1][2] | $a_{23}$ |
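The address formula in the table header can be checked directly; a small C sketch (assuming the usual flat view of a 2-D array):

```c
#include <assert.h>

int main(void) {
    /* The 2-by-3 example array, with values mirroring a_ij. */
    int A[2][3] = { {11, 12, 13}, {21, 22, 23} };
    const int Nx = 3;                   /* row length (number of columns) */

    int *flat = &A[0][0];               /* view the array as flat storage */
    for (int y = 0; y < 2; y++)
        for (int x = 0; x < Nx; x++)
            assert(flat[x + Nx * y] == A[y][x]);  /* row-major address */
    return 0;
}
```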
On the other hand, in Fortran, arrays are stored in column-major order, while the array indexes are still written row-first (colexicographical access order):
Address y + N_y*(x-1) | Access A(y,x) | Value |
---|---|---|
1 | A(1,1) | $a_{11}$ |
2 | A(2,1) | $a_{21}$ |
3 | A(1,2) | $a_{12}$ |
4 | A(2,2) | $a_{22}$ |
5 | A(1,3) | $a_{13}$ |
6 | A(2,3) | $a_{23}$ |
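The same one-based, column-major address computation can be emulated in C; a minimal sketch (function name illustrative):

```c
#include <stdio.h>

/* A Fortran array declared A(Ny, Nx) stores element A(y, x) at
 * one-based flat position y + Ny*(x - 1), the formula in the table. */
static int fortran_pos(int y, int x, int Ny) {
    return y + Ny * (x - 1);
}

int main(void) {
    const int Ny = 2;                   /* first (contiguous) dimension */
    for (int x = 1; x <= 3; x++)        /* positions come out as 1..6,  */
        for (int y = 1; y <= Ny; y++)   /* matching the table above     */
            printf("A(%d,%d) -> position %d\n", y, x, fortran_pos(y, x, Ny));
    return 0;
}
```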
Note how the use of `A[i][j]` with multi-step indexing as in C, as opposed to a neutral notation like `A(i,j)` as in Fortran, almost inevitably implies row-major order for syntactic reasons, so to speak, because it can be rewritten as `(A[i])[j]`, and the row part `A[i]` can even be assigned to an intermediate variable that is then indexed in a separate expression. (No other implications should be assumed, e.g., Fortran is not column-major simply because of its notation, and even the above implication could intentionally be circumvented in a new language.)
To use column-major order in a row-major environment, or vice versa, one workaround is to assign non-conventional roles to the indexes (using the first index for the column and the second for the row); another is to bypass language syntax by explicitly computing positions in a one-dimensional array. Deviating from convention incurs costs that grow with the degree of interaction with conventional language features and other code: increased vulnerability to mistakes (forgetting to also invert matrix multiplication order, reverting to convention during code maintenance, etc.), and the need to actively rearrange elements at interface boundaries. These costs have to be weighed against any original purpose, such as increased performance. Running loops row-wise is preferred in row-major languages like C, and column-wise in column-major languages.
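A minimal C sketch of the second workaround (macro name illustrative), storing a conceptually column-major matrix in a flat one-dimensional buffer:

```c
#include <stdlib.h>

/* Element (i, j) of an nrows-by-ncols column-major matrix lies at
 * flat index i + nrows*j (zero-based). */
#define COLMAJ(i, j, nrows) ((i) + (size_t)(nrows) * (j))

int main(void) {
    size_t nrows = 2, ncols = 3;
    double *m = malloc(nrows * ncols * sizeof *m);
    if (!m) return 1;

    for (size_t j = 0; j < ncols; j++)      /* outer loop over columns...  */
        for (size_t i = 0; i < nrows; i++)  /* ...so writes are sequential */
            m[COLMAJ(i, j, nrows)] = 10.0 * (i + 1) + (j + 1);

    free(m);
    return 0;
}
```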
Programming languages or their standard libraries that support multi-dimensional arrays typically have a native row-major or column-major storage order for these arrays.
Row-major order is used in C/C++/Objective-C (for C-style arrays), PL/I, [4] Pascal, [5] Speakeasy,[citation needed] and SAS. [6]
Column-major order is used in Fortran, [7] [8] IDL, [7] MATLAB, [8] GNU Octave, Julia, [9] S, S-PLUS, [10] R, [11] Scilab, [12] Yorick, and Rasdaman. [13]
A typical alternative for dense array storage is to use Iliffe vectors, which typically store pointers to elements in the same row contiguously (like row-major order), but not the rows themselves. They are used in (ordered by age): Java, [14] C#/CLI/.NET, Scala, [15] and Swift.
Even less dense is to use lists of lists, e.g., in Python, [16] and in the Wolfram Language of Wolfram Mathematica. [17]
An alternative approach uses tables of tables, e.g., in Lua. [18]
Support for multi-dimensional arrays may also be provided by external libraries, which may even support arbitrary orderings, where each dimension has a stride value, and row-major or column-major are just two possible resulting interpretations.
Row-major order is the default in NumPy [19] (for Python).
Column-major order is the default in Eigen [20] and Armadillo (both for C++).
A special case would be OpenGL (and OpenGL ES) for graphics processing. Since "recent mathematical treatments of linear algebra and related fields invariably treat vectors as columns," designer Mark Segal decided to substitute this for the convention in predecessor IRIS GL, which was to write vectors as rows; for compatibility, transformation matrices would still be stored in vector-major (=row-major) rather than coordinate-major (=column-major) order, and he then used the trick "[to] say that matrices in OpenGL are stored in column-major order". [21] This was really only relevant for presentation, because matrix multiplication was stack-based and could still be interpreted as post-multiplication, but, worse, reality leaked through the C-based API because individual elements would be accessed as `M[vector][coordinate]` or, effectively, `M[column][row]`, which unfortunately muddled the convention that the designer sought to adopt, and this was even preserved in the OpenGL Shading Language that was later added (although this also makes it possible to access coordinates by name instead, e.g., `M[vector].y`). As a result, many developers will now simply declare that having the column as the first index is the definition of column-major, even though this is clearly not the case with a real column-major language like Fortran.
Torch (for Lua) changed from column-major [22] to row-major [23] default order.
As exchanging the indices of an array is the essence of array transposition, an array stored as row-major but read as column-major (or vice versa) will appear transposed. As actually performing this rearrangement in memory is typically an expensive operation, some systems provide options to specify individual matrices as being stored transposed. The programmer must then decide whether or not to rearrange the elements in memory, based on the actual usage (including the number of times that the array is reused in a computation).
For example, the Basic Linear Algebra Subprograms functions are passed flags indicating which arrays are transposed. [24]
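For example, a sketch using the CBLAS interface (this assumes a CBLAS implementation is available to link against; the wrapper function name is illustrative). The layout flag declares the storage order, and the transpose flags ask the routine to read an operand as transposed without rearranging it in memory:

```c
#include <cblas.h>

/* Computes C = A^T * B without rearranging A in memory:
 * A is stored as a 3x2 row-major array but read as its 2x3
 * transpose via the CblasTrans flag; B is a 3x2 array used
 * as stored; the 2x2 result lands in C. */
void multiply_at_b(const double *A, const double *B, double *C) {
    cblas_dgemm(CblasRowMajor,          /* all arrays stored row-major  */
                CblasTrans,             /* read A as transposed (2x3)   */
                CblasNoTrans,           /* read B as stored (3x2)       */
                2, 2, 3,                /* M, N, K of the product       */
                1.0, A, 2,              /* lda = columns of A as stored */
                B, 2,                   /* ldb = columns of B as stored */
                0.0, C, 2);             /* ldc = columns of C           */
}
```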
The concept generalizes to arrays with more than two dimensions.
For a d-dimensional array with dimensions $N_k$ ($k = 1, \ldots, d$), a given element of this array is specified by a tuple $(n_1, n_2, \ldots, n_d)$ of $d$ zero-based indices $n_k \in [0, N_k - 1]$.
In row-major order, the last dimension is contiguous, so that the memory-offset of this element is given by:

$$n_d + N_d \cdot (n_{d-1} + N_{d-1} \cdot (n_{d-2} + N_{d-2} \cdot (\cdots + N_2 n_1) \cdots )) = \sum_{k=1}^{d} \left( \prod_{\ell=k+1}^{d} N_\ell \right) n_k$$

In column-major order, the first dimension is contiguous, so that the memory-offset of this element is given by:

$$n_1 + N_1 \cdot (n_2 + N_2 \cdot (n_3 + N_3 \cdot (\cdots + N_{d-1} n_d) \cdots )) = \sum_{k=1}^{d} \left( \prod_{\ell=1}^{k-1} N_\ell \right) n_k$$

where the empty product is the multiplicative identity element, i.e., $\prod_{\ell=d+1}^{d} N_\ell = \prod_{\ell=1}^{0} N_\ell = 1$.
For a given order, the stride in dimension k is given by the product in parentheses before index $n_k$ in the right-hand side summations above.
More generally, there are d! possible orders for a given array, one for each permutation of its dimensions (with row-major and column-major being just two special cases), although the lists of stride values are not necessarily permutations of each other; e.g., in the 2-by-3 example above, the strides are (3,1) for row-major and (1,2) for column-major.
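Both offset formulas can be evaluated with a Horner-style loop; a minimal C sketch (function names illustrative):

```c
#include <stddef.h>

/* Offset of the element with zero-based indices n[0..d-1] in a
 * d-dimensional array with extents N[0..d-1], evaluating the two
 * summations above in Horner form (no bounds checking). */
size_t offset_row_major(size_t d, const size_t n[], const size_t N[]) {
    size_t off = 0;
    for (size_t k = 0; k < d; k++)      /* last index varies fastest */
        off = off * N[k] + n[k];
    return off;
}

size_t offset_col_major(size_t d, const size_t n[], const size_t N[]) {
    size_t off = 0;
    for (size_t k = d; k-- > 0; )       /* first index varies fastest */
        off = off * N[k] + n[k];
    return off;
}
```

With d = 2 and extents {2, 3}, these reproduce the offsets 3y + x and y + 2x, i.e., the stride lists (3,1) and (1,2) quoted above.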
Quotations from the cited references:

[4] PL/I: "Initial values specified for an array are assigned to successive elements of the array in row-major order (final subscript varying most rapidly)."
[5] Pascal: "An array-type that specifies a sequence of two or more index-types shall be an abbreviated notation for an array-type specified to have as its index-type the first index-type in the sequence and to have a component-type that is an array-type specifying the sequence of index-types without the first index-type in the sequence and specifying the same component-type as the original specification."
[6] SAS: "From right to left, the rightmost dimension represents columns; the next dimension represents rows. [...] SAS places variables into a multidimensional array by filling all rows in order, beginning at the upper left corner of the array (known as row-major order)."
[12] Scilab: "Because Scilab stores arrays in column major format, the elements of a column are adjacent (i.e. a separation of 1) in linear format."
[20] Eigen: "If the storage order is not specified, then Eigen defaults to storing the entry in column-major."