Row- and column-major order

[Figure] Illustration of the difference between row-major and column-major ordering.

In computing, row-major order and column-major order are methods for storing multidimensional arrays in linear storage such as random access memory.

The difference between the orders lies in which elements of an array are contiguous in memory. In row-major order, the consecutive elements of a row reside next to each other, whereas in column-major order the same holds for consecutive elements of a column. While the terms allude to the rows and columns of a two-dimensional array, i.e. a matrix, the orders can be generalized to arrays of any dimension by noting that row-major and column-major order correspond to lexicographic and colexicographic order, respectively. Moreover, since matrices are commonly represented as collections of row or column vectors, these layouts amount to storing either whole vectors consecutively or corresponding vector components consecutively; the two approaches are referred to as AoS (array of structures) and SoA (structure of arrays), respectively.
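As a minimal sketch of the AoS and SoA terminology in C (the type names here are purely illustrative), a 2 × 3 matrix of row vectors can be declared either with each row vector stored whole or with corresponding components gathered together:

    /* Array of structures (AoS): each row vector is stored contiguously,
       so the overall layout matches row-major order. */
    struct RowVec3 { double x, y, z; };
    struct MatrixAoS { struct RowVec3 row[2]; };

    /* Structure of arrays (SoA): corresponding components of the row vectors
       are stored together, so the overall layout matches column-major order. */
    struct MatrixSoA { double x[2], y[2], z[2]; };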

Data layout is critical for correctly passing arrays between programs written in different programming languages. It is also important for performance when traversing an array, because modern CPUs process sequential data more efficiently than nonsequential data. This is primarily due to CPU caching, which exploits spatial locality of reference.[1] In addition, contiguous access makes it possible to use SIMD instructions that operate on vectors of data. In some media, such as magnetic-tape data storage, sequential access is orders of magnitude faster than nonsequential access.[citation needed]
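For instance, a sketch of the traversal-order effect in C (array dimensions chosen only for illustration): because C stores arrays in row-major order, the row-wise loop below walks memory sequentially, while the column-wise loop jumps a full row ahead on every step and therefore makes poorer use of the cache.

    #include <stddef.h>

    #define ROWS 1024
    #define COLS 1024

    static double a[ROWS][COLS];

    /* Row-wise traversal: consecutive iterations touch adjacent memory. */
    double sum_row_wise(void) {
        double sum = 0.0;
        for (size_t i = 0; i < ROWS; i++)
            for (size_t j = 0; j < COLS; j++)
                sum += a[i][j];
        return sum;
    }

    /* Column-wise traversal: consecutive iterations are COLS elements apart. */
    double sum_column_wise(void) {
        double sum = 0.0;
        for (size_t j = 0; j < COLS; j++)
            for (size_t i = 0; i < ROWS; i++)
                sum += a[i][j];
        return sum;
    }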

Explanation and example

The terms row-major and column-major stem from the terminology related to ordering objects. A general way to order objects with many attributes is to first group and order them by one attribute, and then, within each such group, group and order them by another attribute, etc. If more than one attribute participates in ordering, the first would be called major and the last minor. If two attributes participate in ordering, it is sufficient to name only the major attribute.

In the case of arrays, the attributes are the indices along each dimension. For matrices in mathematical notation, the first index indicates the row, and the second indicates the column, e.g., given a matrix A, the entry a12 is in its first row and second column. This convention is carried over to the syntax in programming languages,[2] although often with indexes starting at 0 instead of 1.[3]

Even though the row is indicated by the first index and the column by the second index, no grouping order between the dimensions is implied by this. The choice of how to group and order the indices, either by row-major or column-major methods, is thus a matter of convention. The same terminology can be applied to even higher dimensional arrays. Row-major grouping starts from the leftmost index and column-major from the rightmost index, leading to lexicographic and colexicographic (or colex) orders, respectively.

For example, the 2 × 3 array

    A = ( a11  a12  a13 )
        ( a21  a22  a23 )

could be stored in two possible ways:

    Address   Row-major order   Column-major order
    0         a11               a11
    1         a12               a21
    2         a13               a12
    3         a21               a22
    4         a22               a13
    5         a23               a23

Programming languages handle this in different ways. In C, multidimensional arrays are stored in row-major order, and the array indexes are written row-first (lexicographical access order):

C: row-major order (lexicographical access order), zero-based indexing

    Address (x + N_x*y)   Access A[y][x]   Value
    0                     A[0][0]          a11
    1                     A[0][1]          a12
    2                     A[0][2]          a13
    3                     A[1][0]          a21
    4                     A[1][1]          a22
    5                     A[1][2]          a23
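A short C sketch (the variable names are illustrative) confirming the table above: viewing the 2 × 3 array through a flat pointer shows that element A[y][x] lies at linear offset x + N_x*y.

    #include <assert.h>

    int main(void) {
        enum { N_y = 2, N_x = 3 };          /* 2 rows, 3 columns */
        int A[N_y][N_x] = { {11, 12, 13},   /* stands in for a11, a12, a13 */
                            {21, 22, 23} }; /* stands in for a21, a22, a23 */

        const int *flat = &A[0][0];         /* the same storage viewed linearly */

        /* In row-major C, A[y][x] is stored at flat offset x + N_x*y. */
        for (int y = 0; y < N_y; y++)
            for (int x = 0; x < N_x; x++)
                assert(A[y][x] == flat[x + N_x * y]);

        return 0;
    }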

On the other hand, in Fortran, arrays are stored in column-major order, while the array indexes are still written row-first (colexicographical access order):

Fortran: column-major order (colexicographical access order), one-based indexing

    Address (y + N_y*(x-1))   Access A(y,x)   Value
    1                         A(1,1)          a11
    2                         A(2,1)          a21
    3                         A(1,2)          a12
    4                         A(2,2)          a22
    5                         A(1,3)          a13
    6                         A(2,3)          a23

Note how the use of A[i][j] with multi-step indexing as in C, as opposed to a neutral notation like A(i,j) as in Fortran, almost inevitably implies row-major order for syntactic reasons: the expression can be rewritten as (A[i])[j], and the row part A[i] can even be assigned to an intermediate variable that is then indexed in a separate expression. (No other implications should be assumed; e.g., Fortran is not column-major simply because of its notation, and even the above implication could intentionally be circumvented in a new language.)

To use column-major order in a row-major environment, or vice versa, for whatever reason, one workaround is to assign non-conventional roles to the indexes (using the first index for the column and the second index for the row); another is to bypass language syntax by explicitly computing positions in a one-dimensional array, as sketched below. Deviating from convention incurs a cost that increases with the degree of interaction with conventional language features and other code: it raises the vulnerability to mistakes (forgetting to also invert matrix multiplication order, reverting to convention during code maintenance, etc.) and may require actively rearranging elements, all of which has to be weighed against the original purpose, such as increased performance. Running the loop row-wise is preferred in row-major languages like C, and vice versa for column-major languages.
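A minimal sketch of the second workaround in C (the helper name cm_index is made up for illustration): the matrix is kept in a one-dimensional array and addressed with an explicit column-major formula, so the storage order no longer depends on the language's native convention.

    #include <stddef.h>
    #include <stdio.h>

    /* Column-major offset for a matrix with n_rows rows, zero-based indices:
       element (row, col) lives at row + n_rows*col. */
    static size_t cm_index(size_t row, size_t col, size_t n_rows) {
        return row + n_rows * col;
    }

    int main(void) {
        enum { ROWS = 2, COLS = 3 };
        double m[ROWS * COLS];

        /* Fill m with the values of the 2 x 3 example matrix, column by column. */
        for (size_t col = 0; col < COLS; col++)
            for (size_t row = 0; row < ROWS; row++)
                m[cm_index(row, col, ROWS)] = 10.0 * (row + 1) + (col + 1);  /* a_(row+1)(col+1) */

        /* m now holds a11, a21, a12, a22, a13, a23 -- column-major order. */
        for (size_t k = 0; k < ROWS * COLS; k++)
            printf("%g ", m[k]);
        printf("\n");
        return 0;
    }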

Programming languages and libraries

Programming languages or their standard libraries that support multi-dimensional arrays typically have a native row-major or column-major storage order for these arrays.

Row-major order is used in C/C++/Objective-C (for C-style arrays), PL/I,[4] Pascal,[5] Speakeasy,[citation needed] and SAS.[6]

Column-major order is used in Fortran, MATLAB,[7] GNU Octave, Julia,[8] S, S-PLUS,[9] R,[10] Scilab,[11] Yorick, and Rasdaman.[12]

Neither row-major nor column-major

A typical alternative for dense array storage is to use Iliffe vectors, which keep the elements of each row contiguous (as in row-major order) but access the rows through a separate vector of row pointers, so the rows themselves need not be contiguous with one another (see the sketch below). They are used in (ordered by age): Java,[13] C#/CLI/.NET, Scala,[14] and Swift.
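A hedged C sketch of such a layout (error handling omitted): each row is allocated separately and an array of pointers selects the row, so m[i][j] still works syntactically even though the rows may lie anywhere in memory.

    #include <stdlib.h>

    /* Allocate a rows-by-cols matrix as an Iliffe vector:
       an array of row pointers, each pointing to a contiguous row. */
    double **make_matrix(size_t rows, size_t cols) {
        double **m = malloc(rows * sizeof *m);
        for (size_t i = 0; i < rows; i++)
            m[i] = malloc(cols * sizeof **m);  /* row i is contiguous */
        return m;  /* element (i, j) is then accessed as m[i][j] */
    }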

An even less dense approach is to use lists of lists, e.g., in Python,[15] and in the Wolfram Language of Wolfram Mathematica.[16]

An alternative approach uses tables of tables, e.g., in Lua.[17]

External libraries

Support for multi-dimensional arrays may also be provided by external libraries, which may even support arbitrary orderings, where each dimension has a stride value, and row-major or column-major are just two possible resulting interpretations.

Row-major order is the default in NumPy[18] (for Python).

Column-major order is the default in Eigen[19] and Armadillo (both for C++).

A special case is OpenGL (and its offshoot OpenGL ES) for graphics processing. Since "recent mathematical treatments of linear algebra and related fields invariably treat vectors as columns," designer Mark Segal decided to substitute this for the convention of the predecessor IRIS GL, which was to write vectors as rows. For compatibility, transformation matrices would still be stored in vector-major (= row-major) rather than coordinate-major (= column-major) order, and he then used the trick "[to] say that matrices in OpenGL are stored in column-major order".[20] This was really only relevant for presentation, because matrix multiplication was stack-based and could still be interpreted as post-multiplication. Worse, however, reality leaked through the C-based API, because individual elements would be accessed as M[vector][coordinate] or, effectively, M[column][row], which muddled the convention the designer sought to adopt; this was even preserved in the OpenGL Shading Language that was added later (although that language also makes it possible to access coordinates by name, e.g., M[vector].y). As a result, many developers will now simply declare that having the column as the first index is the definition of column-major, even though this is clearly not the case with a real column-major language like Fortran.

Torch (for Lua) changed from column-major[21] to row-major[22] default order.

Transposition

As exchanging the indices of an array is the essence of array transposition, an array stored as row-major but read as column-major (or vice versa) will appear transposed (as long as the matrix is square). As actually performing this rearrangement in memory is typically an expensive operation, some systems provide options to specify individual matrices as being stored transposed. The programmer must then decide whether or not to rearrange the elements in memory, based on the actual usage (including the number of times that the array is reused in a computation).

For example, the Basic Linear Algebra Subprograms functions are passed flags indicating which arrays are transposed.[23]
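As a hedged illustration using the C interface to BLAS (CBLAS; the exact header name and link flags depend on the BLAS implementation installed), the transpose flag lets the caller multiply by the transpose of A without rearranging A in memory, while the order flag states whether the arrays are laid out row-major or column-major.

    /* Build with, e.g.: cc gemm_demo.c -lcblas (or link against OpenBLAS/ATLAS). */
    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        /* A and B are stored in row-major order as 3 x 2 matrices. */
        double A[3 * 2] = { 1, 2,
                            3, 4,
                            5, 6 };
        double B[3 * 2] = { 1, 0,
                            0, 1,
                            1, 1 };
        double C[2 * 2] = { 0 };  /* result: 2 x 2 */

        /* C = 1.0 * transpose(A) * B + 0.0 * C.
           CblasTrans tells BLAS to treat A as transposed instead of
           physically rearranging it in memory. */
        cblas_dgemm(CblasRowMajor, CblasTrans, CblasNoTrans,
                    2 /*M*/, 2 /*N*/, 3 /*K*/,
                    1.0, A, 2 /*lda*/, B, 2 /*ldb*/,
                    0.0, C, 2 /*ldc*/);

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 6 8 / 8 10 */
        return 0;
    }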

Address calculation in general

The concept generalizes to arrays with more than two dimensions.

For a d-dimensional array with dimensions N_k (k = 1, ..., d), a given element of this array is specified by a tuple of d (zero-based) indices (n_1, n_2, ..., n_d) with 0 <= n_k < N_k.

In row-major order, the last dimension is contiguous, so that the memory offset of this element is given by:

    offset_row-major = n_d + N_d*(n_{d-1} + N_{d-1}*(n_{d-2} + ... + N_3*(n_2 + N_2*n_1) ... ))
                     = sum_{k=1..d} ( N_{k+1} * N_{k+2} * ... * N_d ) * n_k

In column-major order, the first dimension is contiguous, so that the memory offset of this element is given by:

    offset_column-major = n_1 + N_1*(n_2 + N_2*(n_3 + ... + N_{d-2}*(n_{d-1} + N_{d-1}*n_d) ... ))
                        = sum_{k=1..d} ( N_1 * N_2 * ... * N_{k-1} ) * n_k

where the empty product (for k = d in the row-major sum and k = 1 in the column-major sum) is the multiplicative identity element, i.e., 1.

For a given order, the stride in dimension k is the product multiplying the index n_k in the corresponding sum above, i.e., N_{k+1}*...*N_d for row-major order and N_1*...*N_{k-1} for column-major order.

More generally, there are d! possible orders for a given array, one for each permutation of dimensions (with row-major and column-major order being just two special cases), although the lists of stride values are not necessarily permutations of each other, e.g., in the 2-by-3 example above, the strides are (3,1) for row-major and (1,2) for column-major.
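A sketch of these offset formulas in C (the function names are chosen here for illustration): the strides are precomputed for either order, and the offset of an element is then a single dot product of strides and indices.

    #include <stddef.h>
    #include <stdio.h>

    /* Row-major strides: stride[k] = N[k+1]*...*N[d-1] (zero-based k). */
    static void strides_row_major(const size_t *N, size_t d, size_t *stride) {
        size_t s = 1;
        for (size_t k = d; k-- > 0; ) { stride[k] = s; s *= N[k]; }
    }

    /* Column-major strides: stride[k] = N[0]*...*N[k-1]. */
    static void strides_col_major(const size_t *N, size_t d, size_t *stride) {
        size_t s = 1;
        for (size_t k = 0; k < d; k++) { stride[k] = s; s *= N[k]; }
    }

    /* Offset of the element with zero-based indices n[0..d-1]. */
    static size_t offset(const size_t *stride, const size_t *n, size_t d) {
        size_t off = 0;
        for (size_t k = 0; k < d; k++) off += stride[k] * n[k];
        return off;
    }

    int main(void) {
        size_t N[2] = { 2, 3 };           /* the 2-by-3 example */
        size_t rm[2], cm[2];
        strides_row_major(N, 2, rm);      /* rm = {3, 1} */
        strides_col_major(N, 2, cm);      /* cm = {1, 2} */

        size_t n[2] = { 1, 2 };           /* element a23, zero-based indices */
        printf("row-major offset: %zu\n", offset(rm, n, 2));     /* 5 */
        printf("column-major offset: %zu\n", offset(cm, n, 2));  /* 5 */
        return 0;
    }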


References

  1. "Cache Memory". Peter Lars Dordal. Retrieved 2021-04-10.
  2. "Arrays and Formatted I/O". FORTRAN Tutorial. Retrieved 19 November 2016.
  3. "Why numbering should start at zero". E. W. Dijkstra Archive. Retrieved 2 February 2017.
  4. "Language Reference Version 4 Release 3" (PDF). IBM. Retrieved 13 November 2017. Initial values specified for an array are assigned to successive elements of the array in row-major order (final subscript varying most rapidly).
  5. "ISO/IEC 7185:1990(E)" (PDF). An array-type that specifies a sequence of two or more index-types shall be an abbreviated notation for an array-type specified to have as its index-type the first index-type in the sequence and to have a component-type that is an array-type specifying the sequence of index-types without the first index-type in the sequence and specifying the same component-type as the original specification.
  6. "SAS® 9.4 Language Reference: Concepts, Sixth Edition" (PDF). SAS Institute Inc. September 6, 2017. p. 573. Retrieved 18 November 2017. From right to left, the rightmost dimension represents columns; the next dimension represents rows. [...] SAS places variables into a multidimensional array by filling all rows in order, beginning at the upper left corner of the array (known as row-major order).
  7. MATLAB documentation, MATLAB Data Storage (retrieved from Mathworks.co.uk, January 2014).
  8. "Multi-dimensional Arrays". Julia. Retrieved 9 November 2020.
  9. Spiegelhalter et al. (2003, p. 17): Spiegelhalter, David; Thomas, Andrew; Best, Nicky; Lunn, Dave (January 2003), "Formatting of data: S-Plus format", WinBUGS User Manual (PDF) (Version 1.4 ed.), Cambridge, UK: MRC Biostatistics Unit, Institute of Public Health, archived from the original (PDF) on 2003-05-18
  10. An Introduction to R, Section 5.1: Arrays (retrieved March 2010).
  11. "FFTs with multidimensional data". Scilab Wiki. Retrieved 25 November 2017. Because Scilab stores arrays in column major format, the elements of a column are adjacent (i.e. a separation of 1) in linear format.
  12. "Internal array representation in rasdaman". rasdaman.org. Retrieved 29 May 2022.
  13. "Java Language Specification". Oracle. Retrieved 13 February 2016.
  14. "object Array". Scala Standard Library. Retrieved 1 May 2016.
  15. "The Python Standard Library: 8. Data Types" . Retrieved 18 November 2017.
  16. "Vectors and Matrices". Wolfram. Retrieved 12 November 2017.
  17. "11.2 – Matrices and Multi-Dimensional Arrays" . Retrieved 6 February 2016.
  18. "The N-dimensional array (ndarray)". SciPy.org. Retrieved 3 April 2016.
  19. "Eigen: Storage orders". eigen.tuxfamily.org. Retrieved 2017-11-23. If the storage order is not specified, then Eigen defaults to storing the entry in column-major.
  20. "Column Vectors Vs. Row Vectors" . Retrieved 12 November 2017.
  21. "Tensor" . Retrieved 6 February 2016.
  22. "Tensor". Torch Package Reference Manual. Retrieved 8 May 2016.
  23. "BLAS (Basic Linear Algebra Subprograms)" . Retrieved 2015-05-16.
