In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.
Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section.
Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the dependent variable can be any of these as well. Each different situation will lead to a different set of rules, or a separate calculus, using the broader sense of the term. Matrix notation serves as a convenient way to collect the many derivatives in an organized way.
As a first example, consider the gradient from vector calculus. For a scalar function of three independent variables, f(x₁, x₂, x₃), the gradient is given by the vector equation

∇f = (∂f/∂x₁) x̂₁ + (∂f/∂x₂) x̂₂ + (∂f/∂x₃) x̂₃,

where x̂ᵢ represents a unit vector in the xᵢ direction for 1 ≤ i ≤ 3. This type of generalized derivative can be seen as the derivative of a scalar, f, with respect to a vector, x = [x₁ x₂ x₃]ᵀ, and its result can be easily collected in vector form.
More complicated examples include the derivative of a scalar function with respect to a matrix, known as the gradient matrix, which collects the derivative with respect to each matrix element in the corresponding position in the resulting matrix. In that case the scalar must be a function of each of the independent variables in the matrix. As another example, if we have an n-vector of dependent variables, or functions, of m independent variables we might consider the derivative of the dependent vector with respect to the independent vector. The result could be collected in an m×n matrix consisting of all of the possible derivative combinations.
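As a numerical illustration of collecting all of the possible derivative combinations into one matrix, the following sketch approximates every partial derivative ∂yᵢ/∂xⱼ of a vector function by central differences and stores them in a single array. NumPy is assumed, and the helper name `jacobian_fd` is illustrative, not a standard API.

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Collect all partials df_i/dx_j of a vector-valued function f
    at x into one matrix by central differences. Rows index the
    components of f, columns the components of x (numerator layout)."""
    x = np.asarray(x, dtype=float)
    m = f(x).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (f(x + step) - f(x - step)) / (2 * eps)
    return J

# Example: f(x) = (x0*x1, x0**2) has exact Jacobian [[x1, x0], [2*x0, 0]].
f = lambda x: np.array([x[0] * x[1], x[0] ** 2])
J = jacobian_fd(f, [2.0, 3.0])
```

At x = (2, 3) the exact matrix is [[3, 2], [4, 0]], which the finite-difference version reproduces to rounding error.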
There are a total of nine possibilities using scalars, vectors, and matrices. Notice that as we consider higher numbers of components in each of the independent and dependent variables we can be left with a very large number of possibilities. The six kinds of derivatives that can be most neatly organized in matrix form are collected in the following table. [1]
Types | Scalar | Vector | Matrix |
---|---|---|---|
Scalar | ∂y/∂x | ∂y/∂x | ∂y/∂X |
Vector | ∂y/∂x | ∂y/∂x | |
Matrix | ∂Y/∂x | | |
Here, we have used the term "matrix" in its most general sense, recognizing that vectors are simply matrices with one column (and scalars are simply matrices with one row and one column). Moreover, we have used bold letters to indicate vectors and bold capital letters for matrices. This notation is used throughout.
Notice that we could also talk about the derivative of a vector with respect to a matrix, or any of the other unfilled cells in our table. However, these derivatives are most naturally organized in a tensor of rank higher than 2, so that they do not fit neatly into a matrix. In the following three sections we will define each one of these derivatives and relate them to other branches of mathematics. See the layout conventions section for a more detailed table.
The matrix derivative is a convenient notation for keeping track of partial derivatives for doing calculations. The Fréchet derivative is the standard way in the setting of functional analysis to take derivatives with respect to vectors. In the case that a matrix function of a matrix is Fréchet differentiable, the two derivatives will agree up to translation of notations. As is the case in general for partial derivatives, some formulae may extend under weaker analytic conditions than the existence of the derivative as approximating linear mapping.
Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of:

  * the Kalman filter,
  * the Wiener filter, and
  * the expectation-maximization algorithm for Gaussian mixture models.
The vector and matrix derivatives presented in the sections to follow take full advantage of matrix notation, using a single variable to represent a large number of variables. In what follows we will distinguish scalars, vectors and matrices by their typeface. We will let M(n,m) denote the space of real n×m matrices with n rows and m columns. Such matrices will be denoted using bold capital letters: A, X, Y, etc. An element of M(n,1), that is, a column vector, is denoted with a boldface lowercase letter: a, x, y, etc. An element of M(1,1) is a scalar, denoted with lowercase italic typeface: a, t, x, etc. Xᵀ denotes matrix transpose, tr(X) is the trace, and det(X) or |X| is the determinant. All functions are assumed to be of differentiability class C1 unless otherwise noted. Generally letters from the first half of the alphabet (a, b, c, ...) will be used to denote constants, and from the second half (t, x, y, ...) to denote variables.
NOTE: As mentioned above, there are competing notations for laying out systems of partial derivatives in vectors and matrices, and no standard appears to be emerging yet. The next two introductory sections use the numerator layout convention simply for the purposes of convenience, to avoid overly complicating the discussion. The section after them discusses layout conventions in more detail. It is important to realize that the choice of layout convention affects the form of every result below, so an author's convention must be identified before their formulas are reused.
The tensor index notation with its Einstein summation convention is very similar to the matrix calculus, except one writes only a single component at a time. It has the advantage that one can easily manipulate arbitrarily high rank tensors, whereas tensors of rank higher than two are quite unwieldy with matrix notation. All of the work here can be done in this notation without use of the single-variable matrix notation. However, many problems in estimation theory and other areas of applied mathematics would result in too many indices to properly keep track of, pointing in favor of matrix calculus in those areas. Also, Einstein notation can be very useful in proving the identities presented here (see section on differentiation) as an alternative to typical element notation, which can become cumbersome when the explicit sums are carried around. Note that a matrix can be considered a tensor of rank two.
Because vectors are matrices with only one column, the simplest matrix derivatives are vector derivatives.
The notations developed here can accommodate the usual operations of vector calculus by identifying the space M(n,1) of n-vectors with the Euclidean space Rn, and the scalar M(1,1) is identified with R. The corresponding concept from vector calculus is indicated at the end of each subsection.
NOTE: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.
The derivative of a vector y = [y₁ y₂ ⋯ yₘ]ᵀ, by a scalar x is written (in numerator layout notation) as

∂y/∂x = [∂y₁/∂x ∂y₂/∂x ⋯ ∂yₘ/∂x]ᵀ.
In vector calculus the derivative of a vector y with respect to a scalar x is known as the tangent vector of the vector y, ∂y/∂x. Notice here that y: R¹ → Rᵐ.
Example Simple examples of this include the velocity vector in Euclidean space, which is the tangent vector of the position vector (considered as a function of time). Also, the acceleration is the tangent vector of the velocity.
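The velocity and acceleration example can be checked numerically. In this sketch (NumPy assumed; `tangent` is an illustrative helper, not a library function) a position vector on the unit circle is differentiated componentwise to obtain the velocity, and differentiated again for the acceleration.

```python
import numpy as np

# Position of a particle on the unit circle, as a function of time t.
position = lambda t: np.array([np.cos(t), np.sin(t)])

def tangent(f, t, eps=1e-6):
    """Derivative of a vector function by a scalar: differentiate each
    component by a central difference, giving the tangent vector."""
    return (f(t + eps) - f(t - eps)) / (2 * eps)

t = 0.5
velocity = tangent(position, t)                     # exact: (-sin t, cos t)
acceleration = tangent(lambda s: tangent(position, s), t, eps=1e-4)
# exact acceleration: (-cos t, -sin t), pointing back toward the origin
```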
The derivative of a scalar y by a vector x = [x₁ x₂ ⋯ xₙ]ᵀ, is written (in numerator layout notation) as

∂y/∂x = [∂y/∂x₁ ∂y/∂x₂ ⋯ ∂y/∂xₙ].
In vector calculus, the gradient of a scalar field f : Rn → R (whose independent coordinates are the components of x) is the transpose of the derivative of a scalar by a vector.
For example, in physics, the electric field is the negative vector gradient of the electric potential.
The directional derivative of a scalar function f(x) of the space vector x in the direction of the unit vector u (represented in this case as a column vector) is defined using the gradient as follows:

∇_u f(x) = ∇f(x) · u.
Using the notation just defined for the derivative of a scalar with respect to a vector, we can re-write the directional derivative as ∇_u f = (∂f/∂x) u. This type of notation will be convenient when proving product rules and chain rules, since they come out looking similar to the familiar rules for the scalar derivative.
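The identity "directional derivative = (row gradient) times (direction vector)" can be verified numerically. This sketch assumes NumPy; `grad_row` is an illustrative helper that builds the numerator-layout (row-vector) derivative of a scalar by a vector.

```python
import numpy as np

def grad_row(f, x, eps=1e-6):
    """Numerator-layout derivative of a scalar by a vector:
    a row vector of partials df/dx_j (central differences)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros(x.size)
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        g[j] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

# f(x) = x0^2 + 3*x1 has gradient (2*x0, 3).
f = lambda x: x[0] ** 2 + 3 * x[1]
x = np.array([1.0, 2.0])
u = np.array([0.6, 0.8])                 # a unit vector
directional = grad_row(f, x) @ u         # (df/dx) u = 2*0.6 + 3*0.8 = 3.6
```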
Each of the previous two cases can be considered as an application of the derivative of a vector with respect to a vector, using a vector of size one appropriately. Similarly we will find that the derivatives involving matrices will reduce to derivatives involving vectors in a corresponding way.
The derivative of a vector function (a vector whose components are functions) y = [y₁ y₂ ⋯ yₘ]ᵀ, with respect to an input vector, x = [x₁ x₂ ⋯ xₙ]ᵀ, is written (in numerator layout notation) as

∂y/∂x = [∂yᵢ/∂xⱼ], the m×n matrix whose rows are indexed by the components of y and whose columns are indexed by the components of x.
In vector calculus, the derivative of a vector function y with respect to a vector x whose components represent a space is known as the pushforward (or differential), or the Jacobian matrix ∂y/∂x.
The pushforward along a vector function f with respect to vector v in Rⁿ is given by d f(v) = (∂f/∂v) dv.
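The pushforward can be approximated directly as a directional (Gateaux) derivative, without forming the full Jacobian: perturb the input along v and difference. The sketch below assumes NumPy; `pushforward` is an illustrative helper name.

```python
import numpy as np

def pushforward(f, x, v, eps=1e-6):
    """Directional derivative of a vector function f at x along v,
    which equals the Jacobian of f at x applied to v."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

# f(x) = (x0*x1, x0 + x1) has Jacobian [[x1, x0], [1, 1]].
f = lambda x: np.array([x[0] * x[1], x[0] + x[1]])
x = np.array([2.0, 3.0])
v = np.array([1.0, -1.0])
out = pushforward(f, x, v)      # exact: J @ v = (3 - 2, 1 - 1) = (1, 0)
```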
There are two types of derivatives with matrices that can be organized into a matrix of the same size. These are the derivative of a matrix by a scalar and the derivative of a scalar by a matrix. These can be useful in minimization problems found in many areas of applied mathematics and have adopted the names tangent matrix and gradient matrix respectively after their analogs for vectors.
Note: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.
The derivative of a matrix function Y by a scalar x is known as the tangent matrix and is given (in numerator layout notation) by

∂Y/∂x = [∂Yᵢⱼ/∂x], the m×n matrix whose (i, j) entry is the derivative of the corresponding element of Y.
The derivative of a scalar function y, with respect to a p×q matrix X of independent variables, is given (in numerator layout notation) by the q×p matrix

∂y/∂X, whose (i, j) entry is ∂y/∂Xⱼᵢ.
Important examples of scalar functions of matrices include the trace of a matrix and the determinant.
In analog with vector calculus this derivative is often written as the gradient matrix, ∇_X y(X), whose (i, j) entry is ∂y/∂Xᵢⱼ, so that it has the same shape as X.
Also in analog with vector calculus, the directional derivative of a scalar f(X) of a matrix X in the direction of matrix Y is given by

∇_Y f = tr((∂f/∂X) Y).
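Both the gradient matrix and the trace form of the directional derivative can be checked numerically. This sketch (NumPy assumed; `grad_matrix` is an illustrative helper) uses f(X) = tr(X²), whose denominator-layout gradient is 2Xᵀ, so the directional derivative along Y equals 2 tr(XY).

```python
import numpy as np

def grad_matrix(f, X, eps=1e-6):
    """Denominator-layout derivative of a scalar by a matrix:
    the (i, j) entry is df/dX_ij, laid out with the shape of X."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

f = lambda M: np.trace(M @ M)
G = grad_matrix(f, X)            # exact gradient: 2 * X.T
directional = np.sum(G * Y)      # elementwise sum; equals 2 * tr(X Y)
```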
It is the gradient matrix, in particular, that finds many uses in minimization problems in estimation theory, particularly in the derivation of the Kalman filter algorithm, which is of great importance in the field.
The three types of derivatives that have not been considered are those involving vectors-by-matrices, matrices-by-vectors, and matrices-by-matrices. These are not as widely considered and a notation is not widely agreed upon.
This section discusses the similarities and differences between notational conventions that are used in the various fields that take advantage of matrix calculus. Although there are largely two consistent conventions, some authors find it convenient to mix the two conventions in forms that are discussed below. After this section, equations will be listed in both competing forms separately.
The fundamental issue is that the derivative of a vector with respect to a vector, i.e. ∂y/∂x, is often written in two competing ways. If the numerator y is of size m and the denominator x of size n, then the result can be laid out as either an m×n matrix or n×m matrix, i.e. the m elements of y laid out in rows and the n elements of x laid out in columns, or vice versa. This leads to the following possibilities:

  * Numerator layout, which lays the result out according to y and xᵀ (an m×n matrix), sometimes known as the Jacobian formulation.
  * Denominator layout, which lays the result out according to yᵀ and x (an n×m matrix), sometimes known as the Hessian formulation; it is the transpose of the Jacobian formulation.
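The two layouts differ only by a transpose, which is easy to see numerically. The sketch below (NumPy assumed; `jacobian` is an illustrative helper) builds the numerator-layout m×n matrix for a map from R³ to R²; the denominator-layout result is its transpose.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Partials df_i/dx_j with rows indexed by y (numerator layout)."""
    m = f(x).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (f(x + step) - f(x - step)) / (2 * eps)
    return J

# y = f(x) maps R^3 -> R^2, so m = 2 and n = 3.
f = lambda x: np.array([x[0] + 2 * x[1], x[2] ** 2])
x = np.array([1.0, 1.0, 2.0])

J_num = jacobian(f, x)    # numerator layout: 2x3 matrix
J_den = J_num.T           # denominator layout: 3x2, the transpose
```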
When handling the gradient ∇y and the opposite case ∂y/∂x we have the same issues. To be consistent, we should do one of the following:

  * If we choose numerator layout throughout, lay out the gradient ∇y as a row vector and the vector-by-scalar derivative ∂y/∂x as a column vector.
  * If we choose denominator layout throughout, lay out the gradient ∇y as a column vector and the vector-by-scalar derivative ∂y/∂x as a row vector.
Not all math textbooks and papers are consistent in this respect throughout. That is, sometimes different conventions are used in different contexts within the same book or paper. For example, some choose denominator layout for gradients (laying them out as column vectors), but numerator layout for the vector-by-vector derivative ∂y/∂x.
Similarly, when it comes to scalar-by-matrix derivatives ∂y/∂X and matrix-by-scalar derivatives ∂Y/∂x, consistent numerator layout lays out according to Y and Xᵀ, while consistent denominator layout lays out according to Yᵀ and X. In practice, however, following a denominator layout for ∂Y/∂x and laying the result out according to Yᵀ is rarely seen because it makes for ugly formulas that do not correspond to the scalar formulas. As a result, the following layouts can often be found:
In the following formulas, we handle the five possible combinations (scalar-by-vector, vector-by-scalar, vector-by-vector, scalar-by-matrix, and matrix-by-scalar) separately. We also handle cases of scalar-by-scalar derivatives that involve an intermediate vector or matrix. (This can arise, for example, if a multi-dimensional parametric curve is defined in terms of a scalar variable, and then a derivative of a scalar function of the curve is taken with respect to the scalar that parameterizes the curve.) For each of the various combinations, we give numerator-layout and denominator-layout results, except in the cases above where denominator layout rarely occurs. In cases involving matrices where it makes sense, we give numerator-layout and mixed-layout results. As noted above, cases where vector and matrix denominators are written in transpose notation are equivalent to numerator layout with the denominators written without the transpose.
Keep in mind that various authors use different combinations of numerator and denominator layouts for different types of derivatives, and there is no guarantee that an author will consistently use either numerator or denominator layout for all types. Match up the formulas below with those quoted in the source to determine the layout used for that particular type of derivative, but be careful not to assume that derivatives of other types necessarily follow the same kind of layout.
When taking derivatives with an aggregate (vector or matrix) denominator in order to find a maximum or minimum of the aggregate, it should be kept in mind that using numerator layout will produce results that are transposed with respect to the aggregate. For example, in attempting to find the maximum likelihood estimate of a multivariate normal distribution using matrix calculus, if the domain is a k×1 column vector, then the result using the numerator layout will be in the form of a 1×k row vector. Thus, either the results should be transposed at the end or the denominator layout (or mixed layout) should be used.
 | | Scalar y | | Column vector y (size m×1) | | Matrix Y (size m×n) | |
---|---|---|---|---|---|---|---|
 | | Notation | Type | Notation | Type | Notation | Type |
Scalar x | Numerator | ∂y/∂x | Scalar | ∂y/∂x | Size-m column vector | ∂Y/∂x | m×n matrix |
 | Denominator | | | ∂y/∂x | Size-m row vector | | |
Column vector x (size n×1) | Numerator | ∂y/∂x | Size-n row vector | ∂y/∂x | m×n matrix | | |
 | Denominator | ∂y/∂x | Size-n column vector | ∂y/∂x | n×m matrix | | |
Matrix X (size p×q) | Numerator | ∂y/∂X | q×p matrix | | | | |
 | Denominator | ∂y/∂X | p×q matrix | | | | |
The results of operations will be transposed when switching between numerator-layout and denominator-layout notation.
Using numerator-layout notation, we have: [1]
The following definitions are only provided in numerator-layout notation:
Using denominator-layout notation, we have: [2]
As noted above, in general, the results of operations will be transposed when switching between numerator-layout and denominator-layout notation.
To help make sense of all the identities below, keep in mind the most important rules: the chain rule, product rule and sum rule. The sum rule applies universally, and the product rule applies in most of the cases below, provided that the order of matrix products is maintained, since matrix products are not commutative. The chain rule applies in some of the cases, but unfortunately does not apply in matrix-by-scalar derivatives or scalar-by-matrix derivatives (in the latter case, mostly involving the trace operator applied to matrices). In the latter case, the product rule cannot quite be applied directly either, but the equivalent can be done with a bit more work using the differential identities.
The following identities adopt these conventions:

  * a, b, c denote scalar constants, and u = u(x), v = v(x) denote scalar functions;
  * a, b denote constant vectors, and u = u(x), v = v(x) denote vector functions;
  * A, B, C denote constant matrices, and U = U(x), V = V(x) denote matrix functions.
This is presented first because all of the operations that apply to vector-by-vector differentiation apply directly to vector-by-scalar or scalar-by-vector differentiation simply by reducing the appropriate vector in the numerator or denominator to a scalar.
Condition | Expression | Numerator layout, i.e. by y and xᵀ | Denominator layout, i.e. by yᵀ and x |
---|---|---|---|
a is not a function of x | |||
A is not a function of x | |||
A is not a function of x | |||
a is not a function of x, u = u(x) | |||
v = v(x), a is not a function of x | |||
v = v(x), u = u(x) | |||
A is not a function of x, u = u(x) | |||
u = u(x), v = v(x) | |||
u = u(x) | |||
u = u(x) |
The fundamental identities are placed above the thick black line.
Condition | Expression | Numerator layout, i.e. by xᵀ; result is row vector | Denominator layout, i.e. by x; result is column vector |
---|---|---|---|
a is not a function of x | [nb 1] | [nb 1] | |
a is not a function of x, u = u(x) | |||
u = u(x), v = v(x) | |||
u = u(x), v = v(x) | |||
u = u(x) | |||
u = u(x) | |||
u = u(x), v = v(x) | in numerator layout | in denominator layout | |
u = u(x), v = v(x), A is not a function of x | in numerator layout | in denominator layout | |
, the Hessian matrix [3] | |||
a is not a function of x | |||
A is not a function of x b is not a function of x | |||
A is not a function of x | |||
A is not a function of x A is symmetric | |||
A is not a function of x | |||
A is not a function of x A is symmetric | |||
a is not a function of x, u = u(x) | in numerator layout | in denominator layout | |
a, b are not functions of x | |||
A, b, C, D, e are not functions of x | |||
a is not a function of x |
Condition | Expression | Numerator layout, i.e. by y, result is column vector | Denominator layout, i.e. by yᵀ, result is row vector |
---|---|---|---|
a is not a function of x | [nb 1] | ||
a is not a function of x, u = u(x) | |||
A is not a function of x, u = u(x) | |||
u = u(x) | |||
u = u(x), v = v(x) | |||
u = u(x), v = v(x) | |||
u = u(x) | |||
Assumes consistent matrix layout; see below. | |||
u = u(x) | |||
Assumes consistent matrix layout; see below. | |||
U = U(x), v = v(x) |
NOTE: The formulas involving the vector-by-vector derivatives and (whose outputs are matrices) assume the matrices are laid out consistent with the vector layout, i.e. numerator-layout matrix when numerator-layout vector and vice versa; otherwise, transpose the vector-by-vector derivatives.
Note that exact equivalents of the scalar product rule and chain rule do not exist when applied to matrix-valued functions of matrices. However, the product rule of this sort does apply to the differential form (see below), and this is the way to derive many of the identities below involving the trace function, combined with the fact that the trace function allows transposing and cyclic permutation, i.e.:

tr(A) = tr(Aᵀ), tr(ABCD) = tr(BCDA) = tr(CDAB) = tr(DABC).
For example, to compute
Therefore,
(For the last step, see the Conversion from differential to derivative form section.)
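Both the cyclic-permutation property and a trace-derivative identity obtained from the differential form can be checked numerically. This sketch (NumPy assumed) verifies tr(AXB) = tr(BAX) = tr(XBA), and then checks that d tr(AXB) = tr(BA dX), i.e. that the numerator-layout derivative of tr(AXB) with respect to X is BA, by differencing a single entry.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))

# Trace is invariant under cyclic permutation of the factors.
t1 = np.trace(A @ X @ B)
t2 = np.trace(B @ A @ X)
t3 = np.trace(X @ B @ A)

# d tr(AXB) = tr(BA dX) gives numerator-layout derivative BA, whose
# (i, j) entry is d tr(AXB) / dX_ji. Check the partial wrt X_21
# by a central difference: it should equal (B @ A)[1, 2].
eps = 1e-6
E = np.zeros_like(X)
E[2, 1] = eps
fd = (np.trace(A @ (X + E) @ B) - np.trace(A @ (X - E) @ B)) / (2 * eps)
exact = (B @ A)[1, 2]
```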
Condition | Expression | Numerator layout, i.e. by Xᵀ | Denominator layout, i.e. by X |
---|---|---|---|
a is not a function of X | [nb 2] | [nb 2] | |
a is not a function of X, u = u(X) | |||
u = u(X), v = v(X) | |||
u = u(X), v = v(X) | |||
u = u(X) | |||
u = u(X) | |||
U = U(X) | [3] | ||
Both forms assume numerator layout for i.e. mixed layout if denominator layout for X is being used. | |||
a and b are not functions of X | |||
a and b are not functions of X | |||
a, b and C are not functions of X | |||
a, b and C are not functions of X | |||
U = U(X), V = V(X) | |||
a is not a function of X, U = U(X) | |||
g(X) is any polynomial with scalar coefficients, or any matrix function defined by an infinite polynomial series (e.g. e^X, sin(X), cos(X), ln(X), etc. using a Taylor series); g(x) is the equivalent scalar function, g′(x) is its derivative, and g′(X) is the corresponding matrix function |||
A is not a function of X | [4] | ||
A is not a function of X | [3] | ||
A is not a function of X | [3] | ||
A is not a function of X | [3] | ||
A, B are not functions of X | |||
A, B, C are not functions of X | |||
n is a positive integer | [3] | ||
A is not a function of X, n is a positive integer | [3] | ||
[3] | |||
[3] | |||
[5] | |||
a is not a function of X | [3] [nb 3] | ||
A, B are not functions of X | [3] | ||
n is a positive integer | [3] | ||
(see pseudo-inverse) | [3] | ||
(see pseudo-inverse) | [3] | ||
A is not a function of X, X is square and invertible | |||
A is not a function of X, X is non-square, A is symmetric | |||
A is not a function of X, X is non-square, A is non-symmetric |
Condition | Expression | Numerator layout, i.e. by Y |
---|---|---|
U = U(x) | ||
A, B are not functions of x, U = U(x) | ||
U = U(x), V = V(x) | ||
U = U(x), V = V(x) | ||
U = U(x), V = V(x) | ||
U = U(x), V = V(x) | ||
U = U(x) | ||
U = U(x,y) | ||
A is not a function of x, g(X) is any polynomial with scalar coefficients, or any matrix function defined by an infinite polynomial series (e.g. e^X, sin(X), cos(X), ln(X), etc.); g(x) is the equivalent scalar function, g′(x) is its derivative, and g′(X) is the corresponding matrix function ||
A is not a function of x |
Condition | Expression | Any layout (assumes dot product ignores row vs. column layout) |
---|---|---|
u = u(x) | ||
u = u(x), v = v(x) |
Condition | Expression | Consistent numerator layout, i.e. by Y and Xᵀ | Mixed layout, i.e. by Y and X |
---|---|---|---|
U = U(x) | |||
U = U(x) | |||
U = U(x) | |||
U = U(x) | |||
A is not a function of x, g(X) is any polynomial with scalar coefficients, or any matrix function defined by an infinite polynomial series (e.g. e^X, sin(X), cos(X), ln(X), etc.); g(x) is the equivalent scalar function, g′(x) is its derivative, and g′(X) is the corresponding matrix function. |||
A is not a function of x |
It is often easier to work in differential form and then convert back to normal derivatives. This only works well using the numerator layout. In these rules, a is a scalar.
Condition | Expression | Result (numerator layout) |
---|---|---|
A is not a function of X | ||
a is not a function of X | ||
(Kronecker product) | ||
(Hadamard product) | ||
(conjugate transpose) | ||
n is a positive integer | ||
X is diagonalizable |
In the last row, δᵢⱼ is the Kronecker delta and (Pₖ) is the set of orthogonal projection operators that project onto the k-th eigenvector of X. Q is the matrix of eigenvectors of X = QΛQ⁻¹, and λᵢ are the eigenvalues. The matrix function f(X) is defined in terms of the scalar function f(x) for diagonalizable matrices by f(X) = Σᵢ f(λᵢ) Pᵢ, where X = Σᵢ λᵢ Pᵢ with Pᵢ Pⱼ = δᵢⱼ Pᵢ.
To convert to normal derivative form, first convert it to one of the following canonical forms, and then use these identities:
Canonical differential form | Equivalent derivative form (numerator layout) |
---|---|
dy = a dx (scalar y, scalar x) | dy/dx = a |
dy = aᵀ dx (scalar y, vector x) | ∂y/∂x = aᵀ |
dy = tr(A dX) (scalar y, matrix X) | ∂y/∂X = A |
dy = a dx (vector y, scalar x) | ∂y/∂x = a |
dy = A dx (vector y, vector x) | ∂y/∂x = A |
dY = A dx (matrix Y, scalar x) | ∂Y/∂x = A |
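The conversion from differential to derivative form can be checked numerically. This sketch (NumPy assumed) takes y = tr(CX): its differential is dy = tr(C dX), so the canonical form predicts a numerator-layout derivative ∂y/∂X = C. The loop rebuilds that derivative entry by entry from finite differences, recalling that the numerator-layout (i, j) entry is ∂y/∂Xⱼᵢ.

```python
import numpy as np

rng = np.random.default_rng(2)
C = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))

# Build the numerator-layout derivative of y = tr(CX) entry by entry:
# (dy/dX)_{ij} = dy/dX_{ji}; dy = tr(C dX) predicts dy/dX = C.
eps = 1e-6
D = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[j, i] = eps                 # perturb the (j, i) entry of X
        D[i, j] = (np.trace(C @ (X + E)) - np.trace(C @ (X - E))) / (2 * eps)
```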
Matrix differential calculus is used in statistics and econometrics, particularly for the statistical analysis of multivariate distributions, especially the multivariate normal distribution and other elliptical distributions. [8] [9] [10]
It is used in regression analysis to compute, for example, the ordinary least squares regression formula for the case of multiple explanatory variables. [11] It is also used in random matrices, statistical moments, local sensitivity and statistical diagnostics. [12] [13]
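The ordinary least squares case is a compact worked example: setting the derivative of ||y − Xβ||² with respect to β to zero yields the normal equations XᵀXβ = Xᵀy, hence β̂ = (XᵀX)⁻¹Xᵀy. The sketch below (NumPy assumed; data are synthetic and noiseless so the recovery is exact) solves the normal equations directly.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 3))       # 50 observations, 3 explanatory variables
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true                      # noiseless response, for an exact check

# d/db ||y - X b||^2 = 0  =>  X^T X b = X^T y  (the normal equations)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

With noiseless data the estimate recovers beta_true to numerical precision; with noise added to y it would be the least-squares fit instead.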