An infinitesimal rotation matrix or differential rotation matrix is a matrix representing an infinitely small rotation.
While a rotation matrix is an orthogonal matrix $R^{\mathsf T} = R^{-1}$ representing an element of $SO(n)$ (the special orthogonal group), the differential of a rotation is a skew-symmetric matrix $A^{\mathsf T} = -A$ in the tangent space $\mathfrak{so}(n)$ (the special orthogonal Lie algebra), which is not itself a rotation matrix.
An infinitesimal rotation matrix has the form

$$ I + d\theta\, A, $$

where $I$ is the identity matrix, $d\theta$ is vanishingly small, and $A \in \mathfrak{so}(n)$.

For example, if $A = L_x$, representing an infinitesimal three-dimensional rotation about the x-axis, a basis element of $\mathfrak{so}(3)$:

$$ dL_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix}. $$
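The first-order agreement between $I + d\theta\,L_x$ and the exact rotation about the x-axis can be checked numerically. The following is a minimal sketch; the use of NumPy is an assumption, not part of the article:

```python
import numpy as np

# Infinitesimal rotation about the x-axis: I + dtheta * L_x,
# with L_x the so(3) basis element named in the text.
dtheta = 1e-6
L_x = np.array([[0.0, 0.0, 0.0],
                [0.0, 0.0, -1.0],
                [0.0, 1.0, 0.0]])
dL = np.eye(3) + dtheta * L_x

# Exact rotation about the x-axis by the same angle, for comparison.
c, s = np.cos(dtheta), np.sin(dtheta)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, c, -s],
              [0.0, s, c]])

# The two agree up to second-order terms in dtheta.
print(np.max(np.abs(dL - R)))  # on the order of dtheta**2
```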
The computation rules for infinitesimal rotation matrices are as usual except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. [1] It turns out that the order in which infinitesimal rotations are applied is irrelevant.
An infinitesimal rotation matrix is a skew-symmetric perturbation of the identity: it differs from $I$ by a skew-symmetric matrix whose entries are infinitesimal.

The shape of the matrix is as follows:

$$ dA = \begin{bmatrix} 1 & -d\phi_z(t) & d\phi_y(t) \\ d\phi_z(t) & 1 & -d\phi_x(t) \\ -d\phi_y(t) & d\phi_x(t) & 1 \end{bmatrix} $$

Associated to an infinitesimal rotation matrix $dA$ is an infinitesimal rotation tensor $d\Phi(t) = dA(t) - I$:

$$ d\Phi(t) = \begin{bmatrix} 0 & -d\phi_z(t) & d\phi_y(t) \\ d\phi_z(t) & 0 & -d\phi_x(t) \\ -d\phi_y(t) & d\phi_x(t) & 0 \end{bmatrix} $$

Dividing it by the time difference $dt$ yields the angular velocity tensor:

$$ \Omega = \frac{d\Phi(t)}{dt} = \begin{bmatrix} 0 & -\omega_z(t) & \omega_y(t) \\ \omega_z(t) & 0 & -\omega_x(t) \\ -\omega_y(t) & \omega_x(t) & 0 \end{bmatrix} $$
These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals.[2] To understand what this means, consider

$$ dA_{\mathbf{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix}. $$

First, test the orthogonality condition, $Q^{\mathsf T} Q = I$. The product is

$$ dA_{\mathbf{x}}^{\mathsf T}\, dA_{\mathbf{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 + d\theta^2 & 0 \\ 0 & 0 & 1 + d\theta^2 \end{bmatrix}, $$

differing from the identity matrix by second-order infinitesimals, which are discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix.
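A quick numeric illustration of this first-order orthogonality (a sketch; NumPy is assumed):

```python
import numpy as np

# An infinitesimal rotation about the x-axis, as in the text.
dtheta = 1e-4
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, -dtheta],
              [0.0, dtheta, 1.0]])

# Q^T Q differs from the identity only by second-order terms.
err = np.max(np.abs(Q.T @ Q - np.eye(3)))
print(err)  # equals dtheta**2, a second-order infinitesimal
```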
Next, examine the square of the matrix,

$$ dA_{\mathbf{x}}^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 - d\theta^2 & -2\,d\theta \\ 0 & 2\,d\theta & 1 - d\theta^2 \end{bmatrix}. $$

Again discarding second-order effects, note that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation,

$$ dA_{\mathbf{y}} = \begin{bmatrix} 1 & 0 & d\phi \\ 0 & 1 & 0 \\ -d\phi & 0 & 1 \end{bmatrix}. $$

Compare the products $dA_{\mathbf{x}}\, dA_{\mathbf{y}}$ and $dA_{\mathbf{y}}\, dA_{\mathbf{x}}$:

$$ dA_{\mathbf{x}}\, dA_{\mathbf{y}} = \begin{bmatrix} 1 & 0 & d\phi \\ d\theta\, d\phi & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{bmatrix}, \qquad dA_{\mathbf{y}}\, dA_{\mathbf{x}} = \begin{bmatrix} 1 & d\theta\, d\phi & d\phi \\ 0 & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{bmatrix}. $$
Since $d\theta\, d\phi$ is second-order, we discard it: thus, to first order, multiplication of infinitesimal rotation matrices is commutative. In fact,

$$ dA_{\mathbf{x}}\, dA_{\mathbf{y}} = dA_{\mathbf{y}}\, dA_{\mathbf{x}}, $$

again to first order. In other words, the order in which infinitesimal rotations are applied is irrelevant.
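The near-commutativity can likewise be checked numerically; the difference of the two products has only entries of size $d\theta\,d\phi$ (a sketch, NumPy assumed):

```python
import numpy as np

dt, dp = 1e-4, 2e-4  # the small angles dtheta and dphi
dAx = np.array([[1, 0, 0], [0, 1, -dt], [0, dt, 1]], dtype=float)
dAy = np.array([[1, 0, dp], [0, 1, 0], [-dp, 0, 1]], dtype=float)

# The products differ only by second-order entries of size dt*dp.
diff = dAx @ dAy - dAy @ dAx
print(np.max(np.abs(diff)))  # equals dt*dp
```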
This useful fact makes, for example, derivation of rigid body rotation relatively simple. But one must always be careful to distinguish (the first-order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from Lie algebra elements. When contrasting the behavior of finite rotation matrices in the Baker–Campbell–Hausdorff formula with that of infinitesimal rotation matrices, where all the commutator terms will be second-order infinitesimals, one finds a bona fide vector space. Technically, this dismissal of any second-order terms amounts to group contraction.
Suppose we specify an axis of rotation by a unit vector $[x, y, z]$, and suppose we have an infinitely small rotation of angle $\Delta\theta$ about that vector. Expanding the rotation matrix as an infinite series and keeping only the first-order terms, the rotation matrix $\Delta R$ is represented as:

$$ \Delta R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{bmatrix} \Delta\theta = I + A\, \Delta\theta. $$

A finite rotation through angle $\theta$ about this axis may be seen as a succession of small rotations about the same axis. Approximating $\Delta\theta$ as $\theta/N$, where $N$ is a large number, a rotation of $\theta$ about the axis may be represented as:

$$ R = \left(I + \frac{A\theta}{N}\right)^{\! N} \approx e^{A\theta}. $$
It can be seen that Euler's theorem essentially states that all rotations may be represented in this form. The product $A\theta$ is the "generator" of the particular rotation, $A$ being the matrix associated with the axis vector $(x, y, z)$. This shows that the rotation matrix and the axis–angle representation are related by the exponential function.
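The limit $(I + A\theta/N)^N \to e^{A\theta}$ can be illustrated numerically; the following sketch picks the z-axis as a concrete example (the axis choice and NumPy are assumptions):

```python
import numpy as np

# Generator A for rotation about the z-axis, i.e. axis (x, y, z) = (0, 0, 1).
theta = 1.0
A = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

# Compose N small rotations: (I + A*theta/N)^N.
N = 1_000_000
R = np.linalg.matrix_power(np.eye(3) + A * theta / N, N)

# Exact finite rotation exp(A*theta) about the z-axis.
c, s = np.cos(theta), np.sin(theta)
R_exact = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(np.max(np.abs(R - R_exact)))  # small; shrinks like 1/N
```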
One can derive a simple expression for the generator G. One starts with an arbitrary plane[3] defined by a pair of perpendicular unit vectors a and b. In this plane one can choose an arbitrary vector x with perpendicular vector y. Solving for y in terms of x and substituting into the expression for a rotation in a plane yields the rotation matrix R, which includes the generator $G = b a^{\mathsf T} - a b^{\mathsf T}$.
To include vectors outside the plane in the rotation, one needs to modify the above expression for R by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function, $R = e^{\theta G}$.
Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group.
Connecting the Lie algebra to the Lie group is the exponential map, which is defined using the standard matrix exponential series for $e^A$.[4] For any skew-symmetric matrix $A$, $\exp(A)$ is always a rotation matrix.[lower-alpha 1]
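That the exponential of a skew-symmetric matrix is a rotation matrix can be checked with a truncated exponential series; this is a sketch (the truncation length and NumPy are assumptions), adequate for small matrices of moderate norm:

```python
import numpy as np

def expm_series(A, terms=30):
    """Truncated matrix exponential series sum_{n} A^n / n!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n      # A^n / n!, built incrementally
        result = result + term
    return result

# An arbitrary skew-symmetric matrix.
A = np.array([[0.0, -0.3, 0.2],
              [0.3, 0.0, -0.1],
              [-0.2, 0.1, 0.0]])

R = expm_series(A)
orth_err = np.max(np.abs(R.T @ R - np.eye(3)))
det_R = np.linalg.det(R)
print(orth_err)  # ~ 0: R is orthogonal
print(det_R)     # ~ 1: R is a rotation
```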
An important practical example is the 3 × 3 case. In the rotation group SO(3), it is shown that one can identify every $A \in \mathfrak{so}(3)$ with an Euler vector $\omega = \theta \mathbf{u}$, where $\mathbf{u} = (x, y, z)$ is a unit magnitude vector.

By the properties of the identification $\mathfrak{so}(3) \cong \mathbb{R}^3$, $\mathbf{u}$ is in the null space of $A$. Thus, $\mathbf{u}$ is left invariant by $\exp(A)$ and is hence a rotation axis.
Using Rodrigues' rotation formula in matrix form with the angle split as $\theta = \tfrac{\theta}{2} + \tfrac{\theta}{2}$, together with standard double-angle formulae, one obtains

$$ \exp(A) = I + 2\cos\tfrac{\theta}{2}\sin\tfrac{\theta}{2}\,\frac{A}{\theta} + 2\sin^2\tfrac{\theta}{2}\,\frac{A^2}{\theta^2}. $$

This is the matrix for a rotation around axis $\mathbf{u}$ by the angle $\theta$ in half-angle form. For full detail, see exponential map SO(3).
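For comparison, here is Rodrigues' formula in its standard full-angle form, $R = I + \sin\theta\, K + (1-\cos\theta)\, K^2$, which the double-angle identities convert to the half-angle form; the sketch below (NumPy assumed) checks that the axis $\mathbf{u}$ is left invariant:

```python
import numpy as np

def rodrigues(u, theta):
    # R = I + sin(theta) K + (1 - cos(theta)) K^2, with K the
    # cross-product (skew-symmetric) matrix of the unit axis u.
    x, y, z = u
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

u = np.array([1.0, 2.0, 2.0]) / 3.0  # a unit axis
R = rodrigues(u, 0.7)

print(np.max(np.abs(R @ u - u)))  # ~ 0: the axis u is invariant
```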
Notice that for infinitesimal angles second-order terms can be ignored, leaving $\exp(A) = I + A$.
Skew-symmetric matrices over the field of real numbers form the tangent space to the real orthogonal group at the identity matrix; formally, the special orthogonal Lie algebra. In this sense, then, skew-symmetric matrices can be thought of as infinitesimal rotations.
Another way of saying this is that the space of skew-symmetric matrices forms the Lie algebra $\mathfrak{o}(n)$ of the Lie group $O(n)$. The Lie bracket on this space is given by the commutator:

$$ [A, B] = AB - BA. $$
It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric:

$$ [A, B]^{\mathsf T} = B^{\mathsf T} A^{\mathsf T} - A^{\mathsf T} B^{\mathsf T} = (-B)(-A) - (-A)(-B) = BA - AB = -[A, B]. $$
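A numeric spot-check of this closure property (a sketch; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
M1 = rng.standard_normal((4, 4))
M2 = rng.standard_normal((4, 4))
A = M1 - M1.T  # skew-symmetric
B = M2 - M2.T  # skew-symmetric

# The commutator [A, B] = AB - BA is again skew-symmetric,
# so C + C^T vanishes.
C = A @ B - B @ A
print(np.max(np.abs(C + C.T)))  # ~ 0
```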
The matrix exponential of a skew-symmetric matrix $A$ is then an orthogonal matrix $R$:

$$ R = \exp(A) = \sum_{n=0}^{\infty} \frac{A^n}{n!}. $$
The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that contains the identity element. In the case of the Lie group $O(n)$, this connected component is the special orthogonal group $SO(n)$, consisting of all orthogonal matrices with determinant 1. So $R = \exp(A)$ will have determinant +1. Moreover, since the exponential map of a connected compact Lie group is always surjective, it turns out that every orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix. In the particular important case of dimension $n = 2$, the exponential representation for an orthogonal matrix reduces to the well-known polar form of a complex number of unit modulus. Indeed, a special orthogonal matrix $R$ has the form

$$ R = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}, $$

with $a^2 + b^2 = 1$. Therefore, putting $a = \cos\theta$ and $b = \sin\theta$, it can be written

$$ R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \exp\left(\theta \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\right), $$

which corresponds exactly to the polar form $\cos\theta + i\sin\theta = e^{i\theta}$ of a complex number of unit modulus.
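The 2-D correspondence with the polar form can be verified by summing the exponential series for $\theta J$, with $J$ the generator above (a sketch; NumPy assumed):

```python
import math

import numpy as np

# 2-D case: exp(theta * J), with J the generator [[0, -1], [1, 0]],
# reproduces the rotation matrix, mirroring e^{i theta}.
theta = 0.4
J = np.array([[0.0, -1.0], [1.0, 0.0]])

# Truncated matrix exponential series (plenty of terms at this size).
R = sum(np.linalg.matrix_power(theta * J, n) / math.factorial(n)
        for n in range(25))

R_polar = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
print(np.max(np.abs(R - R_polar)))  # ~ 0
```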
The exponential representation of an orthogonal matrix of order $n$ can also be obtained starting from the fact that in dimension $n$ any special orthogonal matrix $R$ can be written as $R = Q S Q^{\mathsf T}$, where $Q$ is orthogonal and $S$ is a block diagonal matrix with blocks of order 2, plus one of order 1 if $n$ is odd; since each single block of order 2 is also an orthogonal matrix, it admits an exponential form. Correspondingly, the matrix $S$ can be written as the exponential of a skew-symmetric block matrix $\Sigma$ of the form above, $S = \exp(\Sigma)$, so that $R = Q \exp(\Sigma) Q^{\mathsf T} = \exp(Q \Sigma Q^{\mathsf T})$, the exponential of the skew-symmetric matrix $Q \Sigma Q^{\mathsf T}$. Conversely, the surjectivity of the exponential map, together with the above-mentioned block-diagonalization for skew-symmetric matrices, implies the block-diagonalization for orthogonal matrices.
In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices that are traceless, Hermitian, involutory and unitary. Usually indicated by the Greek letter sigma, they are occasionally denoted by tau when used in connection with isospin symmetries.
In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors.
In advanced linear algebra, an invertible complex square matrix U is unitary if its matrix inverse $U^{-1}$ equals its conjugate transpose $U^*$, that is, if $U^* U = U U^* = I$, where $I$ is the identity matrix.
In mechanics and geometry, the 3D rotation group, often denoted SO(3), is the group of all rotations about the origin of three-dimensional Euclidean space under the operation of composition.
In mathematics, particularly in linear algebra, a skew-symmetric matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition $A^{\mathsf T} = -A$.
In physics and mathematics, the Lorentz group is the group of all Lorentz transformations of Minkowski spacetime, the classical and quantum setting for all (non-gravitational) physical phenomena. The Lorentz group is named for the Dutch physicist Hendrik Lorentz.
Rotation in mathematics is a concept originating in geometry. Any rotation is a motion of a certain space that preserves at least one point. It can describe, for example, the motion of a rigid body around a fixed point. Rotation can have a sign (as in the sign of an angle): a clockwise rotation is a negative magnitude so a counterclockwise turn has a positive magnitude. A rotation is different from other types of motions: translations, which have no fixed points, and (hyperplane) reflections, each of them having an entire (n − 1)-dimensional flat of fixed points in a n-dimensional space.
In mathematics, the circle group, denoted by $\mathbb{T}$ or $S^1$, is the multiplicative group of all complex numbers with absolute value 1, that is, the unit circle in the complex plane, or simply the unit complex numbers.
In Euclidean geometry, two-dimensional rotations and reflections are two kinds of Euclidean plane isometries which are related to one another.
In linear algebra, linear transformations can be represented by matrices. If $T$ is a linear transformation mapping $\mathbb{R}^n$ to $\mathbb{R}^m$ and $\mathbf{x}$ is a column vector with $n$ entries, then $T(\mathbf{x}) = A\mathbf{x}$ for some $m \times n$ matrix $A$, called the transformation matrix of $T$. Note that $A$ has $m$ rows and $n$ columns, whereas the transformation $T$ is from $\mathbb{R}^n$ to $\mathbb{R}^m$. There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors.
In quantum mechanics and computing, the Bloch sphere is a geometrical representation of the pure state space of a two-level quantum mechanical system (qubit), named after the physicist Felix Bloch.
In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, with the convention of counterclockwise-positive angles, the matrix

$$ R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} $$

rotates points in the xy plane counterclockwise through an angle $\theta$ about the origin.
In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed, is equivalent to a single rotation about some axis that runs through the fixed point. It also means that the composition of two rotations is also a rotation. Therefore the set of rotations has a group structure, known as a rotation group.
In mathematics, the Cayley transform, named after Arthur Cayley, is any of a cluster of related things. As originally described by Cayley (1846), the Cayley transform is a mapping between skew-symmetric matrices and special orthogonal matrices. The transform is a homography used in real analysis, complex analysis, and quaternionic analysis. In the theory of Hilbert spaces, the Cayley transform is a mapping between linear operators.
In the theory of three-dimensional rotation, Rodrigues' rotation formula, named after Olinde Rodrigues, is an efficient algorithm for rotating a vector in space, given an axis and angle of rotation. By extension, this can be used to transform all three basis vectors to compute a rotation matrix in SO(3), the group of all rotation matrices, from an axis–angle representation. In terms of Lie theory, the Rodrigues' formula provides an algorithm to compute the exponential map from the Lie algebra so(3) to its Lie group SO(3).
In cartography, a Tissot's indicatrix is a mathematical contrivance presented by French mathematician Nicolas Auguste Tissot in 1859 and 1871 in order to characterize local distortions due to map projection. It is the geometry that results from projecting a circle of infinitesimal radius from a curved geometric model, such as a globe, onto a map. Tissot proved that the resulting diagram is an ellipse whose axes indicate the two principal directions along which scale is maximal and minimal at that point on the map.
In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. That is, the matrix is idempotent if and only if . For this product to be defined, must necessarily be a square matrix. Viewed this way, idempotent matrices are idempotent elements of matrix rings.
In geometry, various formalisms exist to express a rotation in three dimensions as a mathematical transformation. In physics, this concept is applied to classical mechanics where rotational kinematics is the science of quantitative description of a purely rotational motion. The orientation of an object at a given instant is described with the same tools, as it is defined as an imaginary rotation from a reference placement in space, rather than an actually observed rotation from a previous placement in space.
In mathematics, the axis–angle representation parameterizes a rotation in a three-dimensional Euclidean space by two quantities: a unit vector e indicating the direction of an axis of rotation, and an angle of rotation θ describing the magnitude and sense of the rotation about the axis. Only two numbers, not three, are needed to define the direction of a unit vector e rooted at the origin because the magnitude of e is constrained. For example, the elevation and azimuth angles of e suffice to locate it in any particular Cartesian coordinate frame.
In physics and engineering, Davenport chained rotations are three chained intrinsic rotations about body-fixed specific axes. Euler rotations and Tait–Bryan rotations are particular cases of the Davenport general rotation decomposition. The angles of rotation are called Davenport angles because the general problem of decomposing a rotation in a sequence of three was studied first by Paul B. Davenport.