Rigid transformation

In mathematics, a rigid transformation (also called Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points. [1] [2] [3]

The rigid transformations include rotations, translations, reflections, and any sequence of these. Reflections are sometimes excluded from the definition of a rigid transformation by requiring that the transformation also preserve the handedness of objects in the Euclidean space. (A reflection would not preserve handedness; for instance, it would transform a left hand into a right hand.) To avoid ambiguity, a transformation that preserves handedness is known as a rigid motion, a Euclidean motion, or a proper rigid transformation.

In dimension two, a rigid motion is either a translation or a rotation. In dimension three, every rigid motion can be decomposed as the composition of a rotation and a translation, and is thus sometimes called a rototranslation. In dimension three, all rigid motions are also screw motions (this is Chasles' theorem).

In dimension at most three, any improper rigid transformation can be decomposed into an improper rotation followed by a translation, or into a sequence of reflections.

Any object will keep the same shape and size after a proper rigid transformation.

All rigid transformations are examples of affine transformations. The set of all (proper and improper) rigid transformations is a mathematical group called the Euclidean group, denoted E(n) for n-dimensional Euclidean spaces. The set of rigid motions is called the special Euclidean group, denoted SE(n).
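
As an aside on the group structure, here is a minimal Python/NumPy sketch (not from the source; the helper names make_rigid and compose, and the composition rule (R1, t1)∘(R2, t2) = (R1 R2, R1 t2 + t1), are worked out from the form T(v) = Rv + t given below): it composes two planar rigid motions and checks that the result is again a rigid motion.

    import numpy as np

    def make_rigid(theta, t):
        """Return (R, t) for a planar rotation by theta followed by translation t."""
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return R, np.asarray(t, dtype=float)

    def compose(a, b):
        """Compose two rigid motions: apply b first, then a."""
        Ra, ta = a
        Rb, tb = b
        return Ra @ Rb, Ra @ tb + ta   # (Ra, ta) o (Rb, tb) = (Ra Rb, Ra tb + ta)

    a = make_rigid(0.7, [1.0, -2.0])
    b = make_rigid(-1.3, [0.5, 3.0])
    R, t = compose(a, b)

    # The composite is again a rigid motion: R is orthogonal with determinant +1.
    assert np.allclose(R.T @ R, np.eye(2))
    assert np.isclose(np.linalg.det(R), 1.0)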

In kinematics, rigid motions in a 3-dimensional Euclidean space are used to represent displacements of rigid bodies. According to Chasles' theorem, every rigid motion (proper rigid transformation) can be expressed as a screw motion.

Formal definition

A rigid transformation is formally defined as a transformation that, when acting on any vector v, produces a transformed vector T(v) of the form

T(v) = Rv + t

where R^T = R^−1 (i.e., R is an orthogonal transformation), and t is a vector giving the translation of the origin.

A proper rigid transformation has, in addition,

det(R) = 1

which means that R does not produce a reflection, and hence it represents a rotation (an orientation-preserving orthogonal transformation). Indeed, when an orthogonal transformation matrix produces a reflection, its determinant is −1.
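
A minimal NumPy sketch of this definition (the helper names and the sample rotation are illustrative assumptions, not from the source): it builds a rotation R about the z-axis, applies T(v) = Rv + t, and verifies the conditions R^T = R^−1 and det(R) = 1.

    import numpy as np

    def rotation_z(theta):
        """3D rotation about the z-axis by angle theta (an example orthogonal matrix)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    R = rotation_z(0.4)
    t = np.array([2.0, -1.0, 0.5])

    def T(v):
        """Proper rigid transformation T(v) = Rv + t."""
        return R @ v + t

    # R is orthogonal: its transpose equals its inverse.
    assert np.allclose(R.T, np.linalg.inv(R))
    # R is proper (a rotation): det(R) = +1, so no reflection is involved.
    assert np.isclose(np.linalg.det(R), 1.0)

    print(T(np.array([1.0, 0.0, 0.0])))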

Distance formula

A measure of distance between points, or metric, is needed in order to confirm that a transformation is rigid. The Euclidean distance formula for R^n is the generalization of the Pythagorean theorem. The formula gives the distance squared between two points X and Y as the sum of the squares of the distances along the coordinate axes, that is

d(X, Y)^2 = (X1 − Y1)^2 + (X2 − Y2)^2 + ... + (Xn − Yn)^2 = (X − Y)·(X − Y),

where X = (X1, X2, ..., Xn) and Y = (Y1, Y2, ..., Yn), and the dot denotes the scalar product.

Using this distance formula, a rigid transformation g : R^n → R^n has the property

d(g(X), g(Y))^2 = d(X, Y)^2.
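
This property can be checked numerically. In the following sketch (purely illustrative; building R from a QR decomposition is one common way to obtain a proper orthogonal matrix, not a method from the source), a random rotation and translation are applied to a set of points and the squared pairwise distances are compared before and after:

    import numpy as np

    rng = np.random.default_rng(0)

    # A random proper orthogonal matrix R (det +1) via QR decomposition of a random matrix.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R = Q if np.linalg.det(Q) > 0 else -Q      # negating flips the sign of det in odd dimension
    t = rng.normal(size=3)

    def g(points):
        """Apply the rigid transformation g(v) = Rv + t to each row of `points`."""
        return points @ R.T + t

    def pairwise_sq_dists(P):
        """Matrix of squared Euclidean distances between all pairs of rows of P."""
        diff = P[:, None, :] - P[None, :, :]
        return np.einsum('ijk,ijk->ij', diff, diff)

    X = rng.normal(size=(5, 3))                # five sample points in R^3
    # Squared pairwise distances are unchanged by g.
    assert np.allclose(pairwise_sq_dists(X), pairwise_sq_dists(g(X)))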

Translations and linear transformations

A translation of a vector space adds a vector d to every vector in the space, which means it is the transformation

g(v) = v + d.

It is easy to show that this is a rigid transformation by showing that the distance between translated vectors equals the distance between the original vectors:

d(v + d, w + d)^2 = ((v + d) − (w + d))·((v + d) − (w + d)) = (v − w)·(v − w) = d(v, w)^2.
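
A two-line NumPy check of this computation (illustrative only, with arbitrary sample vectors):

    import numpy as np

    v, w, d = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), np.array([10.0, -7.0])
    # Translating both vectors by d leaves their separation, hence their distance, unchanged.
    assert np.allclose(np.linalg.norm((v + d) - (w + d)), np.linalg.norm(v - w))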

A linear transformation of a vector space, L : R^n → R^n, preserves linear combinations,

L(av + bw) = aL(v) + bL(w),

for all vectors v, w and scalars a, b.

A linear transformation L can be represented by a matrix, which means

L : v → [L]v,

where [L] is an n×n matrix.

A linear transformation is a rigid transformation if it satisfies the condition

d([L]v, [L]w)^2 = d(v, w)^2,

that is

([L]v − [L]w)·([L]v − [L]w) = (v − w)·(v − w).

Now use the fact that the scalar product of two vectors v·w can be written as the matrix operation v^T w, where the T denotes the matrix transpose, to obtain

(v − w)^T [L]^T [L] (v − w) = (v − w)^T (v − w).

Thus, the linear transformation L is rigid if its matrix satisfies the condition

[L]^T [L] = [I],

where [I] is the identity matrix. Matrices that satisfy this condition are called orthogonal matrices. This condition actually requires the columns of these matrices to be orthogonal unit vectors.
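
As a small numerical illustration of this condition (the helper name is_orthogonal is an assumption for the example), one can test whether a matrix is orthogonal by comparing [L]^T [L] with the identity; a rotation and a reflection pass, while a shear does not:

    import numpy as np

    def is_orthogonal(L, tol=1e-10):
        """Return True if L^T L equals the identity (columns are orthonormal)."""
        L = np.asarray(L, dtype=float)
        return np.allclose(L.T @ L, np.eye(L.shape[1]), atol=tol)

    theta = 0.3
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    reflection = np.array([[1.0, 0.0],
                           [0.0, -1.0]])
    shear = np.array([[1.0, 1.0],
                      [0.0, 1.0]])

    print(is_orthogonal(rotation))    # True  -> rigid
    print(is_orthogonal(reflection))  # True  -> rigid (but improper, see below)
    print(is_orthogonal(shear))       # False -> not rigid (distances are distorted)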

Matrices that satisfy this condition form a mathematical group under the operation of matrix multiplication called the orthogonal group of n×n matrices and denoted O(n).

Compute the determinant of the condition for an orthogonal matrix to obtain

det([L]^T [L]) = det([L])^2 = det([I]) = 1,

which shows that the matrix [L] can have a determinant of either +1 or −1. Orthogonal matrices with determinant −1 are reflections, and those with determinant +1 are rotations. Notice that the set of orthogonal matrices can be viewed as consisting of two manifolds in R^(n×n) separated by the set of singular matrices.

The set of rotation matrices is called the special orthogonal group, and denoted SO(n). It is an example of a Lie group because it has the structure of a manifold.
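
A brief sketch of this classification (the function name classify_orthogonal is illustrative): an orthogonal matrix is identified as a rotation, i.e. an element of SO(n), when its determinant is +1, and as an improper (reflection-type) transformation when it is −1.

    import numpy as np

    def classify_orthogonal(L, tol=1e-10):
        """Classify an orthogonal matrix by its determinant: +1 rotation, -1 reflection."""
        L = np.asarray(L, dtype=float)
        if not np.allclose(L.T @ L, np.eye(L.shape[1]), atol=tol):
            return "not orthogonal"
        return "rotation (SO(n))" if np.linalg.det(L) > 0 else "reflection (det = -1)"

    theta = np.pi / 6
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    M = np.array([[0.0, 1.0],
                  [1.0, 0.0]])   # swaps the coordinate axes: an improper transformation

    print(classify_orthogonal(R))   # rotation (SO(n))
    print(classify_orthogonal(M))   # reflection (det = -1)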


References

  1. O. Bottema & B. Roth (1990). Theoretical Kinematics. Dover Publications. Preface. ISBN 0-486-66346-9.
  2. J. M. McCarthy (2013). Introduction to Theoretical Kinematics. MDA Press. Preface.
  3. Galarza, Ana Irene Ramírez; Seade, José (2007). Introduction to Classical Geometries. Birkhäuser.