Scaling (geometry)

Figure: Each iteration of the Sierpinski triangle contains triangles related to the next iteration by a scale factor of 1/2.

In affine geometry, uniform scaling (or isotropic scaling [1] ) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a scale factor that is the same in all directions. The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or reducing a photograph, or when creating a scale model of a building, car, airplane, etc.

More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling (anisotropic scaling) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the angles between lines parallel to the axes are preserved, but not all angles). It occurs, for example, when a faraway billboard is viewed from an oblique angle, or when the shadow of a flat object falls on a surface that is not parallel to it.

When the scale factor is larger than 1, (uniform or non-uniform) scaling is sometimes also called dilation or enlargement. When the scale factor is a positive number smaller than 1, scaling is sometimes also called contraction or reduction.

In the most general sense, a scaling includes the case in which the directions of scaling are not perpendicular. It also includes the case in which one or more scale factors are equal to zero (projection), and the case of one or more negative scale factors (a directional scaling by -1 is equivalent to a reflection).

Scaling is a linear transformation, and a special case of homothetic transformation (scaling about a point). A homothety whose center is not the origin, however, is not a linear transformation but an affine one.

Uniform scaling

A scale factor is a number which scales, or multiplies, some quantity. In the equation y = Cx, C is the scale factor for x. C is also the coefficient of x, and may be called the constant of proportionality of y to x. For example, doubling distances corresponds to a scale factor of two for distance, while cutting a cake in half results in pieces with a scale factor for volume of one half. The basic relation defining it is the ratio of image to preimage: scale factor = image / preimage.
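As a quick numerical illustration (a minimal sketch with made-up lengths), the scale factor can be recovered as the image length divided by the preimage length:

```python
preimage_length = 5.0   # length of a segment in the original figure (made-up value)
image_length = 12.5     # length of the corresponding segment in the image (made-up value)

C = image_length / preimage_length  # scale factor, as in y = C x
print(C)                            # -> 2.5
print(C * preimage_length)          # -> 12.5, recovering the image length
```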

In the field of measurements, the scale factor of an instrument is sometimes referred to as sensitivity. The ratio of any two corresponding lengths in two similar geometric figures is also called a scale.

Matrix representation

A scaling can be represented by a scaling matrix. To scale an object by a vector v = (vx, vy, vz), each point p = (px, py, pz) would need to be multiplied with this scaling matrix:

S_v = \begin{pmatrix} v_x & 0 & 0 \\ 0 & v_y & 0 \\ 0 & 0 & v_z \end{pmatrix}

As shown below, the multiplication will give the expected result:

S_v p = \begin{pmatrix} v_x & 0 & 0 \\ 0 & v_y & 0 \\ 0 & 0 & v_z \end{pmatrix} \begin{pmatrix} p_x \\ p_y \\ p_z \end{pmatrix} = \begin{pmatrix} v_x p_x \\ v_y p_y \\ v_z p_z \end{pmatrix}
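The effect of the matrix is easy to check numerically. The following sketch (using NumPy, with an arbitrarily chosen point and scale vector) builds the diagonal scaling matrix and applies it to a point:

```python
import numpy as np

def scaling_matrix(vx, vy, vz):
    """Return the 3x3 matrix that scales by (vx, vy, vz) along the coordinate axes."""
    return np.diag([vx, vy, vz])

p = np.array([2.0, 3.0, 4.0])      # an arbitrary point (px, py, pz)
S = scaling_matrix(2.0, 0.5, 1.0)  # scale vector v = (2, 0.5, 1), chosen for illustration

print(S @ p)  # -> [4.  1.5 4. ], i.e. (vx*px, vy*py, vz*pz)
```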

Such a scaling changes the diameter of an object by a factor between the smallest and the largest scale factor, the area by a factor between the smallest and the largest product of two scale factors, and the volume by the product of all three.

The scaling is uniform if and only if the scaling factors are equal (vx = vy = vz). If all except one of the scale factors are equal to 1, we have directional scaling.

In the case where vx = vy = vz = k, scaling increases the area of any surface by a factor of k² and the volume of any solid object by a factor of k³.

Scaling in arbitrary dimensions

In n-dimensional space \mathbb{R}^n, uniform scaling by a factor v is accomplished by scalar multiplication with v, that is, multiplying each coordinate of each point by v. As a special case of linear transformation, it can be achieved also by multiplying each point (viewed as a column vector) with a diagonal matrix whose entries on the diagonal are all equal to v, namely vI.
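As an illustration (a minimal sketch with arbitrarily chosen dimension, factor, and point), both routes give the same result:

```python
import numpy as np

n, v = 4, 3.0                           # dimension and uniform scale factor (illustrative values)
p = np.array([1.0, -2.0, 0.5, 4.0])     # an arbitrary point in R^n

scaled_directly = v * p                 # scalar multiplication
scaled_by_matrix = (v * np.eye(n)) @ p  # multiplication by the diagonal matrix v*I

assert np.allclose(scaled_directly, scaled_by_matrix)
print(scaled_directly)  # -> [ 3.  -6.   1.5 12. ]
```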

Non-uniform scaling is accomplished by multiplication with any symmetric matrix. The eigenvalues of the matrix are the scale factors, and the corresponding eigenvectors are the axes along which each scale factor applies. A special case is a diagonal matrix, with arbitrary numbers v_1, v_2, ..., v_n along the diagonal: the axes of scaling are then the coordinate axes, and the transformation scales along each axis i by the factor v_i.
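For a concrete (and deliberately simple) sketch, the eigendecomposition of an arbitrarily chosen symmetric matrix recovers the scale factors and the axes along which they act:

```python
import numpy as np

# A symmetric matrix chosen for illustration: it scales by 3 along (1, 1)/sqrt(2)
# and by 1 along (1, -1)/sqrt(2), so the scaling axes are not the coordinate axes.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

scale_factors, axes = np.linalg.eigh(A)  # eigenvalues and orthonormal eigenvectors
print(scale_factors)  # -> [1. 3.]
print(axes)           # columns are the directions along which each factor applies
```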

In uniform scaling with a non-zero scale factor, all non-zero vectors retain their direction (as seen from the origin), or all have their direction reversed, depending on the sign of the scaling factor. In non-uniform scaling only the vectors that belong to an eigenspace will retain their direction. A vector that is the sum of two or more non-zero vectors belonging to different eigenspaces will be tilted towards the eigenspace with the largest eigenvalue.

Using homogeneous coordinates

In projective geometry, often used in computer graphics, points are represented using homogeneous coordinates. To scale an object by a vector v = (vx, vy, vz), each homogeneous coordinate vector p = (px, py, pz, 1) would need to be multiplied with this projective transformation matrix:

S_v = \begin{pmatrix} v_x & 0 & 0 & 0 \\ 0 & v_y & 0 & 0 \\ 0 & 0 & v_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

As shown below, the multiplication will give the expected result:

S_v p = \begin{pmatrix} v_x & 0 & 0 & 0 \\ 0 & v_y & 0 & 0 \\ 0 & 0 & v_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \end{pmatrix} = \begin{pmatrix} v_x p_x \\ v_y p_y \\ v_z p_z \\ 1 \end{pmatrix}

Since the last component of a homogeneous coordinate can be viewed as the denominator of the other three components, a uniform scaling by a common factor s can also be accomplished by using this scaling matrix:

S_v = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \frac{1}{s} \end{pmatrix}

For each vector p = (px, py, pz, 1) we would have

S_v p = \begin{pmatrix} p_x \\ p_y \\ p_z \\ \frac{1}{s} \end{pmatrix}

which would be equivalent to

\begin{pmatrix} s p_x \\ s p_y \\ s p_z \\ 1 \end{pmatrix}
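The two homogeneous forms can be compared directly in a short sketch (NumPy, with an arbitrarily chosen point; the helper names are illustrative, not from any graphics API):

```python
import numpy as np

def scale_by_axes(vx, vy, vz):
    """4x4 homogeneous matrix scaling by (vx, vy, vz)."""
    return np.diag([vx, vy, vz, 1.0])

def scale_uniform_via_w(s):
    """4x4 homogeneous matrix scaling uniformly by s through the last component."""
    return np.diag([1.0, 1.0, 1.0, 1.0 / s])

p = np.array([2.0, 3.0, 4.0, 1.0])  # an arbitrary homogeneous point (px, py, pz, 1)

q = scale_uniform_via_w(2.0) @ p    # -> [2.  3.  4.  0.5]
print(q[:3] / q[3])                 # -> [4. 6. 8.], matching scale_by_axes(2, 2, 2) @ p
```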

Function dilation and contraction

Given a point P(x, y), the dilation associates it with the point P'(x', y') through the equations

x' = mx, \qquad y' = ny

for m, n \in \mathbb{R}^+.

Therefore, given a function y = f(x), the equation of the dilated function is

y = n f\!\left(\frac{x}{m}\right)
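A small sketch (Python, with np.sin as a stand-in for f and made-up factors) shows the dilated function in action:

```python
import numpy as np

def dilate(f, m, n):
    """Return the dilated function x -> n * f(x / m), for factors m, n > 0."""
    return lambda x: n * f(x / m)

f = np.sin                    # an arbitrary example function
g = dilate(f, m=2.0, n=3.0)   # stretched horizontally by 2 and vertically by 3

x = np.pi / 2
print(f(x))      # -> 1.0
print(g(2 * x))  # -> 3.0; the dilated graph attains n*f(x) at m*x
```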

Particular cases

If n = 1, the transformation is horizontal; when m > 1, it is a dilation, when m < 1, it is a contraction.

If m = 1, the transformation is vertical; when n > 1 it is a dilation, when n < 1, it is a contraction.

If m = 1/n or n = 1/m, the transformation is a squeeze mapping.

Footnotes

  1. Durand; Cutler. "Transformations" (PowerPoint). Massachusetts Institute of Technology. Retrieved 12 September 2008.
