Line–line intersection

[Figure: Two intersecting lines]

In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection.


In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same direction, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection.

The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line.

Formulas

A necessary condition for two lines to intersect is that they are in the same plane, that is, that they are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see Skew lines § Testing for skewness.

Given two points on each line

First we consider the intersection of two lines L1 and L2 in two-dimensional space, with line L1 being defined by two distinct points (x1, y1) and (x2, y2), and line L2 being defined by two distinct points (x3, y3) and (x4, y4). [1]

The intersection P of lines L1 and L2 can be defined using determinants:
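    P_x = \frac{\begin{vmatrix} \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix} & x_1 - x_2 \\ \begin{vmatrix} x_3 & y_3 \\ x_4 & y_4 \end{vmatrix} & x_3 - x_4 \end{vmatrix}}{\begin{vmatrix} x_1 - x_2 & y_1 - y_2 \\ x_3 - x_4 & y_3 - y_4 \end{vmatrix}}, \qquad P_y = \frac{\begin{vmatrix} \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix} & y_1 - y_2 \\ \begin{vmatrix} x_3 & y_3 \\ x_4 & y_4 \end{vmatrix} & y_3 - y_4 \end{vmatrix}}{\begin{vmatrix} x_1 - x_2 & y_1 - y_2 \\ x_3 - x_4 & y_3 - y_4 \end{vmatrix}}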

The determinants can be written out as:
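    P_x = \frac{(x_1 y_2 - y_1 x_2)(x_3 - x_4) - (x_1 - x_2)(x_3 y_4 - y_3 x_4)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}

    P_y = \frac{(x_1 y_2 - y_1 x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 y_4 - y_3 x_4)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}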

When the two lines are parallel or coincident, the denominator is zero.
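
For illustration, the written-out formulas translate directly into code. The following Python sketch (the function name and tolerance are ours, not from the cited sources) returns the intersection of the two infinite lines, or None when the denominator vanishes:

    def line_intersection(p1, p2, p3, p4, eps=1e-12):
        """Intersection of the infinite lines through p1-p2 and p3-p4.

        A direct transcription of the written-out determinant formulas above;
        returns None when the denominator vanishes (parallel or coincident).
        """
        (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(denom) < eps:      # parallel or coincident lines
            return None
        d12 = x1 * y2 - y1 * x2   # the determinant |x1 y1; x2 y2|
        d34 = x3 * y4 - y3 * x4   # the determinant |x3 y3; x4 y4|
        px = (d12 * (x3 - x4) - (x1 - x2) * d34) / denom
        py = (d12 * (y3 - y4) - (y1 - y2) * d34) / denom
        return px, py

    # The lines through (0,0)-(1,1) and (0,1)-(1,0) meet at (0.5, 0.5).
    print(line_intersection((0, 0), (1, 1), (0, 1), (1, 0)))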

Given two points on each line segment

The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection with respect to the line segments, we can define lines L1 and L2 in terms of first-degree Bézier parameters:
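    L_1 = \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + t \begin{bmatrix} x_2 - x_1 \\ y_2 - y_1 \end{bmatrix}, \qquad L_2 = \begin{bmatrix} x_3 \\ y_3 \end{bmatrix} + u \begin{bmatrix} x_4 - x_3 \\ y_4 - y_3 \end{bmatrix}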

(where t and u are real numbers). The intersection point of the lines is found with one of the following values of t or u, where
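    t = \frac{(x_1 - x_3)(y_3 - y_4) - (y_1 - y_3)(x_3 - x_4)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}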

and
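    u = \frac{(x_1 - x_3)(y_1 - y_2) - (y_1 - y_3)(x_1 - x_2)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}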

with
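    (P_x, P_y) = \bigl(x_1 + t(x_2 - x_1),\; y_1 + t(y_2 - y_1)\bigr) = \bigl(x_3 + u(x_4 - x_3),\; y_3 + u(y_4 - y_3)\bigr)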

There will be an intersection if 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1. The intersection point falls within the first line segment if 0 ≤ t ≤ 1, and it falls within the second line segment if 0 ≤ u ≤ 1. These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point. [2]
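
A division-free test of these inequalities might look as follows in Python; this is a sketch of the idea behind Antonio's method [2], not his exact routine, and it treats collinear (denominator-zero) segments as non-intersecting:

    def segments_intersect(p1, p2, p3, p4):
        """Test whether segment p1-p2 intersects segment p3-p4, without division.

        Checks 0 <= t <= 1 and 0 <= u <= 1 by comparing the numerators of t
        and u against the common denominator.
        """
        (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if denom == 0:            # parallel or collinear segments
            return False
        t_num = (x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)
        u_num = (x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)
        if denom < 0:             # flip signs so the comparisons below hold
            denom, t_num, u_num = -denom, -t_num, -u_num
        return 0 <= t_num <= denom and 0 <= u_num <= denom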

Given two line equations

The x and y coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements.

Suppose that two lines have the equations y = ax + c and y = bx + d where a and b are the slopes (gradients) of the lines and where c and d are the y-intercepts of the lines. At the point where the two lines intersect (if they do), both y coordinates will be the same, hence the following equality:
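    ax + c = bx + d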

We can rearrange this expression in order to extract the value of x,
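    ax - bx = d - c

    x(a - b) = d - c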

and so,
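    x = \frac{d - c}{a - b}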

To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations, for example, into the first:
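    y = a\frac{d - c}{a - b} + c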

Hence, the point of intersection is
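    \left( \frac{d - c}{a - b},\; a\frac{d - c}{a - b} + c \right)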

Note that if a = b then the two lines are parallel and they do not intersect, unless c = d as well, in which case the lines are coincident and they intersect at every point.

Using homogeneous coordinates

By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple (x, y, w). The mapping from 3D to 2D coordinates is (x′, y′) = (x/w, y/w). We can convert 2D points to homogeneous coordinates by defining them as (x, y, 1).

Assume that we want to find the intersection of two infinite lines in 2-dimensional space, defined as a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0. We can represent these two lines in line coordinates as U1 = (a1, b1, c1) and U2 = (a2, b2, c2). The intersection P of two lines is then simply given by [3]
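    P = U_1 \times U_2 = (b_1 c_2 - b_2 c_1,\; a_2 c_1 - a_1 c_2,\; a_1 b_2 - a_2 b_1) = (a_p, b_p, c_p)

Dividing by the third coordinate gives the intersection in Cartesian coordinates, (x, y) = (ap/cp, bp/cp).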

If cp = 0, the lines do not intersect.
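
In code, the cross product does all the work. A minimal Python sketch using NumPy (the function name is ours):

    import numpy as np

    def homogeneous_intersection(u1, u2, eps=1e-12):
        """Intersection of lines a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0.

        u1 and u2 are the line coordinates (a, b, c); the homogeneous
        intersection is their cross product, dehomogenized by c_p.
        """
        ap, bp, cp = np.cross(u1, u2)
        if abs(cp) < eps:         # c_p = 0: parallel lines, no intersection
            return None
        return ap / cp, bp / cp

    # x - y = 0 and x + y - 1 = 0 meet at (0.5, 0.5).
    print(homogeneous_intersection((1, -1, 0), (1, 1, -1)))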

More than two lines

The intersection of two lines can be generalized to involve additional lines. The existence of, and an expression for, the solution of the n-line intersection problem are as follows.

In two dimensions

In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the ith equation (i = 1, …, n) as
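    a_{i1} x + a_{i2} y = b_i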

and stack these equations into matrix form as
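    A w = b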

where the ith row of the n × 2 matrix A is [ai1, ai2], w is the 2 × 1 vector [x, y]ᵀ, and the ith element of the column vector b is bi. If A has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix [A | b] is also 2, there exists a solution of the matrix equation and thus an intersection point of the n lines. The intersection point, if it exists, is given by
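    w = A^g b = (A^T A)^{-1} A^T b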

where Ag is the Moore–Penrose generalized inverse of A (which has the form shown because A has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of A is only 1, then there is no solution if the rank of the augmented matrix is 2, while all of the lines coincide with each other if its rank is 1.
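
As a sketch of this rank test and least-squares solution in Python with NumPy (names and tolerance are our own choices):

    import numpy as np

    def n_line_intersection(A, b, tol=1e-9):
        """Common point of the n lines A[i,0]*x + A[i,1]*y = b[i], if any.

        Compares rank(A) with rank([A | b]) as described above; when the
        system is consistent, solves it by least squares, which is
        equivalent to applying the generalized inverse.
        """
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        aug = np.column_stack([A, b])
        if np.linalg.matrix_rank(A, tol) != np.linalg.matrix_rank(aug, tol):
            return None           # inconsistent: the lines share no point
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w

    # x + y = 2, x - y = 0 and 2x + y = 3 all pass through (1, 1).
    print(n_line_intersection([[1, 1], [1, -1], [2, 1]], [2, 0, 3]))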

In three dimensions

The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows.

In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form
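    a_{i1} x + a_{i2} y + a_{i3} z = b_i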

Thus a set of n lines can be represented by 2n equations in the 3-dimensional coordinate vector w:
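    A w = b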

where now A is 2n × 3 and b is 2n × 1. As before, there is a unique intersection point if and only if A has full column rank and the augmented matrix [A | b] does not, and the unique intersection, if it exists, is given by
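    w = A^g b = (A^T A)^{-1} A^T b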

Nearest points to skew lines

[Figure: PQ, the shortest distance between two skew lines AB and CD, is perpendicular to both AB and CD]

In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense.

In two dimensions

In the two-dimensional case, first, represent line i as a point pi on the line and a unit normal vector n̂i, perpendicular to that line. That is, if x1 and x2 are points on line 1, then let p1 = x1 and let
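    \hat{n}_1 = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \frac{x_2 - x_1}{\| x_2 - x_1 \|}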

which is the unit vector along the line, rotated by a right angle.

The distance from a point x to the line (p, n̂) is given by
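    d\bigl(x, (p, \hat{n})\bigr) = \lvert (x - p) \cdot \hat{n} \rvert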

And so the squared distance from a point x to a line is
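    d\bigl(x, (p, \hat{n})\bigr)^2 = \bigl((x - p) \cdot \hat{n}\bigr)^2 = (x - p)^T \hat{n} \hat{n}^T (x - p)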

The sum of squared distances to many lines is the cost function:
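    E(x) = \sum_i \bigl((x - p_i) \cdot \hat{n}_i\bigr)^2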

This can be rearranged:
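    E(x) = \sum_i \Bigl( x^T \hat{n}_i \hat{n}_i^T x - 2\, x^T \hat{n}_i \hat{n}_i^T p_i + p_i^T \hat{n}_i \hat{n}_i^T p_i \Bigr)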

To find the minimum, we differentiate with respect to x and set the result equal to the zero vector:
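    \frac{\partial E}{\partial x} = 2 \Bigl(\sum_i \hat{n}_i \hat{n}_i^T\Bigr) x - 2 \sum_i \hat{n}_i \hat{n}_i^T p_i = 0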

so
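    \Bigl(\sum_i \hat{n}_i \hat{n}_i^T\Bigr) x = \sum_i \hat{n}_i \hat{n}_i^T p_i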

and so
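    x = \Bigl(\sum_i \hat{n}_i \hat{n}_i^T\Bigr)^{-1} \Bigl(\sum_i \hat{n}_i \hat{n}_i^T p_i\Bigr)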

In more than two dimensions

While n̂i is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that n̂i n̂iᵀ is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line, providing a seminorm on the distance between pi and another point that gives the distance to the line. In any number of dimensions, if v̂i is a unit vector along the ith line, then
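    \hat{n}_i \hat{n}_i^T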

becomes
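    I - \hat{v}_i \hat{v}_i^T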

where I is the identity matrix, and so [4]
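    x = \Bigl(\sum_i (I - \hat{v}_i \hat{v}_i^T)\Bigr)^{-1} \Bigl(\sum_i (I - \hat{v}_i \hat{v}_i^T) p_i\Bigr)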

General derivation

In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin ai and a unit direction vector v̂i. The square of the distance from a point p to one of the lines is given by the Pythagorean theorem:
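    d_i^2 = (p - a_i)^T (p - a_i) - \bigl((p - a_i)^T \hat{v}_i\bigr)^2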

where (p − ai)ᵀv̂i is the projection of p − ai on line i. The sum of squared distances to all the lines is
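    \sum_i d_i^2 = \sum_i \Bigl( (p - a_i)^T (p - a_i) - \bigl((p - a_i)^T \hat{v}_i\bigr)^2 \Bigr)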

To minimize this expression, we differentiate it with respect to p and set the result equal to zero:
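    \sum_i \Bigl( 2(p - a_i) - 2\, \hat{v}_i \hat{v}_i^T (p - a_i) \Bigr) = 0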

which results in
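    \Bigl(\sum_i (I - \hat{v}_i \hat{v}_i^T)\Bigr) p = \sum_i (I - \hat{v}_i \hat{v}_i^T) a_i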

where I is the identity matrix. This is a linear system Sp = C, with solution p = S+C, where S+ is the pseudo-inverse of S.
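
A short Python sketch of this construction with NumPy (function and variable names are ours):

    import numpy as np

    def nearest_point_to_lines(origins, directions):
        """Least-squares point closest to a set of lines in any dimension.

        Line i passes through origins[i] with direction directions[i];
        builds S = sum(I - v v^T) and C = sum((I - v v^T) a), then solves
        S p = C via the pseudo-inverse.
        """
        origins = np.asarray(origins, dtype=float)
        directions = np.asarray(directions, dtype=float)
        dim = origins.shape[1]
        S = np.zeros((dim, dim))
        C = np.zeros(dim)
        for a, v in zip(origins, directions):
            v = v / np.linalg.norm(v)          # normalize the direction
            M = np.eye(dim) - np.outer(v, v)   # I - v v^T
            S += M
            C += M @ a
        return np.linalg.pinv(S) @ C           # p = S+ C

    # The lines y = 5 and x = 5 meet at (5, 5).
    print(nearest_point_to_lines([[0, 5], [5, 0]], [[1, 0], [0, 1]]))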

Non-Euclidean geometry

[Figure: From left to right: Euclidean geometry, spherical geometry, and hyperbolic geometry]

In spherical geometry, any two great circles intersect. [5]

In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line. [5]


References

  1. Weisstein, Eric W. "Line-Line Intersection". MathWorld. Retrieved 2008-01-10.
  2. Antonio, Franklin (1992). "Chapter IV.6: Faster Line Segment Intersection". In Kirk, David (ed.). Graphics Gems III. Academic Press, Inc. pp. 199–202. ISBN 0-12-059756-X.
  3. Birchfield, Stanley (1998-04-23). "Homogeneous coordinates". robotics.stanford.edu. Archived from the original on 2000-09-29. Retrieved 2015-08-18.
  4. Traa, Johannes (2013). "Least-Squares Intersection of Lines" (PDF). cal.cs.illinois.edu. Archived from the original (PDF) on 2017-09-12. Retrieved 2018-08-30.
  5. "Exploring Hyperbolic Space" (PDF). math.berkeley.edu. Retrieved 2022-06-03.