Triangulation (computer vision)

In computer vision, triangulation refers to the process of determining a point in 3D space given its projections onto two or more images. To solve this problem it is necessary to know the parameters of the camera projection function from 3D to 2D for the cameras involved; in the simplest case these are represented by the camera matrices. Triangulation is sometimes also referred to as reconstruction or intersection.

The triangulation problem is in principle trivial. Since each point in an image corresponds to a line in 3D space, all points on that line are projected to the same point in the image. If a pair of corresponding points in two or more images can be found, it must be the case that they are the projection of a common 3D point x. The set of lines generated by the image points must intersect at x, and the algebraic formulation of the coordinates of x can be computed in a variety of ways, as presented below.
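
To make the line-of-sight idea concrete, the sketch below back-projects an image point to its 3D viewing ray. It is illustrative rather than part of the original text, and it assumes the common finite-camera decomposition C = [M | p4] with M invertible:

    import numpy as np

    def backproject_ray(C, y):
        """Back-project a homogeneous image point y to its 3D viewing ray.

        Assumes a finite camera C = [M | p4] with M invertible: the ray
        passes through the camera center -M^{-1} p4 with direction M^{-1} y.
        """
        M, p4 = C[:, :3], C[:, 3]
        center = -np.linalg.solve(M, p4)    # the camera's focal point
        direction = np.linalg.solve(M, y)   # direction of the line of sight
        return center, direction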

In practice, however, the coordinates of image points cannot be measured with arbitrary accuracy. Instead, various types of noise, such as geometric noise from lens distortion or interest point detection error, lead to inaccuracies in the measured image coordinates. As a consequence, the lines generated by the corresponding image points do not always intersect in 3D space. The problem, then, is to find a 3D point which optimally fits the measured image points. In the literature there are multiple proposals for how to define optimality and how to find the optimal 3D point. Since they are based on different optimality criteria, the various methods produce different estimates of the 3D point x when noise is involved.

Introduction

In the following, it is assumed that triangulation is performed on corresponding image points from two views generated by pinhole cameras.

[Figure: The ideal case of epipolar geometry. A 3D point x is projected onto two camera images through lines (green) which intersect with each camera's focal point, O1 and O2. The resulting image points are y1 and y2. The green lines intersect at x.]

[Figure: In practice, the image points y1 and y2 cannot be measured with arbitrary accuracy. Instead points y'1 and y'2 are detected and used for the triangulation. The corresponding projection lines (blue) do not, in general, intersect in 3D space and may also not intersect with point x.]

The image to the left illustrates the epipolar geometry of a pair of stereo cameras of pinhole model. A point x in 3D space is projected onto the respective image plane along a line (green) which goes through the camera's focal point, O1 and O2, resulting in the two corresponding image points y1 and y2. If y1 and y2 are given and the geometry of the two cameras is known, the two projection lines (green lines) can be determined, and it must be the case that they intersect at point x. Using basic linear algebra that intersection point can be determined in a straightforward way.

The image to the right shows the real case. The position of the image points y1 and y2 cannot be measured exactly. The reason is a combination of factors such as geometric distortion, for example lens distortion, which means that the 3D-to-2D mapping of the camera deviates from the pinhole camera model, and errors in the detection and localization of interest points.

As a consequence, the measured image points are y'1 and y'2 instead of y1 and y2. However, their projection lines (blue) do not have to intersect in 3D space or come close to x. In fact, these lines intersect if and only if y'1 and y'2 satisfy the epipolar constraint defined by the fundamental matrix. Given the measurement noise in y'1 and y'2 it is rather likely that the epipolar constraint is not satisfied and the projection lines do not intersect.
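
As a numerical illustration, the residual y'2ᵀ F y'1 is zero exactly when the two projection lines meet; under noise it generally is not. The fundamental matrix and points below are made-up values, not from the original text:

    import numpy as np

    # Hypothetical rank-2 fundamental matrix relating the two views.
    F = np.array([[ 0.0, -0.1,  0.2],
                  [ 0.1,  0.0, -0.3],
                  [-0.2,  0.3,  0.0]])

    y1p = np.array([120.0, 85.0, 1.0])   # detected point y'1 (homogeneous)
    y2p = np.array([130.0, 90.0, 1.0])   # detected point y'2 (homogeneous)

    # Exactly zero iff the epipolar constraint holds; noisy points give a
    # nonzero residual, so the back-projected lines miss each other in 3D.
    print("epipolar residual:", y2p @ F @ y1p)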

This observation leads to the problem which is solved in triangulation. Which 3D point xest is the best estimate of x given y'1 and y'2 and the geometry of the cameras? The answer is often found by defining an error measure which depends on xest and then minimizing this error. In the following sections, some of the various methods for computing xest presented in the literature are briefly described.

All triangulation methods produce xest = x in the case that y1 = y'1 and y2 = y'2, that is, when the epipolar constraint is satisfied (except for singular points, see below). It is what happens when the constraint is not satisfied that differs between the methods.

Properties

A triangulation method can be described in terms of a function τ such that

$$ \mathbf{x} \sim \tau(\mathbf{y}'_1, \mathbf{y}'_2, \mathbf{C}_1, \mathbf{C}_2) $$

where y'1, y'2 are the homogeneous coordinates of the detected image points and C1, C2 are the camera matrices. x is the homogeneous representation of the resulting 3D point. The ~ sign implies that τ is only required to produce a vector which is equal to x up to a multiplication by a non-zero scalar, since homogeneous vectors are involved.
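
A small helper (illustrative only, not from the original text) makes the meaning of the ~ sign concrete: two homogeneous vectors are considered equal when stacking them gives a rank-1 matrix, i.e. one is a non-zero scalar multiple of the other.

    import numpy as np

    def equal_up_to_scale(a, b, tol=1e-9):
        """True if homogeneous vectors a and b differ by a non-zero scalar."""
        stacked = np.vstack([a, b]).astype(float)
        return np.linalg.matrix_rank(stacked, tol=tol) == 1

    print(equal_up_to_scale([1, 2, 3, 1], [2, 4, 6, 2]))  # True: b = 2a
    print(equal_up_to_scale([1, 2, 3, 1], [1, 2, 3, 2]))  # False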

Before looking at the specific methods, that is, specific functions τ, there are some general concepts related to the methods that need to be explained. Which triangulation method is chosen for a particular problem depends to some extent on these characteristics.

Singularities

Some of the methods fail to correctly compute an estimate of x if it lies in a certain subset of the 3D space, corresponding to some combination of y'1, y'2, C1, C2. A point in this subset is then a singularity of the triangulation method. The reason for the failure can be that some equation system to be solved is under-determined or that the projective representation of xest becomes the zero vector for the singular points.

Invariance

In some applications, it is desirable that the triangulation is independent of the coordinate system used to represent 3D points; if the triangulation problem is formulated in one coordinate system and then transformed into another the resulting estimate xest should transform in the same way. This property is commonly referred to as invariance. Not every triangulation method assures invariance, at least not for general types of coordinate transformations.

For a homogeneous representation of 3D coordinates, the most general transformation is a projective transformation, represented by a 4 × 4 matrix T. If the homogeneous coordinates are transformed according to

$$ \bar{\mathbf{x}} \sim \mathbf{T}\,\mathbf{x} $$

then the camera matrices must transform as

$$ \bar{\mathbf{C}}_k \sim \mathbf{C}_k\,\mathbf{T}^{-1} $$

to produce the same homogeneous image coordinates

$$ \mathbf{y}_k \sim \bar{\mathbf{C}}_k\,\bar{\mathbf{x}} = \mathbf{C}_k\,\mathbf{x} $$

If the triangulation function τ is invariant to T then the following relation must be valid

$$ \bar{\mathbf{x}}_{\text{est}} \sim \mathbf{T}\,\mathbf{x}_{\text{est}} $$

from which follows that

$$ \tau(\mathbf{y}'_1, \mathbf{y}'_2, \mathbf{C}_1, \mathbf{C}_2) \sim \mathbf{T}^{-1}\,\tau(\mathbf{y}'_1, \mathbf{y}'_2, \mathbf{C}_1\,\mathbf{T}^{-1}, \mathbf{C}_2\,\mathbf{T}^{-1}) \quad \text{for all } \mathbf{y}'_1, \mathbf{y}'_2 $$

For each triangulation method, it can be determined if this last relation is valid. If it is, it may be satisfied only for a subset of the projective transformations, for example, rigid or affine transformations.
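
The transformation rules above can be checked numerically; the sketch below (with arbitrary made-up values) verifies that transforming the point by T and the cameras by T⁻¹ leaves the projected image coordinates unchanged:

    import numpy as np

    rng = np.random.default_rng(0)

    C1 = rng.standard_normal((3, 4))     # arbitrary 3x4 camera matrices
    C2 = rng.standard_normal((3, 4))
    x = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous 3D point
    T = rng.standard_normal((4, 4))      # generic projective transformation

    x_bar = T @ x                        # transformed 3D point
    C1_bar = C1 @ np.linalg.inv(T)       # transformed camera matrices
    C2_bar = C2 @ np.linalg.inv(T)

    # The homogeneous image coordinates are unchanged: C_k x = C_k_bar x_bar.
    print(np.allclose(C1 @ x, C1_bar @ x_bar))  # True
    print(np.allclose(C2 @ x, C2_bar @ x_bar))  # True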

Computational complexity

The function τ is only an abstract representation of a computation which, in practice, may be relatively complex. Some methods result in a τ which is a closed-form continuous function, while others need to be decomposed into a series of computational steps involving, for example, SVD or finding the roots of a polynomial. Yet another class of methods results in a τ which must rely on iterative estimation of some parameters. This means that both the computation time and the complexity of the operations involved may vary between the different methods.

Methods

Mid-point method

Each of the two image points y'1 and y'2 has a corresponding projection line (blue in the right image above), here denoted as L'1 and L'2, which can be determined given the camera matrices C1, C2. Let d be a distance function between a 3D line L and a 3D point x such that d(L, x) is the Euclidean distance between L and x. The midpoint method finds the point xest which minimizes

$$ d(\mathbf{L}'_1, \mathbf{x})^2 + d(\mathbf{L}'_2, \mathbf{x})^2 $$

It turns out that xest lies exactly at the middle of the shortest line segment which joins the two projection lines.
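
A minimal sketch of this method, assuming the two projection lines have already been extracted from the camera matrices as origin/direction pairs (the rays at the bottom are made-up values that happen to intersect exactly):

    import numpy as np

    def midpoint_triangulate(o1, d1, o2, d2):
        """Point minimizing d(L'1, x)^2 + d(L'2, x)^2 for two 3D lines.

        Each line is given by an origin o_k and a direction d_k. The result
        is the midpoint of the shortest segment joining the two lines.
        """
        w0 = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b        # zero for parallel lines (a singularity)
        s = (b * e - c * d) / denom  # parameter of closest point on line 1
        t = (a * e - b * d) / denom  # parameter of closest point on line 2
        return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

    o1, d1 = np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.1, 1.0])
    o2, d2 = np.array([1.0, 0.0, 0.0]), np.array([-0.4, 0.1, 1.0])
    print(midpoint_triangulate(o1, d1, o2, d2))  # ~ (0.429, 0.143, 1.429)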

Direct linear transformation
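
In the standard linear (DLT) formulation, each projection relation y'k ~ Ck x means that the vectors y'k and Ck x are parallel, so their cross product is zero; this gives linear equations in the unknown homogeneous point x, which are solved in a least-squares sense via SVD. A minimal sketch (not from the original text), assuming image points of the form (u, v, 1):

    import numpy as np

    def triangulate_dlt(y1, y2, C1, C2):
        """Linear triangulation: solve y'_k x (C_k X) = 0 for homogeneous X.

        y1, y2: image points (u, v, 1); C1, C2: 3x4 camera matrices.
        """
        A = np.vstack([
            y1[0] * C1[2] - C1[0],   # rows of the cross-product equations
            y1[1] * C1[2] - C1[1],   # for camera 1 ...
            y2[0] * C2[2] - C2[0],   # ... and for camera 2
            y2[1] * C2[2] - C2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                   # right singular vector, smallest singular value
        return X / X[3]              # dehomogenize (fails for points at infinity)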

Via the essential matrix

The problem to be solved there is how to compute (x1, x2, x3) given corresponding normalized image coordinates (y1, y2) and (y'1, y'2). Here the two camera coordinate systems are related by x' = R (x - t), and the normalized image coordinates are (y1, y2) = (x1/x3, x2/x3) and (y'1, y'2) = (x'1/x'3, x'2/x'3). If the essential matrix is known and the corresponding rotation and translation transformations have been determined, this algorithm (described in Longuet-Higgins' paper) provides a solution.

Let rk denote row k of the rotation matrix R:

$$ \mathbf{R} = \begin{pmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \mathbf{r}_3 \end{pmatrix} $$

Combining the above relations between 3D coordinates in the two coordinate systems and the mapping between 3D and 2D points described earlier gives

$$ y'_1 = \frac{x'_1}{x'_3} = \frac{\mathbf{r}_1 \cdot (\mathbf{x} - \mathbf{t})}{\mathbf{r}_3 \cdot (\mathbf{x} - \mathbf{t})} = \frac{\mathbf{r}_1 \cdot (x_3\,\mathbf{y} - \mathbf{t})}{\mathbf{r}_3 \cdot (x_3\,\mathbf{y} - \mathbf{t})} $$

where y = (y1, y2, 1)ᵀ, so that x = x3 y, or

$$ x_3 = \frac{(\mathbf{r}_1 - y'_1\,\mathbf{r}_3) \cdot \mathbf{t}}{(\mathbf{r}_1 - y'_1\,\mathbf{r}_3) \cdot \mathbf{y}} $$

Once x3 is determined, the other two coordinates can be computed as

$$ x_1 = y_1\,x_3, \qquad x_2 = y_2\,x_3 $$

The above derivation is not unique. It is also possible to start with an expression for y'2 and derive an expression for x3 according to

$$ x_3 = \frac{(\mathbf{r}_2 - y'_2\,\mathbf{r}_3) \cdot \mathbf{t}}{(\mathbf{r}_2 - y'_2\,\mathbf{r}_3) \cdot \mathbf{y}} $$

In the ideal case, when the camera maps the 3D points according to a perfect pinhole camera and the resulting 2D points can be detected without any noise, the two expressions for x3 are equal. In practice, however, they are not, and it may be advantageous to combine the two estimates of x3, for example, in terms of some sort of average.
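
A compact sketch of this computation (illustrative, not taken verbatim from Longuet-Higgins' paper), which evaluates both expressions for x3 and combines them with a simple average:

    import numpy as np

    def triangulate_lh(y, y_prime, R, t):
        """Triangulate from normalized image coordinates, given x' = R (x - t).

        y, y_prime: (y1, y2) in the unprimed and primed cameras.
        Returns (x1, x2, x3) in the unprimed camera's coordinate system.
        """
        yh = np.array([y[0], y[1], 1.0])   # homogeneous unprimed point
        r1, r2, r3 = R                     # rows of the rotation matrix

        a1 = r1 - y_prime[0] * r3          # x3 estimate from y'1 ...
        a2 = r2 - y_prime[1] * r3          # ... and from y'2
        x3 = 0.5 * ((a1 @ t) / (a1 @ yh) + (a2 @ t) / (a2 @ yh))

        return np.array([y[0] * x3, y[1] * x3, x3])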

There are also other types of extensions of the above computations which are possible. The derivation started with an expression of the primed image coordinates and derived 3D coordinates in the unprimed system. It is also possible to start with unprimed image coordinates and obtain primed 3D coordinates, which finally can be transformed into unprimed 3D coordinates. Again, in the ideal case the results should be equal to the above expressions, but in practice they may deviate.

A final remark relates to the fact that if the essential matrix is determined from corresponding image coordinates, which often is the case when 3D points are determined in this way, the translation vector t is known only up to an unknown positive scaling. As a consequence, the reconstructed 3D points, too, are undetermined with respect to a positive scaling.
