Chessboard detection

Chessboards arise frequently in computer vision theory and practice because their highly structured geometry is well suited to algorithmic detection and processing. The uses of chessboards in computer vision fall into two main areas: camera calibration and feature extraction. This article provides a unified discussion of the role that chessboards play in the canonical methods from these two areas, including references to the seminal literature, examples, and pointers to software implementations.

Chessboard camera calibration

A classical problem in computer vision is three-dimensional (3D) reconstruction, where one seeks to infer 3D structure about a scene from two-dimensional (2D) images of it. [1] Practical cameras are complex devices, and photogrammetry is needed to model the relationship between image sensor measurements and the 3D world. In the standard pinhole camera model, one models the relationship between world coordinates $\mathbf{X} \in \mathbb{P}^3$ and image (pixel) coordinates $\mathbf{x} \in \mathbb{P}^2$ via the perspective transformation

$$\mathbf{x} \sim \mathbf{P}\mathbf{X},$$

where $\mathbb{P}^n$ is the projective space of dimension $n$ and $\mathbf{P}$ is the $3 \times 4$ camera matrix, defined up to scale.
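
To make the model concrete, the following base-MATLAB sketch projects a single world point through a hypothetical camera matrix $\mathbf{P} = K[R \mid t]$; all numeric values below are made up for illustration.

% Hypothetical pinhole projection: all values are illustrative only
K  = [800 0 320; 0 800 240; 0 0 1];   % intrinsics: focal lengths and principal point
Rt = [eye(3), [0; 0; 5]];             % extrinsics: identity rotation, 5-unit translation
P  = K * Rt;                          % 3x4 camera matrix
X  = [0.1; -0.2; 1; 1];               % world point in homogeneous coordinates
x  = P * X;                           % homogeneous image coordinates
pixel = x(1:2) / x(3);                % dehomogenize to obtain pixel coordinates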

In this setting, camera calibration is the process of estimating the parameters of the camera matrix $\mathbf{P}$ in the perspective model. Camera calibration is an important step in the computer vision pipeline because many subsequent algorithms require knowledge of camera parameters as input. [2] Chessboards are often used during camera calibration because they are simple to construct, and their planar grid structure defines many natural interest points in an image. The following two methods are classic calibration techniques that often employ chessboards.

Direct linear transformation

Direct linear transformation (DLT) calibration uses correspondences between world points and camera image points to estimate camera parameters. In particular, DLT calibration exploits the fact that the perspective pinhole camera model defines a set of similarity relations that can be solved via the direct linear transformation algorithm. [3] To employ this approach, one requires accurate coordinates of a non-degenerate set of points in 3D space. A common way to achieve this is to construct a camera calibration rig (example below) built from three mutually perpendicular chessboards. Since the corners of each square are equidistant, it is straightforward to compute the 3D coordinates of each corner given the width of each square. The advantage of DLT calibration is its simplicity; arbitrary cameras can be calibrated by solving a single homogeneous linear system. However, the practical use of DLT calibration is limited by the necessity of a 3D calibration rig and the fact that extremely accurate 3D coordinates are required to avoid numerical instability. [1]
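
A minimal base-MATLAB sketch of this estimation step appears below; the function name dlt_calibrate and its argument layout are hypothetical, and the rig corners are assumed to supply at least six non-degenerate correspondences (each correspondence contributes two linear equations, while the 3x4 matrix has eleven degrees of freedom).

% Minimal DLT sketch (hypothetical helper): estimate the 3x4 camera matrix
% from world points Xw (n-by-3) and image points x (n-by-2), with n >= 6.
function P = dlt_calibrate(Xw, x)
    n = size(Xw, 1);
    A = zeros(2*n, 12);
    for i = 1:n
        Xh = [Xw(i,:) 1];                      % homogeneous world point
        u = x(i,1); v = x(i,2);
        % Each correspondence contributes two rows that are linear in the
        % (row-major) entries of P
        A(2*i-1,:) = [Xh, zeros(1,4), -u*Xh];
        A(2*i,  :) = [zeros(1,4), Xh, -v*Xh];
    end
    % Solve the homogeneous system A*p = 0: p is the right singular vector
    % associated with the smallest singular value of A
    [~, ~, V] = svd(A);
    P = reshape(V(:,end), 4, 3)';              % rows of P, recovered up to scale
end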

Example: calibration rig
[Image: 3D calibration rig built from three mutually perpendicular chessboards]

Multiplane calibration

Multiplane calibration is a variant of camera auto-calibration that allows one to compute the parameters of a camera from two or more views of a planar surface. The seminal work in multiplane calibration is due to Zhang. [4] Zhang's method calibrates cameras by solving a particular homogeneous linear system that captures the homographic relationships between multiple perspective views of the same plane. This multiview approach is popular because, in practice, it is more natural to capture multiple views of a single planar surface - like a chessboard - than to construct a precise 3D calibration rig, as required by DLT calibration. The following figures demonstrate a practical application of multiplane camera calibration from multiple views of a chessboard. [5]
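
As a sketch of what this workflow looks like in code, the snippet below uses functions from MATLAB's Computer Vision Toolbox (an alternative to the calibration toolbox of [5]); the image file names and square size are placeholder assumptions.

% Multiplane (Zhang-style) calibration from several chessboard views;
% requires the Computer Vision Toolbox. File names are placeholders.
files = {'view1.png', 'view2.png', 'view3.png'};
[imagePoints, boardSize] = detectCheckerboardPoints(files);   % corner detection per view
squareSize = 30;                                              % assumed square side, in mm
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
params = estimateCameraParameters(imagePoints, worldPoints);  % intrinsics and extrinsics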

Example: multiplane calibration
[Image: Multiple views of a chessboard for multiplane calibration]
[Image: Reconstructed orientations (camera-centric coordinates)]
[Image: Reconstructed orientations (world-centric coordinates)]

Chessboard feature extraction

The second context in which chessboards arise in computer vision is in demonstrating several canonical feature extraction algorithms. In feature extraction, one seeks to identify image interest points, which summarize the semantic content of an image and, hence, offer a reduced-dimensionality representation of the data. [2] Chessboards in particular are often used to demonstrate feature extraction algorithms because their regular geometry naturally exhibits local image features like edges, lines, and corners. The following sections demonstrate the application of common feature extraction algorithms to a chessboard image.

Corners

Corners are a natural local image feature exploited in many computer vision systems. Loosely speaking, one can define a corner as the intersection of two edges. A variety of corner detection algorithms formalize this loose notion into concrete detection criteria. Corners are a useful image feature because they are necessarily distinct from their neighboring pixels. The Harris corner detector is a standard algorithm for corner detection in computer vision. [6] The algorithm works by analyzing the eigenvalues of the 2D discrete structure tensor matrix at each image pixel and flagging a pixel as a corner when the eigenvalues of its structure tensor are sufficiently large. Intuitively, the eigenvalues of the structure tensor associated with a given pixel describe the gradient strength in a neighborhood of that pixel. As such, a structure tensor with large eigenvalues corresponds to an image neighborhood with large gradients in orthogonal directions - i.e., a corner.
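
The sketch below computes the Harris response in this spirit, using the standard determinant-trace surrogate det(S) - k*trace(S)^2 in place of an explicit eigendecomposition; the smoothing scale, sensitivity k, and threshold are conventional but assumed values, and the filtering functions require the Image Processing Toolbox.

% Harris response sketch; assumes the Image Processing Toolbox (and R2020b+
% for im2gray). Parameter values are conventional choices, not canonical.
I = im2double(im2gray(imread('Perspective_chessboard.png')));
[Ix, Iy] = imgradientxy(I, 'sobel');          % image gradients
sigma = 2;                                    % neighborhood smoothing scale (assumed)
Sxx = imgaussfilt(Ix.^2, sigma);              % structure tensor entries,
Syy = imgaussfilt(Iy.^2, sigma);              % locally averaged by Gaussian
Sxy = imgaussfilt(Ix.*Iy, sigma);             % smoothing
k = 0.04;                                     % common sensitivity setting
R = (Sxx.*Syy - Sxy.^2) - k*(Sxx + Syy).^2;   % large R => both eigenvalues large
corners = R > 0.01 * max(R(:));               % flag strong responses as corners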

A chessboard contains natural corners at the boundaries between board squares, so one would expect corner detection algorithms to successfully detect them in practice. Indeed, the following figure demonstrates Harris corner detection applied to a perspective-transformed chessboard image. Clearly, the Harris detector is able to accurately detect the corners of the board.

Example: corner detection
[Image: Perspective-transformed chessboard image]

Lines

Lines are another natural local image feature exploited in many computer vision systems. Geometrically, the set of all lines in a 2D image can be parametrized by polar coordinates $(\rho, \theta)$ describing the distance and angle, respectively, of their normal vectors with respect to the origin, so that a point $(x, y)$ lies on the line parametrized by $(\rho, \theta)$ exactly when $\rho = x \cos\theta + y \sin\theta$. The discrete Hough transform exploits this idea by transforming a spatial image into a matrix in $(\rho, \theta)$-space whose $(i, j)$-th entry counts the number of image edge points that lie on the line parametrized by $(\rho_i, \theta_j)$. [7] [8] [9] As such, one can detect lines in an image by simply searching for local maxima of its discrete Hough transform.

The grid structure of a chessboard naturally defines two sets of parallel lines in an image of it. Therefore, one expects that line detection algorithms should successfully detect these lines in practice. Indeed, the following figure demonstrates Hough transform-based line detection applied to a perspective-transformed chessboard image. Clearly, the Hough transform is able to accurately detect the lines induced by the board squares.

Example: line detection
[Image: Perspective-transformed chessboard image]
[Image: Canny edge detector applied to chessboard image]
[Image: Hough transform of edge image with 19 largest local maxima denoted]
[Image: Lines parameterized by Hough transform local maxima]

The following MATLAB code generates the above images using the Image Processing Toolbox:

% Load image
I = imread('Perspective_chessboard.png');

% Compute edge image
BW = edge(I, 'canny');

% Compute Hough transform
[H, theta, rho] = hough(BW);

% Find local maxima of Hough transform
numpeaks = 19;
thresh = ceil(0.1 * max(H(:)));
P = houghpeaks(H, numpeaks, 'threshold', thresh);

% Extract image lines
lines = houghlines(BW, theta, rho, P, 'FillGap', 50, 'MinLength', 60);

% --------------------------------------------------------------------------
% Display results
% --------------------------------------------------------------------------

% Original image
figure; imshow(I);

% Edge image
figure; imshow(BW);

% Hough transform
figure;
image(theta, rho, imadjust(mat2gray(H)), 'CDataMapping', 'scaled');
hold on;
colormap(gray(256));
plot(theta(P(:,2)), rho(P(:,1)), 'o', 'color', 'r');

% Detected lines
figure; imshow(I); hold on;
n = size(I, 2);
for k = 1:length(lines)
    % Overlay kth line
    x = [lines(k).point1(1) lines(k).point2(1)];
    y = [lines(k).point1(2) lines(k).point2(2)];
    line = @(z) ((y(2) - y(1)) / (x(2) - x(1))) * (z - x(1)) + y(1);
    plot([1 n], line([1 n]), 'Color', 'r');
end

Further reading

  1. M. Rufli, D. Scaramuzza, and R. Siegwart. "Automatic detection of checkerboards on blurred and distorted images." IEEE/RSJ International Conference on Intelligent Robots and Systems. (2008).
  2. Z. Weixing, et al. "A fast and accurate algorithm for chessboard corner detection." 2nd International Congress on Image and Signal Processing. (2009).
  3. A. De la Escalera and J. Armingol. "Automatic chessboard detection for intrinsic and extrinsic camera parameter calibration." Sensors. vol. 10(3), pp. 2027–2044 (2010).
  4. S. Bennett and J. Lasenby. "ChESS - quick and robust detection of chess-board features." Computer Vision and Image Understanding. vol. 118, pp. 197–210 (2014).
  5. J. Ha. "Automatic detection of chessboard and its applications." Opt. Eng. vol. 48(6) (2009).
  6. F. Zhao, et al. "An automated x-corner detection algorithm (axda)." Journal of Software. vol. 6(5), pp. 791–797 (2011).
  7. S. Arca, E. Casiraghi, and G. Lombardi. "Corner localization in chessboards for camera calibration." IADAT. (2005).
  8. X. Hu, P. Du, and Y. Zhou. "Automatic corner detection of chess board for medical endoscopy camera calibration." Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry. ACM. (2011).
  9. S. Malek, et al. "Tracking chessboard corners using projective transformation for augmented reality." International Conference on Communications, Computing and Control Applications. (2011).


References

  1. D. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall. (2002).
  2. R. Szeliski. Computer Vision: Algorithms and Applications. Springer Science and Business Media. (2010). ISBN 978-1848829350.
  3. O. Faugeras. Three-Dimensional Computer Vision. MIT Press. (1993). ISBN 978-0262061582.
  4. Z. Zhang. "A flexible new technique for camera calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence. vol. 22(11), pp. 1330–1334 (2000).
  5. J. Bouguet. "Camera calibration toolbox for MATLAB". http://www.vision.caltech.edu/bouguetj/calib_doc/. (2013).
  6. C. Harris and M. Stephens. "A combined corner and edge detector." Proceedings of the 4th Alvey Vision Conference. pp. 147–151 (1988).
  7. L. Shapiro and G. Stockman. Computer Vision. Prentice Hall. (2001). ISBN 978-0130307965.
  8. R. Duda and P. Hart. "Use of the Hough transformation to detect lines and curves in pictures." Comm. ACM. vol. 15, pp. 11–15 (1972).
  9. P. Hough. "Machine analysis of bubble chamber pictures." Proc. Int. Conf. High Energy Accelerators and Instrumentation. (1959).

The following links are pointers to popular implementations of chessboard-related computer vision algorithms.