Block-matching algorithm

A block matching algorithm is a way of locating matching macroblocks in a sequence of digital video frames for the purposes of motion estimation. The underlying supposition behind motion estimation is that the patterns corresponding to objects and background in a frame of a video sequence move within the frame to form corresponding objects in the subsequent frame. This can be used to discover temporal redundancy in the video sequence, increasing the effectiveness of inter-frame video compression by defining the contents of a macroblock by reference to the contents of a known macroblock which is minimally different.


A block matching algorithm involves dividing the current frame of a video into macroblocks and comparing each of the macroblocks with a corresponding block and its adjacent neighbors in a nearby frame of the video (sometimes just the previous one). A vector is created that models the movement of a macroblock from one location to another. This movement, calculated for all the macroblocks comprising a frame, constitutes the motion estimated in a frame.

The search area for a good macroblock match is determined by the ‘search parameter’, p, which is the number of pixels on all four sides of the corresponding macro-block in the previous frame. The search parameter is a measure of motion: the larger the value of p, the larger the potential motion and the greater the chance of finding a good match. A full search of all potential blocks, however, is a computationally expensive task. Typical inputs are a macroblock of size 16×16 pixels and a search area of p = 7 pixels.
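The cost of a full search grows quadratically with p: displacements run from −p to +p independently in x and y, giving (2p+1)² candidate positions per block. A quick sketch of this arithmetic (the function name is illustrative):

```python
def candidate_count(p):
    """Candidate positions a full search must evaluate: displacements
    run from -p to +p independently in x and y."""
    return (2 * p + 1) ** 2

print(candidate_count(7))  # → 225
```

For the typical p = 7, this is the 225 candidate blocks mentioned later in the comparison with the Three Step Search.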

Block-matching and 3D filtering makes use of this approach to solve various image restoration inverse problems such as noise reduction [1] and deblurring [2] in both still images and digital video.

Motivation

Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence. The motion vectors may relate to the whole image (global motion estimation) or specific parts, such as rectangular blocks, arbitrary shaped patches or even per pixel. The motion vectors may be represented by a translational model or many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.

Applying the motion vectors to an image to predict the transformation to another image, on account of moving camera or object in the image is called motion compensation. The combination of motion estimation and motion compensation is a key part of video compression as used by MPEG 1, 2 and 4 as well as many other video codecs.

Motion estimation based video compression saves bits by sending encoded difference images, which have inherently less entropy, as opposed to sending a fully coded frame. However, motion estimation is the most computationally expensive and resource-intensive operation in the entire compression process. Hence, fast and computationally inexpensive algorithms for motion estimation are a need for video compression.

Evaluation Metrics

A metric for matching a macroblock with another block is based on a cost function. The most popular, in terms of low computational expense, are:

Mean difference or Mean Absolute Difference (MAD) = (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} |C_ij − R_ij|

Mean Squared Error (MSE) = (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} (C_ij − R_ij)²

where N is the size of the macro-block, and C_ij and R_ij are the pixels being compared in the current macroblock and the reference macroblock, respectively.

The motion compensated image that is created using the motion vectors and macroblocks from the reference frame is characterized by the peak signal-to-noise ratio (PSNR), PSNR = 10 · log10( (peak value)² / MSE ), where the peak value is 255 for 8-bit images.
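The three metrics above can be sketched directly, assuming 8-bit grayscale blocks stored as 2-D NumPy arrays; the function names are illustrative:

```python
import numpy as np

def mad(cur_block, ref_block):
    """Mean Absolute Difference between two equally sized blocks."""
    return np.mean(np.abs(cur_block.astype(float) - ref_block.astype(float)))

def mse(cur_block, ref_block):
    """Mean Squared Error between two equally sized blocks."""
    return np.mean((cur_block.astype(float) - ref_block.astype(float)) ** 2)

def psnr(mse_value, peak=255.0):
    """Peak signal-to-noise ratio, in decibels, of a motion compensated
    image, given its MSE against the original (peak is 255 for 8-bit)."""
    return float('inf') if mse_value == 0 else 10.0 * np.log10(peak**2 / mse_value)
```

Note that MAD avoids the multiplication in MSE's squared term, which is why it is the usual choice inside the inner loop of a block matcher.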

Algorithms

Block matching algorithms have been researched since the mid-1980s. Many algorithms have been developed, but only some of the most basic or commonly used are described below.

Exhaustive Search

Exhaustive Search (ES), also known as Full Search, calculates the cost function at each possible location in the search window. This leads to the best possible match of the macro-block in the reference frame with a block in another frame, and the resulting motion compensated image has the highest peak signal-to-noise ratio of any block matching algorithm. However, it is also the most computationally expensive: the larger the search window, the greater the number of computations required.
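A minimal sketch of the exhaustive search, assuming grayscale frames stored as 2-D NumPy arrays and a MAD cost; the function name and signature are illustrative, not from any particular codec:

```python
import numpy as np

def exhaustive_search(ref, cur, bx, by, N=16, p=7):
    """Full search: evaluate every candidate in the +/- p window of `ref`
    for the N x N block of `cur` whose top-left corner is (bx, by).
    Returns the motion vector (dx, dy) of the best match."""
    block = cur[by:by+N, bx:bx+N].astype(float)
    best, best_cost = (0, 0), float('inf')
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + N > ref.shape[1] or y + N > ref.shape[0]:
                continue  # candidate falls outside the reference frame
            cost = np.mean(np.abs(block - ref[y:y+N, x:x+N]))  # MAD
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

The doubly nested loop over (2p+1)² candidates is exactly the expense the fast algorithms below try to avoid.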

Optimized Hierarchical Block Matching (OHBM)

The optimized hierarchical block matching (OHBM) algorithm speeds up the exhaustive search using optimized image pyramids.[3]

Three Step Search

Three Step Search (TSS) is one of the earliest fast block matching algorithms. It runs as follows:

  1. Start with search location at center
  2. Set step size S = 4 and search parameter p = 7
  3. Search 8 locations +/- S pixels around location (0,0) and the location (0,0)
  4. Pick among the 9 locations searched, the one with minimum cost function
  5. Set the new search origin to the above picked location
  6. Set the new step size as S = S/2
  7. Repeat the search procedure until S = 1

The resulting location for S=1 is the one with minimum cost function and the macro block at this location is the best match.

There is a reduction in computation by a factor of 9 in this algorithm. For p = 7, while ES evaluates the cost for 225 macro-blocks, TSS evaluates it for only 25 macro-blocks.
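The steps above can be sketched as follows, assuming grayscale frames as 2-D NumPy arrays and a MAD cost (an illustrative sketch, not a reference implementation):

```python
import numpy as np

def three_step_search(ref, cur, bx, by, N=16):
    """Three Step Search for the N x N block of `cur` at top-left (bx, by).
    Returns the motion vector (dx, dy) into the reference frame `ref`."""
    block = cur[by:by+N, bx:bx+N].astype(float)

    def cost(dx, dy):  # MAD cost of one candidate position
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + N > ref.shape[1] or y + N > ref.shape[0]:
            return float('inf')  # candidate falls outside the frame
        return np.mean(np.abs(block - ref[y:y+N, x:x+N]))

    cx = cy = 0  # search origin (step 1)
    S = 4        # step size (step 2)
    while S >= 1:
        # steps 3-5: check the origin and the 8 points at +/- S around it
        cands = [(cx + i*S, cy + j*S) for i in (-1, 0, 1) for j in (-1, 0, 1)]
        cx, cy = min(cands, key=lambda c: cost(*c))
        S //= 2  # step 6; the loop repeats until S = 1 has been searched
    return (cx, cy)
```

Each pass evaluates 9 candidates, for the 25 distinct evaluations noted above (the origin of each pass was already costed in the previous one).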

Two Dimensional Logarithmic Search

Two Dimensional Logarithmic Search (TDLS) is closely related to TSS, but it is more accurate at estimating motion vectors for large search window sizes. The algorithm can be described as follows:

  1. Start with search location at the center
  2. Select an initial step size say, S = 8
  3. Search for 4 locations at a distance of S from center on the X and Y axes
  4. Find the location of point with least cost function
  5. If a point other than center is the best matching point,
    1. Select this point as the new center
  6. If the best matching point is at the center, set S = S/2
  7. Repeat steps 3 to 6 until S = 1
  8. If S = 1, all 8 locations around the center at a distance S are searched
  9. Set the motion vector as the point with least cost function
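The procedure above can be sketched as follows, under the same assumptions as before (2-D NumPy grayscale frames, MAD cost, illustrative names):

```python
import numpy as np

def tdls(ref, cur, bx, by, N=16, S=8):
    """Two Dimensional Logarithmic Search for the N x N block of `cur`
    at top-left (bx, by); returns the motion vector (dx, dy)."""
    block = cur[by:by+N, bx:bx+N].astype(float)

    def cost(dx, dy):  # MAD cost of one candidate position
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + N > ref.shape[1] or y + N > ref.shape[0]:
            return float('inf')
        return np.mean(np.abs(block - ref[y:y+N, x:x+N]))

    cx = cy = 0
    while S > 1:
        # the center plus the four points at distance S on the X and Y axes
        cands = [(cx, cy), (cx + S, cy), (cx - S, cy), (cx, cy + S), (cx, cy - S)]
        best = min(cands, key=lambda c: cost(*c))
        if best == (cx, cy):
            S //= 2          # best match at the center: halve the step
        else:
            cx, cy = best    # otherwise move the center
    # S = 1: search all 8 locations around the center (plus the center)
    cands = [(cx + i, cy + j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    return min(cands, key=lambda c: cost(*c))
```

The step size only shrinks when the center wins, so a large window is traversed in big jumps before the search is refined, which is what makes TDLS robust for large motion.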

New Three Step Search

TSS uses a uniformly allocated checking pattern and is prone to missing small motions. New Three Step Search (NTSS) [4] improves over TSS by providing a center-biased search scheme and a provision to stop halfway to reduce computational cost. It was one of the first widely accepted fast algorithms and was frequently used for implementing earlier standards such as MPEG-1 and H.261.

The algorithm runs as follows:

  1. Start with search location at center
  2. Search 8 locations +/- S pixels with S = 4 and 8 locations +/- S pixels with S = 1 around location (0,0)
  3. Pick among the 16 locations searched, the one with minimum cost function
  4. If the minimum cost function occurs at origin, stop the search and set motion vector to (0,0)
  5. If the minimum cost function occurs at one of the 8 locations at S = 1, set the new search origin to this location
    1. Check adjacent weights for this location, depending on location it may check either 3 or 5 points
  6. The one that gives lowest weight is the closest match, set the motion vector to that location
  7. If the lowest weight after the first step was one of the 8 locations at S = 4, the normal TSS procedure follows
    1. Pick among the 9 locations searched, the one with minimum cost function
    2. Set the new search origin to the above picked location
    3. Set the new step size as S = S/2
    4. Repeat the search procedure until S = 1

Thus this algorithm checks 17 points for each macro-block in its first step, and the worst-case scenario involves checking 33 locations. Because most real-world motion is small and the halfway-stop provision triggers often, NTSS is on average much faster than TSS.
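A simplified sketch of NTSS follows, under the same assumptions as before (2-D NumPy grayscale frames, MAD cost). For brevity this version re-examines all eight neighbours of a winning distance-1 point rather than only the 3 or 5 genuinely new ones, so it trades a few redundant evaluations for simpler code:

```python
import numpy as np

def ntss(ref, cur, bx, by, N=16):
    """New Three Step Search (simplified sketch)."""
    block = cur[by:by+N, bx:bx+N].astype(float)

    def cost(dx, dy):  # MAD cost of one candidate position
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + N > ref.shape[1] or y + N > ref.shape[0]:
            return float('inf')
        return np.mean(np.abs(block - ref[y:y+N, x:x+N]))

    ring = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]
    # First step: origin + 8 points at S = 1 + 8 points at S = 4 (17 points).
    first = [(0, 0)] + ring + [(4*i, 4*j) for (i, j) in ring]
    best = min(first, key=lambda c: cost(*c))
    if best == (0, 0):
        return (0, 0)  # halfway stop: zero motion
    if max(abs(best[0]), abs(best[1])) == 1:
        # Small motion: refine around the winning distance-1 point and stop.
        cands = [best] + [(best[0] + i, best[1] + j) for (i, j) in ring]
        return min(cands, key=lambda c: cost(*c))
    # Large motion: continue with the normal TSS from the winning point.
    cx, cy = best
    S = 2
    while S >= 1:
        cands = [(cx, cy)] + [(cx + S*i, cy + S*j) for (i, j) in ring]
        cx, cy = min(cands, key=lambda c: cost(*c))
        S //= 2
    return (cx, cy)
```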

Simple and Efficient Search

The idea behind TSS is that the error surface due to motion in every macro-block is unimodal: a bowl-shaped surface whose weights, generated by the cost function, increase monotonically away from the global minimum. Since a unimodal surface cannot have two minima in opposite directions, the 8-point fixed pattern search of TSS can be modified to exploit this and save computations. Simple and Efficient Search (SES) [5] is the extension of TSS that incorporates this assumption.

SES algorithm improves upon TSS algorithm as each search step in SES is divided into two phases:

• First Phase :

     • Divide the search area into four quadrants
     • Start the search with three locations: one at the center (A) and two others (B and C), S = 4 locations away from A in orthogonal directions
     • Find the points in the search quadrant for the second phase using the weight distribution of A, B and C:
            • If MAD(A) >= MAD(B) and MAD(A) >= MAD(C), select points in second-phase quadrant IV
            • If MAD(A) >= MAD(B) and MAD(A) <= MAD(C), select points in second-phase quadrant I
            • If MAD(A) < MAD(B) and MAD(A) < MAD(C), select points in second-phase quadrant II
            • If MAD(A) < MAD(B) and MAD(A) >= MAD(C), select points in second-phase quadrant III

• Second Phase:

     • Find the location with the lowest weight
     • Set the new search origin to the point found above

• Set the new step size as S = S/2

• Repeat the SES search procedure until S=1

• Select the location with the lowest weight as the motion vector

SES is computationally very efficient compared to TSS. However, the peak signal-to-noise ratio achieved is poor compared to TSS, because real error surfaces are not strictly unimodal.
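The quadrant selection rules above can be sketched as a small function (illustrative, not from the cited paper; where two rules overlap on equal weights, the first listed rule wins here):

```python
def ses_quadrant(mad_a, mad_b, mad_c):
    """Pick the second-phase search quadrant from the first-phase weights
    of A (the center) and B, C (the two orthogonal points)."""
    if mad_a >= mad_b and mad_a >= mad_c:
        return 4               # quadrant IV
    if mad_a >= mad_b:         # here mad_a < mad_c
        return 1               # quadrant I
    if mad_a < mad_c:          # mad_a < mad_b as well
        return 2               # quadrant II
    return 3                   # mad_a < mad_b and mad_a >= mad_c
```

The point of the rules is that the gradient implied by A, B and C indicates the direction in which a unimodal error surface must decrease, so only one quadrant needs to be searched in the second phase.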

Four Step Search

Four Step Search (4SS) is an improvement over TSS in terms of lower computational cost and better peak signal-to-noise ratio. Similar to NTSS, 4SS [6] also employs center-biased searching and has a halfway-stop provision.

The algorithm runs as follows:

  1. Start with search location at center
  2. Set step size S = 2, (irrespective of search parameter p)
  3. Search 8 locations +/- S pixels around location (0,0)
  4. Pick among the 9 locations searched, the one with minimum cost function
  5. If the minimum weight is found at center for search window:
    1. Set the new step size as S = S/2 (that is S = 1)
    2. Repeat the search procedure from steps 3 to 4
    3. Select location with the least weight as motion vector
  6. If the minimum weight is found at one of the 8 locations other than the center:
    1. Set the new origin to this location
    2. Fix the step size as S = 2
    3. Repeat the search procedure from steps 3 to 4. Depending on location of new origin, search through 5 locations or 3 locations
    4. Select the location with the least weight
    5. If the least weight location is at the center of new window go to step 5, else go to step 6

Diamond Search

Diamond Search (DS) [7] uses a diamond-shaped search point pattern and otherwise runs much like 4SS; however, there is no limit on the number of steps the algorithm can take.

Two different fixed patterns are used for the search: a large diamond search pattern (LDSP) of nine points and a small diamond search pattern (SDSP) of five points.

The algorithm runs as follows:

    1. Start with the search location at the center
    2. Set step size S = 2
    3. Search the 8 locations (X,Y) such that |X|+|Y| = S around location (0,0), using the large diamond search pattern (LDSP)
    4. Pick, among the 9 locations searched, the one with the minimum cost function
    5. If the minimum weight is found at the center of the search window, go to the SDSP step
    6. If the minimum weight is found at one of the 8 locations other than the center, set the new origin to this location and repeat the LDSP (steps 3 to 5)
    7. SDSP step:
      1. Set the new search origin
      2. Set the new step size as S = S/2 (that is, S = 1)
      3. Repeat the search procedure to find the location with the least weight
      4. Select the location with the least weight as the motion vector

This algorithm finds the global minimum very accurately as the search pattern is neither too big nor too small. Diamond Search algorithm has a peak signal-to-noise ratio close to that of Exhaustive Search with significantly less computational expense.
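The LDSP/SDSP loop can be sketched as follows (same assumptions: 2-D NumPy grayscale frames, MAD cost, illustrative names):

```python
import numpy as np

# large and small diamond search patterns, as relative offsets (center first)
LDSP = [(0, 0), (0, -2), (1, -1), (2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1)]
SDSP = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def diamond_search(ref, cur, bx, by, N=16):
    """Diamond Search for the N x N block of `cur` at top-left (bx, by)."""
    block = cur[by:by+N, bx:bx+N].astype(float)

    def cost(dx, dy):  # MAD cost of one candidate position
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + N > ref.shape[1] or y + N > ref.shape[0]:
            return float('inf')
        return np.mean(np.abs(block - ref[y:y+N, x:x+N]))

    cx = cy = 0
    while True:  # repeat the LDSP until the center wins
        best = min(((cx + dx, cy + dy) for dx, dy in LDSP),
                   key=lambda c: cost(*c))
        if best == (cx, cy):
            break
        cx, cy = best
    # final refinement with the small pattern
    return min(((cx + dx, cy + dy) for dx, dy in SDSP), key=lambda c: cost(*c))
```

Because the LDSP keeps moving as long as some off-center point improves the cost, the loop terminates: every move strictly decreases the best cost found so far.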

Adaptive Rood Pattern Search

Adaptive Rood Pattern Search (ARPS) [8] makes use of the fact that the general motion in a frame is usually coherent: if the macro-blocks around the current macro-block moved in a particular direction, there is a high probability that the current macro-block will have a similar motion vector. This algorithm uses the motion vector of the macro-block to its immediate left to predict its own motion vector.

Adaptive rood pattern search runs as follows:

  1. Start with search location at the center (origin)
  2. Find the predicted motion vector for the block
  3. Set step size S = max (|X|,|Y|), where (X,Y) is the coordinate of predicted motion vector
  4. Search for rood pattern distributed points around the origin at step size S
  5. Set the point with least weight as origin
  6. Search using small diamond search pattern (SDSP) around the new origin
  7. Repeat SDSP search until least weighted point is at the center of SDSP

Rood pattern search directly puts the search in an area where there is a high probability of finding a good matching block. The main advantage of ARPS over DS is if the predicted motion vector is (0, 0), it does not waste computational time in doing LDSP, but it directly starts using SDSP. Furthermore, if the predicted motion vector is far away from the center, then again ARPS saves on computations by directly jumping to that vicinity and using SDSP, whereas DS takes its time doing LDSP.
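A sketch of ARPS under the same assumptions (2-D NumPy grayscale frames, MAD cost); `pred` stands for the predicted motion vector, e.g. that of the macro-block to the immediate left, and the names are illustrative:

```python
import numpy as np

def arps(ref, cur, bx, by, pred, N=16):
    """Adaptive Rood Pattern Search (sketch) for the N x N block of `cur`
    at top-left (bx, by), given a predicted motion vector `pred`."""
    block = cur[by:by+N, bx:bx+N].astype(float)

    def cost(dx, dy):  # MAD cost of one candidate position
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + N > ref.shape[1] or y + N > ref.shape[0]:
            return float('inf')
        return np.mean(np.abs(block - ref[y:y+N, x:x+N]))

    # rood (cross) pattern at the predicted magnitude, plus the prediction itself
    S = max(abs(pred[0]), abs(pred[1]))
    cands = [(0, 0)]
    if S > 0:
        cands += [(S, 0), (-S, 0), (0, S), (0, -S), tuple(pred)]
    cx, cy = min(cands, key=lambda c: cost(*c))

    # refine with the small diamond search pattern until the center wins
    sdsp = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]
    while True:
        cands = [(cx + dx, cy + dy) for dx, dy in sdsp]
        best = min(cands, key=lambda c: cost(*c))
        if best == (cx, cy):
            return (cx, cy)
        cx, cy = best
```

When the prediction is (0, 0), the rood degenerates to the origin and the search goes straight to SDSP, which is exactly the saving over DS described above.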

References

  1. Dabov, Kostadin; Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen (16 July 2007). "Image denoising by sparse 3D transform-domain collaborative filtering". IEEE Transactions on Image Processing. 16 (8): 2080–2095. Bibcode:2007ITIP...16.2080D. CiteSeerX 10.1.1.219.5398. doi:10.1109/TIP.2007.901238. PMID 17688213. S2CID 1475121.
  2. Danielyan, Aram; Katkovnik, Vladimir; Egiazarian, Karen (30 June 2011). "BM3D Frames and Variational Image Deblurring". IEEE Transactions on Image Processing. 21 (4): 1715–1728. arXiv:1106.6180. Bibcode:2012ITIP...21.1715D. doi:10.1109/TIP.2011.2176954. PMID 22128008. S2CID 11204616.
  3. Je, Changsoo; Park, Hyung-Min (2013). "Optimized hierarchical block matching for fast and accurate image registration". Signal Processing: Image Communication. 28 (7): 779–791. doi:10.1016/j.image.2013.04.002.
  4. Li, Renxiang; Zeng, Bing; Liou, Ming (August 1994). "A New Three-Step Search Algorithm for Block Motion Estimation". IEEE Transactions on Circuits and Systems for Video Technology. 4 (4): 438–442. doi:10.1109/76.313138.
  5. Lu, Jianhua; Liou, Ming (April 1997). "A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation". IEEE Transactions on Circuits and Systems for Video Technology. 7 (2): 429–433. doi:10.1109/76.564122.
  6. Po, Lai-Man; Ma, Wing-Chung (June 1996). "A Novel Four-Step Search Algorithm for Fast Block Motion Estimation". IEEE Transactions on Circuits and Systems for Video Technology. 6 (3): 313–317. doi:10.1109/76.499840.
  7. Zhu, Shan; Ma, Kai-Kuang (February 2000). "A New Diamond Search Algorithm for Fast Block-Matching Motion Estimation". IEEE Transactions on Image Processing. 9 (2): 287–290. Bibcode:2000ITIP....9..287Z. doi:10.1109/83.821744. PMID 18255398.
  8. Nie, Yao; Ma, Kai-Kuang (December 2002). "Adaptive Rood Pattern Search for Fast Block-Matching Motion Estimation" (PDF). IEEE Transactions on Image Processing. 11 (12): 1442–1448. Bibcode:2002ITIP...11.1442N. doi:10.1109/TIP.2002.806251. PMID 18249712.

External links

  1. http://www.mathworks.com/matlabcentral/fileexchange/8761-block-matching-algorithms-for-motion-estimation
  2. https://www.ece.cmu.edu/~ee899/project/deepak_mid.htm