Connected-component labeling

Connected-component labeling (CCL), connected-component analysis (CCA), blob extraction, region labeling, blob discovery, or region extraction is an algorithmic application of graph theory, where subsets of connected components are uniquely labeled based on a given heuristic. Connected-component labeling is not to be confused with segmentation.

Connected-component labeling is used in computer vision to detect connected regions in binary digital images, although color images and data with higher dimensionality can also be processed. [1] [2] When integrated into an image recognition system or human-computer interaction interface, connected component labeling can operate on a variety of information. [3] [4] Blob extraction is generally performed on the resulting binary image from a thresholding step, but it can be applicable to gray-scale and color images as well. Blobs may be counted, filtered, and tracked.

Blob extraction is related to but distinct from blob detection.

Overview

Figure: 4-connectivity (Square 4 connectivity.png)
Figure: 8-connectivity (Square 8 connectivity.png)

A graph, containing vertices and connecting edges, is constructed from relevant input data. The vertices contain information required by the comparison heuristic, while the edges indicate connected 'neighbors'. An algorithm traverses the graph, labeling the vertices based on the connectivity and relative values of their neighbors. Connectivity is determined by the medium; image graphs, for example, can use 4-connected or 8-connected neighborhoods. [5]
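As a small illustration, the two neighborhoods can be expressed as sets of coordinate offsets; the following Python sketch (names are arbitrary) yields the in-bounds neighbors of a pixel:

# Neighbor offsets for a pixel at (row, col) on a 2-D grid.
NEIGHBORS_4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]                   # north, south, west, east
NEIGHBORS_8 = NEIGHBORS_4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # plus the four diagonals

def neighbors(row, col, height, width, offsets=NEIGHBORS_8):
    """Yield the in-bounds neighbors of (row, col) under the chosen connectivity."""
    for dr, dc in offsets:
        r, c = row + dr, col + dc
        if 0 <= r < height and 0 <= c < width:
            yield r, c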

Following the labeling stage, the graph may be partitioned into subsets, after which the original information can be recovered and processed.

Definition

The term connected-components labeling (CCL) and its definition are quite consistent in the academic literature, whereas connected-components analysis (CCA) varies in both terminology and problem definition.

Rosenfeld et al. [6] define connected components labeling as the “[c]reation of a labeled image in which the positions associated with the same connected component of the binary input image have a unique label.” Shapiro et al. [7] define CCL as an operator whose “input is a binary image and [...] output is a symbolic image in which the label assigned to each pixel is an integer uniquely identifying the connected component to which that pixel belongs.” [8]

There is no consensus on the definition of CCA in the academic literature. It is often used interchangeably with CCL. [9] [10] A more extensive definition is given by Shapiro et al.: [7] “Connected component analysis consists of connected component labeling of the black pixels followed by property measurement of the component regions and decision making.” The definition for connected-component analysis presented here is more general, taking the thoughts expressed in [7] [9] [10] into account.

Algorithms

The algorithms discussed can be generalized to arbitrary dimensions, albeit with increased time and space complexity.

One component at a time

This is a fast and very simple method to implement and understand. It is based on graph traversal methods in graph theory. In short, once the first pixel of a connected component is found, all the connected pixels of that connected component are labelled before going on to the next pixel in the image. This algorithm is part of Vincent and Soille's watershed segmentation algorithm; [11] other implementations also exist. [12]

To do this, a linked list is formed that keeps the indexes of the pixels that are connected to each other (steps (2) and (3) below). How the linked list is used determines whether the search is depth-first or breadth-first; for this particular application, it makes no difference which strategy is used. The simplest kind of last-in-first-out queue, implemented as a singly linked list, results in a depth-first search strategy.

It is assumed that the input image is a binary image, with pixels being either background or foreground and that the connected components in the foreground pixels are desired. The algorithm steps can be written as:

  1. Start from the first pixel in the image. Set current label to 1. Go to (2).
  2. If this pixel is a foreground pixel and it is not already labelled, give it the current label and add it as the first element in a queue, then go to (3). If it is a background pixel or it was already labelled, then repeat (2) for the next pixel in the image.
  3. Pop out an element from the queue, and look at its neighbours (based on any type of connectivity). If a neighbour is a foreground pixel and is not already labelled, give it the current label and add it to the queue. Repeat (3) until there are no more elements in the queue.
  4. Go to (2) for the next pixel in the image and increment current label by 1.

Note that the pixels are labelled before being put into the queue. The queue will only keep a pixel to check its neighbours and add them to the queue if necessary. This algorithm only needs to check the neighbours of each foreground pixel once and doesn't check the neighbours of background pixels.

The pseudocode is:

algorithm OneComponentAtATime(data)
    input: imageData[xDim][yDim]
    initialization: label = 0, labelArray[xDim][yDim] = 0, statusArray[xDim][yDim] = false, queue1, queue2

    for i = 0 to xDim do
        for j = 0 to yDim do
            if imageData[i][j] has not been processed then
                if imageData[i][j] is a foreground pixel then
                    check its four neighbors (north, south, east, west):
                        if a neighbor has not been processed then
                            if it is a foreground pixel then
                                add it to queue1
                            else
                                update its status as processed
                            end if
                        end if
                    labelArray[i][j] = label        (give label)
                    statusArray[i][j] = true        (update status)
                    while queue1 is not empty do
                        for each pixel in queue1 do
                            check its four neighbors:
                                if a neighbor has not been processed then
                                    if it is a foreground pixel then
                                        add it to queue2
                                    else
                                        update its status as processed
                                    end if
                                end if
                            give it the current label
                            update its status as processed
                            remove the current element from queue1
                        end for
                        copy queue2 into queue1
                    end while
                    increase the label
                else
                    update its status as processed
                end if
            end if
        end for
    end for
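As a concrete illustration, here is a minimal runnable Python sketch of the same one-component-at-a-time strategy (assuming a binary image given as a list of lists of 0/1 values and 4-connectivity; the function and variable names are illustrative):

from collections import deque

def label_one_component_at_a_time(image):
    """Label 4-connected foreground (value 1) regions; 0 remains background."""
    height, width = len(image), len(image[0])
    labels = [[0] * width for _ in range(height)]
    current_label = 0
    for i in range(height):
        for j in range(width):
            if image[i][j] == 1 and labels[i][j] == 0:
                # Found the first pixel of a new component: flood it completely.
                current_label += 1
                labels[i][j] = current_label
                queue = deque([(i, j)])
                while queue:
                    r, c = queue.popleft()   # popleft() gives breadth-first; pop() would give depth-first
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if 0 <= nr < height and 0 <= nc < width \
                                and image[nr][nc] == 1 and labels[nr][nc] == 0:
                            labels[nr][nc] = current_label
                            queue.append((nr, nc))
    return labels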

Two-pass

Relatively simple to implement and understand, the two-pass algorithm [13] (also known as the Hoshen–Kopelman algorithm) iterates through 2-dimensional binary data. The algorithm makes two passes over the image: the first pass to assign temporary labels and record equivalences, and the second pass to replace each temporary label by the smallest label of its equivalence class.

The input data can be modified in situ (which carries the risk of data corruption), or labeling information can be maintained in an additional data structure.

Connectivity checks are carried out by checking the labels of neighboring pixels (neighbors whose labels have not yet been assigned are ignored), i.e. the north-east, north, north-west and west neighbors of the current pixel (assuming 8-connectivity). 4-connectivity uses only the north and west neighbors of the current pixel. The following conditions are checked to determine the value of the label to be assigned to the current pixel (4-connectivity is assumed):

Conditions to check:

  1. Does the pixel to the left (west) have the same value as the current pixel?
    1. Yes – We are in the same region. Assign the same label to the current pixel
    2. No – Check next condition
  2. Do both pixels to the north and west of the current pixel have the same value as the current pixel but not the same label?
    1. Yes – We know that the north and west pixels belong to the same region and must be merged. Assign the current pixel the minimum of the north and west labels, and record their equivalence relationship
    2. No – Check next condition
  3. Does the pixel to the left (west) have a different value and the one to the north the same value as the current pixel?
    1. Yes – Assign the label of the north pixel to the current pixel
    2. No – Check next condition
  4. Do the pixel's north and west neighbors have different pixel values than the current pixel?
    1. Yes – Create a new label id and assign it to the current pixel

The algorithm continues this way, and creates new region labels whenever necessary. The key to a fast algorithm, however, is how this merging is done. This algorithm uses the union-find data structure, which provides excellent performance for keeping track of equivalence relationships. [14] Union-find essentially stores labels which correspond to the same blob in a disjoint-set data structure, making it easy to remember the equivalence of two labels through an interface method, e.g. findSet(l), which returns the minimum label value that is equivalent to the function argument l.
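A minimal sketch in Python of such a disjoint-set structure (illustrative only; the method names mirror the findSet interface described above rather than any particular library):

class UnionFind:
    """Disjoint-set structure over integer labels, with path compression."""
    def __init__(self):
        self.parent = {}

    def make_set(self, label):
        self.parent.setdefault(label, label)

    def find_set(self, label):
        # Follow parents to the representative, compressing the path on the way back.
        root = label
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[label] != root:
            self.parent[label], label = root, self.parent[label]
        return root

    def union(self, a, b):
        # Keep the smaller label as representative, so find_set returns the minimum.
        ra, rb = self.find_set(a), self.find_set(b)
        if ra != rb:
            if ra > rb:
                ra, rb = rb, ra
            self.parent[rb] = ra

Because union always keeps the smaller label as the representative, find_set returns the minimum equivalent label, matching the behaviour described above.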

Once the initial labeling and equivalence recording is completed, the second pass merely replaces each pixel label with its equivalent disjoint-set representative element.

A faster-scanning algorithm for connected-region extraction is presented below. [15]

On the first pass:

  1. Iterate through each element of the data by column, then by row (Raster Scanning)
  2. If the element is not the background
    1. Get the neighboring elements of the current element
    2. If there are no neighbors, uniquely label the current element and continue
    3. Otherwise, find the neighbor with the smallest label and assign it to the current element
    4. Store the equivalence between neighboring labels

On the second pass:

  1. Iterate through each element of the data by column, then by row
  2. If the element is not the background
    1. Relabel the element with the lowest equivalent label

Here, the background is a classification, specific to the data, used to distinguish salient elements from the foreground. If the background variable is omitted, then the two-pass algorithm will treat the background as another region.

Graphical example of two-pass algorithm

1. The array from which connected regions are to be extracted is given below (8-connectivity based).

We first assign different binary values to elements in the graph. The values "0" and "1" at the center of each element in the following graph are the elements' values, whereas the values "1, 2, ..., 7" in the next two graphs are the elements' labels. The two concepts should not be confused.

Figure 1: the input array of pixel values

2. After the first pass, the following labels are generated:

Figure 2: labels generated after the first pass

A total of 7 labels are generated in accordance with the conditions highlighted above.

The label equivalence relationships generated are,

Set ID    Equivalent labels
1         1, 2
2         1, 2
3         3, 4, 5, 6, 7
4         3, 4, 5, 6, 7
5         3, 4, 5, 6, 7
6         3, 4, 5, 6, 7
7         3, 4, 5, 6, 7
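Resolving each temporary label to the smallest member of its equivalence set yields the two representatives 1 and 3, as the following illustrative Python snippet shows:

equivalences = {1: {1, 2}, 2: {1, 2},
                3: {3, 4, 5, 6, 7}, 4: {3, 4, 5, 6, 7}, 5: {3, 4, 5, 6, 7},
                6: {3, 4, 5, 6, 7}, 7: {3, 4, 5, 6, 7}}

# Map every temporary label to the smallest label it is equivalent to.
representative = {label: min(members) for label, members in equivalences.items()}
# representative == {1: 1, 2: 1, 3: 3, 4: 3, 5: 3, 6: 3, 7: 3}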

3. Array generated after the merging of labels is carried out. Here, the label value that was the smallest for a given region "floods" throughout the connected region, giving two distinct labels, and hence two distinct regions.

Figure 3: labels after merging equivalent labels

4. Final result in color to clearly see two different regions that have been found in the array.

Figure: final result, with the two regions shown in different colors

Figure (Two-pass connected component labeling.svg): Sample graphical output from running the two-pass algorithm on a binary image. The first image is unprocessed, while the last one has been recolored with label information. Darker hues indicate the neighbors of the pixel being processed.

The pseudocode is:

algorithm TwoPass(data) is
    linked = []
    labels = structure with dimensions of data, initialized with the value of Background
    NextLabel = 0

    First pass
    for row in data do
        for column in row do
            if data[row][column] is not Background then
                neighbors = connected elements with the current element's value
                if neighbors is empty then
                    linked[NextLabel] = set containing NextLabel
                    labels[row][column] = NextLabel
                    NextLabel += 1
                else
                    Find the smallest label
                    L = neighbors labels
                    labels[row][column] = min(L)
                    for label in L do
                        linked[label] = union(linked[label], L)

    Second pass
    for row in data do
        for column in row do
            if data[row][column] is not Background then
                labels[row][column] = find(labels[row][column])

    return labels

The find and union algorithms are implemented as described in union find.
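Putting the pieces together, here is a runnable Python sketch of the two-pass approach under the stated assumptions (binary input with foreground value 1, 8-connectivity, and the UnionFind class sketched earlier; all names are illustrative):

def two_pass_label(image):
    """Two-pass labeling of foreground (value 1) pixels with 8-connectivity."""
    height, width = len(image), len(image[0])
    labels = [[0] * width for _ in range(height)]
    uf = UnionFind()
    next_label = 1

    # First pass: assign provisional labels and record equivalences.
    for i in range(height):
        for j in range(width):
            if image[i][j] != 1:
                continue
            # Already-labeled neighbors: north-west, north, north-east, west.
            neighbor_labels = [labels[r][c]
                               for r, c in ((i - 1, j - 1), (i - 1, j), (i - 1, j + 1), (i, j - 1))
                               if 0 <= r < height and 0 <= c < width and labels[r][c] != 0]
            if not neighbor_labels:
                uf.make_set(next_label)
                labels[i][j] = next_label
                next_label += 1
            else:
                smallest = min(neighbor_labels)
                labels[i][j] = smallest
                for lab in neighbor_labels:
                    uf.union(smallest, lab)

    # Second pass: replace each provisional label by its representative.
    for i in range(height):
        for j in range(width):
            if labels[i][j] != 0:
                labels[i][j] = uf.find_set(labels[i][j])
    return labels

# Example usage:
# image = [[1, 1, 0],
#          [0, 0, 1],
#          [1, 0, 1]]
# two_pass_label(image) -> [[1, 1, 0], [0, 0, 1], [2, 0, 1]]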

Sequential algorithm

  1. Create a region counter.
  2. Scan the image (in the following example, it is assumed that scanning is done from left to right and from top to bottom). For every pixel, check the north and west neighbors (4-connectivity) or the north-east, north, north-west and west neighbors (8-connectivity) against the region criterion (e.g. an intensity value of 1 in a binary image):
    1. If none of the neighbors fit the criterion, assign the pixel the current value of the region counter and increment the counter.
    2. If exactly one neighbor fits the criterion, assign the pixel to that region.
    3. If multiple neighbors fit the criterion and all belong to the same region, assign the pixel to their region.
    4. If multiple neighbors fit the criterion and belong to different regions, assign the pixel to one of them and record that these regions are equivalent.
  3. Scan the image again, assigning all equivalent regions the same region value.

Others

Some of the steps present in the two-pass algorithm can be merged for efficiency, allowing for a single sweep through the image. Multi-pass algorithms also exist, some of which run in linear time relative to the number of image pixels. [16]

In the early 1990s, there was considerable interest in parallelizing connected-component algorithms in image analysis applications, due to the bottleneck of sequentially processing each pixel. [17]

Interest in these algorithms has been renewed with the widespread use of CUDA.

Pseudocode for the one-component-at-a-time algorithm

Algorithm:

  1. Connected-component matrix is initialized to size of image matrix.
  2. A mark is initialized and incremented for every detected object in the image.
  3. A counter is initialized to count the number of objects.
  4. A row-major scan is started for the entire image.
  5. If an object pixel is detected, then the following steps are repeated while (Index != 0)
    1. Set the corresponding pixel to 0 in Image.
    2. A vector (Index) is updated with all the neighboring pixels of the currently set pixels.
    3. Unique pixels are retained and repeated pixels are removed.
    4. Set the pixels indicated by Index to mark in the connected-component matrix.
  6. Increment the marker for another object in the image.
One-Component-at-a-Time(image)
    [M, N] := size(image)
    connected := zeros(M, N)
    mark := value
    difference := increment
    offsets := [-1; M; 1; -M]
    index := []
    no_of_objects := 0

    for i: 1:M do
        for j: 1:N do
            if (image(i, j) == 1) then
                no_of_objects := no_of_objects + 1
                index := [((j-1) × M + i)]
                connected(index) := mark
                while ~isempty(index) do
                    image(index) := 0
                    neighbors := bsxfun(@plus, index, offsets)
                    neighbors := unique(neighbors(:))
                    index := neighbors(find(image(neighbors)))
                    connected(index) := mark
                end while
                mark := mark + difference
            end if
        end for
    end for

The run time of the algorithm depends on the size of the image and the amount of foreground. The time complexity is comparable to that of the two-pass algorithm if the foreground covers a significant part of the image; otherwise it is lower. However, memory access is less structured than for the two-pass algorithm, which tends to increase the run time in practice.

Performance evaluation

In the last two decades many novel approaches to connected-component labeling have been proposed, but almost none of them have been subjected to a comparative performance assessment using the same data set. YACCLAB [18] [19] (an acronym for Yet Another Connected Components Labeling Benchmark) is an example of a C++ open-source framework which collects, runs, and tests connected-component labeling algorithms.

Hardware architectures

The emergence of FPGAs with enough capacity to perform complex image processing tasks also led to high-performance architectures for connected-component labeling. [20] [21] Most of these architectures utilize the single pass variant of this algorithm, because of the limited memory resources available on an FPGA. These types of connected component labeling architectures can process several image pixels in parallel, thereby achieving high throughput and low processing latency.

Related Research Articles

<span class="mw-page-title-main">Flood fill</span> Algorithm in computer graphics to add color or texture

Flood fill, also called seed fill, is a flooding algorithm that determines and alters the area connected to a given node in a multi-dimensional array with some matching attribute. It is used in the "bucket" fill tool of paint programs to fill connected, similarly-colored areas with a different color, and in games such as Go and Minesweeper for determining which pieces are cleared. A variant called boundary fill uses the same algorithms but is defined as the area connected to a given node that does not have a particular attribute.

<span class="mw-page-title-main">Dijkstra's algorithm</span> Graph search algorithm

Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.

<span class="mw-page-title-main">Component (graph theory)</span> Maximal subgraph whose vertices can reach each other

In graph theory, a component of an undirected graph is a connected subgraph that is not part of any larger connected subgraph. The components of any graph partition its vertices into disjoint sets, and are the induced subgraphs of those sets. A graph that is itself connected has exactly one component, consisting of the whole graph. Components are sometimes called connected components.

<span class="mw-page-title-main">Image segmentation</span> Partitioning a digital image into segments

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.

<span class="mw-page-title-main">Quadtree</span> Tree data structure in which each internal node has exactly four children, to partition a 2D area

A quadtree is a tree data structure in which each internal node has exactly four children. Quadtrees are the two-dimensional analog of octrees and are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. The data associated with a leaf cell varies by application, but the leaf cell represents a "unit of interesting spatial information".

<span class="mw-page-title-main">Strongly connected component</span> Partition of a graph whose components are reachable from all vertices

In the mathematical theory of directed graphs, a graph is said to be strongly connected if every vertex is reachable from every other vertex. The strongly connected components of an arbitrary directed graph form a partition into subgraphs that are themselves strongly connected. It is possible to test the strong connectivity of a graph, or to find its strongly connected components, in linear time (that is, Θ(V + E)).

Simple interactive object extraction (SIOX) is an algorithm for extracting foreground objects from color images and videos with very little user interaction. It has been implemented as "foreground selection" tool in the GIMP, as part of the tracer tool in Inkscape, and as function in ImageJ and Fiji (plug-in). Experimental implementations were also reported for Blender and Krita. Although the algorithm was originally designed for videos, virtually all implementations use SIOX primarily for still image segmentation. In fact, it is often said to be the current de facto standard for this task in the open-source world.

<span class="mw-page-title-main">Biconnected component</span> Maximal biconnected subgraph

In graph theory, a biconnected component is a maximal biconnected subgraph. Any connected graph decomposes into a tree of biconnected components called the block-cut tree of the graph. The blocks are attached to each other at shared vertices called cut vertices or separating vertices or articulation points. Specifically, a cut vertex is any vertex whose removal increases the number of connected components.

Digital topology deals with properties and features of two-dimensional (2D) or three-dimensional (3D) digital images that correspond to topological properties or topological features of objects.

In computer science, graph traversal refers to the process of visiting each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.

As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems, such as image smoothing, the stereo correspondence problem, image segmentation, object co-segmentation, and many other computer vision problems that can be formulated in terms of energy minimization. Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph. Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution. Although many computer vision algorithms involve cutting a graph, the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization.

<span class="mw-page-title-main">Watershed (image processing)</span>

In the study of image processing, a watershed is a transformation defined on a grayscale image. The name refers metaphorically to a geological watershed, or drainage divide, which separates adjacent drainage basins. The watershed transformation treats the image it operates upon like a topographic map, with the brightness of each point representing its height, and finds the lines that run along the tops of ridges.

<span class="mw-page-title-main">DBSCAN</span> Density-based data clustering algorithm

Density-based spatial clustering of applications with noise (DBSCAN) is a data clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander and Xiaowei Xu in 1996. It is a non-parametric, density-based clustering algorithm: given a set of points in some space, it groups together points that are closely packed together, marking as outliers points that lie alone in low-density regions. DBSCAN is one of the most common, and most commonly cited, clustering algorithms.

In computer vision, maximally stable extremal regions (MSER) are used as a method of blob detection in images. This technique was proposed by Matas et al. to find correspondences between image elements from two images with different viewpoints. This method of extracting a comprehensive number of corresponding image elements contributes to the wide-baseline matching, and it has led to better stereo matching and object recognition algorithms.

Image segmentation strives to partition a digital image into regions of pixels with similar properties, e.g. homogeneity. The higher-level region representation simplifies image analysis tasks such as counting objects or detecting changes, because region attributes can be compared more readily than raw pixels.

In applied mathematics, lambda-connectedness deals with partial connectivity for a discrete space.

In image processing, a kernel, convolution matrix, or mask is a small matrix used for blurring, sharpening, embossing, edge detection, and more. This is accomplished by doing a convolution between the kernel and an image. Or more simply, when each pixel in the output image is a function of the nearby pixels in the input image, the kernel is that function.

This is a glossary of terms relating to computer graphics.

<span class="mw-page-title-main">Hoshen–Kopelman algorithm</span> Algorithm for labeling clusters on a grid

The Hoshen–Kopelman algorithm is a simple and efficient algorithm for labeling clusters on a grid, where the grid is a regular network of cells, with the cells being either occupied or unoccupied. This algorithm is based on a well-known union-finding algorithm. The algorithm was originally described by Joseph Hoshen and Raoul Kopelman in their 1976 paper "Percolation and Cluster Distribution. I. Cluster Multiple Labeling Technique and Critical Concentration Algorithm".

References

  1. Samet, H.; Tamminen, M. (1988). "Efficient Component Labeling of Images of Arbitrary Dimension Represented by Linear Bintrees". IEEE Transactions on Pattern Analysis and Machine Intelligence. 10 (4): 579. doi:10.1109/34.3918. S2CID   15911227.
  2. Michael B. Dillencourt; Hannan Samet; Markku Tamminen (1992). "A general approach to connected-component labeling for arbitrary image representations". Journal of the ACM. 39 (2): 253. CiteSeerX   10.1.1.73.8846 . doi:10.1145/128749.128750. S2CID   1869184.
  3. Weijie Chen; Maryellen L. Giger; Ulrich Bick (2006). "A Fuzzy C-Means (FCM)-Based Approach for Computerized Segmentation of Breast Lesions in Dynamic Contrast-Enhanced MR Images". Academic Radiology. 13 (1): 63–72. doi:10.1016/j.acra.2005.08.035. PMID   16399033.
  4. Kesheng Wu; Wendy Koegler; Jacqueline Chen; Arie Shoshani (2003). "Using Bitmap Index for Interactive Exploration of Large part Datasets". SSDBM.
  5. R. Fisher; S. Perkins; A. Walker; E. Wolfart (2003). "Connected Component Labeling".
  6. Rosenfeld, Azriel; Pfaltz, John L. (October 1966). "Sequential Operations in Digital Picture Processing". J. ACM. 13 (4): 471–494. doi:10.1145/321356.321357. ISSN   0004-5411. S2CID   7391071.
  7. Shapiro, Linda G. (1996). "Connected Component Labeling and Adjacency Graph Construction". Topological Algorithms for Digital Image Processing. Machine Intelligence and Pattern Recognition. Vol. 19. pp. 1–30. doi:10.1016/s0923-0459(96)80011-5. ISBN 9780444897541.
  8. Klaiber, Michael J. (2016). A Parallel and Resource-Efficient Single Lookup Connected Components Analysis Architecture for Reconfigurable Hardware. University of Stuttgart.
  9. Fu, Y.; Chen, X.; Gao, H. (December 2009). "A New Connected Component Analysis Algorithm Based on Max-Tree". 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing. pp. 843–844. doi:10.1109/DASC.2009.150. ISBN 978-1-4244-5420-4. S2CID 6805048.
  10. Grana, C.; Borghesani, D.; Santinelli, P.; Cucchiara, R. (August 2010). "High Performance Connected Components Labeling on FPGA". 2010 Workshops on Database and Expert Systems Applications. pp. 221–225. doi:10.1109/DEXA.2010.57. ISBN 978-1-4244-8049-4. S2CID 6905027.
  11. Vincent, Luc; Soille, Pierre (June 1991). "Watersheds in digital spaces: an efficient algorithm based on immersion simulations". IEEE Transactions on Pattern Analysis and Machine Intelligence. 13 (6): 583. doi:10.1109/34.87344. S2CID   15436061.
  12. Abubaker, A; Qahwaji, R; Ipson, S; Saleh, M (2007). "One Scan Connected Component Labeling Technique". 2007 IEEE International Conference on Signal Processing and Communications. p. 1283. doi:10.1109/ICSPC.2007.4728561. ISBN   978-1-4244-1235-8. S2CID   10710012.
  13. Shapiro, L.; Stockman, G. (2002). Computer Vision (PDF). Prentice Hall. pp. 69–73.
  14. Introduction to Algorithms, p. 498.
  15. Lifeng He; Yuyan Chao; Suzuki, K. (1 May 2008). "A Run-Based Two-Scan Labeling Algorithm". IEEE Transactions on Image Processing. 17 (5): 749–756. Bibcode:2008ITIP...17..749H. doi:10.1109/TIP.2008.919369. PMID   18390379.
  16. Kenji Suzuki; Isao Horiba; Noboru Sugie (2003). "Linear-time connected-component labeling based on sequential local operations". Computer Vision and Image Understanding. 89: 1–23. doi:10.1016/S1077-3142(02)00030-9.
  17. Yujie Han; Robert A. Wagner (1990). "An efficient and fast parallel-connected component algorithm". Journal of the ACM. 37 (3): 626. doi: 10.1145/79147.214077 . S2CID   17867876.
  18. Grana, C.; Bolelli, F.; Baraldi, L.; Vezzani, R. (2016). "YACCLAB - Yet Another Connected Components Labeling Benchmark" (PDF). 23rd International Conference on Pattern Recognition. Cancún.
  19. "Yet Another Connected Components Labeling Benchmark: Prittt/YACCLAB". GitHub . 2019-02-18.
  20. Bailey, D. G.; Johnston, C. T.; Ma, Ni (September 2008). "Connected components analysis of streamed images". 2008 International Conference on Field Programmable Logic and Applications. pp. 679–682. doi:10.1109/FPL.2008.4630038. ISBN   978-1-4244-1960-9. S2CID   6503327.
  21. M. J. Klaiber; D. G. Bailey; Y. Baroud; S. Simon (2015). "A Resource-Efficient Hardware Architecture for Connected Components Analysis". IEEE Transactions on Circuits and Systems for Video Technology. 26 (7): 1334–1349. doi:10.1109/TCSVT.2015.2450371. S2CID   10464417.
