As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems (early vision [1]), such as image smoothing, the stereo correspondence problem, image segmentation, and object co-segmentation, as well as many other problems that can be formulated in terms of energy minimization.
Graph cut techniques are now increasingly being used in combination with more general spatial artificial intelligence techniques, for example to enforce spatial structure in the output of large language models (such as sharpening tumour boundaries in medical imaging), and similarly in self-driving car, robotics and mapping (e.g. Google Maps) applications.
Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph [2] (and thus, by the max-flow min-cut theorem, define a minimal cut of the graph). Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution.
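In a generic Bayesian formulation (a sketch with illustrative notation, not taken from a specific paper: d denotes the observed data, D_p the per-pixel data costs and V_pq the pairwise smoothness costs over a neighbourhood system N), this correspondence between energy minimization and MAP estimation can be written as:

```latex
% MAP estimation as energy minimization (illustrative, generic notation).
P(x \mid d) \propto \exp\bigl(-E(x)\bigr), \qquad
\hat{x}_{\mathrm{MAP}} = \arg\max_x P(x \mid d) = \arg\min_x E(x),
\quad \text{with} \quad
E(x) = \sum_{p} D_p(x_p) + \sum_{(p,q) \in \mathcal{N}} V_{pq}(x_p, x_q).
```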
Although many computer vision algorithms involve cutting a graph (e.g. normalized cuts), the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization (other graph cutting algorithms may be considered as graph partitioning algorithms).
"Binary" problems (such as denoising a binary image) can be solved exactly using this approach; problems where pixels can be labeled with more than two different labels (such as stereo correspondence, or denoising of a grayscale image) cannot be solved exactly, but solutions produced are usually near the global optimum.
The foundational theory of graph cuts was first applied in computer vision in a 1989 paper by Margaret Greig, Bruce Porteous and Allan Seheult [3] (GPS) of Durham University. In the Bayesian statistical context of smoothing noisy (or corrupted) binary images, using a Markov random field as the image prior distribution to ensure spatial consistency, they showed with a deceptively simple argument how the maximum a posteriori estimate of a binary image can be obtained exactly by maximizing the flow through an associated image network, constructed by introducing a source and a sink. The problem was therefore shown to be efficiently and exactly solvable, an unexpected result given the vast size of the problem, which was believed at the time to be computationally intractable (NP-hard).
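This exact binary MAP computation can be illustrated with a short sketch (not the original GPS implementation; it uses the general-purpose networkx min-cut solver, and the Ising-style costs mu and lam are illustrative assumptions). Each pixel becomes a node, terminal edges encode the data term, and neighbour edges encode the smoothness prior, so that the cost of any source/sink cut equals the energy of the corresponding labelling:

```python
# Minimal sketch: exact MAP estimate for binary image denoising via a single s-t min-cut.
# Energy: E(x) = sum_p mu*[x_p != y_p] + sum_{p~q} lam*[x_p != x_q], x_p in {0, 1}.
import numpy as np
import networkx as nx

def denoise_binary(y, mu=1.0, lam=0.7):
    """Return the exact MAP binary labelling for a noisy binary image y (values in {0, 1})."""
    h, w = y.shape
    G = nx.DiGraph()
    node = lambda i, j: i * w + j          # flatten pixel coordinates to integer node ids
    s, t = 'source', 'sink'
    for i in range(h):
        for j in range(w):
            p = node(i, j)
            # Data (unary) terms: s->p is cut when x_p = 1, p->t is cut when x_p = 0.
            G.add_edge(s, p, capacity=mu * (1 - y[i, j]))   # cost of labelling p as 1
            G.add_edge(p, t, capacity=mu * y[i, j])         # cost of labelling p as 0
            # Smoothness (pairwise) terms between 4-neighbours, paid when labels differ.
            if i + 1 < h:
                G.add_edge(p, node(i + 1, j), capacity=lam)
                G.add_edge(node(i + 1, j), p, capacity=lam)
            if j + 1 < w:
                G.add_edge(p, node(i, j + 1), capacity=lam)
                G.add_edge(node(i, j + 1), p, capacity=lam)
    _, (source_side, _) = nx.minimum_cut(G, s, t)
    x = np.ones((h, w), dtype=int)
    for p in source_side:
        if p != s:
            x[p // w, p % w] = 0            # pixels on the source side take label 0
    return x
```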
Prior to this result, approximate local optimisation techniques such as simulated annealing (as proposed by the Geman brothers) [4] or iterated conditional modes (a type of greedy algorithm suggested by Julian Besag) [5] were used to solve such image smoothing problems. The GPS paper combined ideas from mathematical statistics (Bayes' theorem), physics (the Ising model), optimisation (energy functions) and computer science (network flow problems), and led the move away from approximate local optimisation approaches (e.g. simulated annealing) towards more powerful exact, or near-exact, global optimisation techniques.
GPS also addressed the computational cost of the max-flow algorithm on large grids, a significant concern at the time. They proposed a partitioning algorithm (see Section 4 of GPS) involving the recursive amalgamation of non-overlapping blocks, which gave roughly a twelvefold speed increase at the time: independent sub-graphs were solved separately and then recursively amalgamated until the whole graph was solved. While contemporaries such as Geman and Geman [4] had advocated parallel processing in the context of simulated annealing, the GPS blocking strategy offered a deterministic structure amenable to parallelisation, anticipating the modern practice of distributing such computations across multiple GPUs. However, this aspect of the paper was largely ignored, and later research focused on algorithms based on global search trees, such as the Boykov-Kolmogorov algorithm [6].
Although GPS is now recognised as seminal, it was well ahead of its time and, in particular, was published years before the computing power revolution of Moore's law and GPUs. Significantly, GPS was published in a mathematical statistics journal rather than a computer vision journal, and this led to it being overlooked by the computer vision community for many years. It is unofficially known as the "Velvet Underground" paper of computer vision: although very few computer vision researchers read the paper (bought the record), those who did started new research (formed a band).
Although the general k-colour problem is NP-hard for k > 2, the GPS approach has turned out to have wide applicability in general computer vision problems. This was first demonstrated by Boykov, Veksler and Zabih [2], whose seminal paper, published more than ten years after the original GPS paper, together with other important papers, sparked the widespread adoption of graph cut techniques in computer vision. They showed that, for general multi-label problems, the GPS approach can be applied iteratively to sequences of binary problems to yield near-optimal solutions.
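This iterative reduction can be outlined schematically (a sketch in the spirit of expansion moves; solve_expansion_move and energy are hypothetical placeholders for a binary graph cut subroutine and an energy evaluation, respectively):

```python
# Schematic outer loop: a multi-label problem is reduced to a sequence of binary graph cut
# problems, one per label. solve_expansion_move() is hypothetical; it stands for a binary
# min-cut that decides, for every pixel, whether to keep its current label or switch to alpha.
def alpha_expansion(labels, label_set, energy, solve_expansion_move):
    """Iterate expansion moves until no single move lowers the energy."""
    improved = True
    while improved:
        improved = False
        for alpha in label_set:
            # Binary subproblem: each pixel either keeps its label or adopts alpha.
            candidate = solve_expansion_move(labels, alpha)
            if energy(candidate) < energy(labels):
                labels, improved = candidate, True
    return labels
```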
Despite the foundational nature of the GPS work, formal recognition from the computer vision community has predominantly gone to the researchers who followed them to extend and popularise the graph cut method. For example, Boykov, Veksler and Zabih [2] received a Helmholtz Prize from the ICCV in 2011 [7]. This prize recognises ICCV papers from 10 or more years earlier that have had a significant impact on computer vision research.
In 2011, Couprie et al. [8] proposed a general image segmentation framework, called the "Power Watershed", that minimized a real-valued indicator function from [0,1] over a graph, constrained by user seeds (or unary terms) set to 0 or 1, in which the minimization of the indicator function over the graph is optimized with respect to an exponent p. When p = 1, the Power Watershed is optimized by graph cuts, when p = 0 the Power Watershed is optimized by shortest paths, p = 2 is optimized by the random walker algorithm, and p = ∞ is optimized by the watershed algorithm. In this way, the Power Watershed may be viewed as a generalization of graph cuts that provides a straightforward connection with other energy optimization segmentation/clustering algorithms.
The segmentation energy E is composed of two different terms (E_color and E_coherence), E = E_color + E_coherence:
E_color — unary term describing the likelihood of each color (how well a pixel's color fits the model of its assigned label).
E_coherence — binary term describing the coherence between neighborhood pixels (penalising label changes between neighbors, particularly where the image is smooth).
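A minimal sketch of this two-term energy, with an assumed (not paper-specific) functional form: per-pixel colour negative log-likelihoods for the unary term and a contrast-sensitive Potts penalty between 4-neighbours for the coherence term.

```python
import numpy as np

def segmentation_energy(labels, img, fg_nll, bg_nll, lam=2.0, sigma=10.0):
    """labels: {0,1} array; img: greyscale float array; fg_nll/bg_nll: per-pixel colour costs."""
    # E_color: cost of each pixel's colour under the model of its assigned label.
    e_color = np.where(labels == 1, fg_nll, bg_nll).sum()
    # E_coherence: penalise neighbouring pixels with different labels, less so across image edges.
    e_coherence = 0.0
    for axis in (0, 1):
        label_change = (np.diff(labels, axis=axis) != 0)
        intensity_step = np.diff(img.astype(float), axis=axis)
        weight = np.exp(-(intensity_step ** 2) / (2 * sigma ** 2))
        e_coherence += lam * (label_change * weight).sum()
    return e_color + e_coherence
```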
Graph cut methods have become popular alternatives to the level set-based approaches for optimizing the location of a contour (see [9] for an extensive comparison). However, graph cut approaches have been criticized in the literature for several issues:
While graph cuts provide mathematically optimal solutions for specific energy functions, their use as a standalone method for general object recognition declined with the advent of deep learning. The primary limitations include:
However, they remain a standard tool for interactive segmentation (e.g., rotoscoping in visual effects), where a user provides the semantic intent (via scribbles) and the algorithm handles the boundary precision. As described below, they are increasingly being integrated into modern artificial intelligence.
The Boykov-Kolmogorov algorithm [6] is an efficient way to compute the max-flow for computer vision-related graphs.
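A brief usage sketch with the third-party PyMaxflow Python binding, which wraps a Boykov-Kolmogorov max-flow implementation; the grid helpers below follow PyMaxflow's documented API, and the 8-bit image and constant smoothness weight are illustrative assumptions:

```python
import numpy as np
import maxflow

def bk_binary_restore(img, smoothness=50):
    """img: 2-D uint8 array; returns a binary (0/255) restoration via one s-t min-cut."""
    g = maxflow.Graph[int]()
    nodeids = g.add_grid_nodes(img.shape)
    g.add_grid_edges(nodeids, smoothness)        # pairwise capacities between grid neighbours
    g.add_grid_tedges(nodeids, img, 255 - img)   # terminal capacities from the data term
    g.maxflow()                                  # run the Boykov-Kolmogorov max-flow search
    segments = g.get_grid_segments(nodeids)      # True for pixels on the sink side of the cut
    return np.logical_not(segments).astype(np.uint8) * 255
```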
The Sim Cut algorithm [17] approximates the minimum graph cut by simulating an electrical network, following the approach suggested by Cederbaum's maximum flow theorem. [18] [19] The algorithm can be accelerated through parallel computing.
As of the mid-2020s, graph cuts have evolved from standalone solvers into components within deep learning frameworks. While convolutional neural networks (CNNs) and Transformers excel at semantic recognition, they often produce boundaries that lack geometric precision. Graph cut algorithms are used to address this by enforcing global consistency and edge-alignment.
Traditional max-flow/min-cut algorithms are discrete and non-differentiable, preventing their direct use in backpropagation. To overcome this, researchers have developed "soft" or differentiable relaxations of the graph cut objective. Methods such as Probabilistic Graph Cuts or SoftCut allow the gradient of the energy function to be computed with respect to the edge weights. This enables a neural network to learn the parameters of the energy function (the cost of cutting specific edges) end-to-end, effectively treating the graph cut solver as a specific layer within the network architecture.
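One simple differentiable relaxation of the pairwise cut cost (an illustrative sketch, not a specific published "SoftCut" implementation): for a soft foreground probability map the hard indicator [x_i ≠ x_j] is replaced by |p_i − p_j|, weighted by contrast-sensitive edge costs, so gradients can flow to both the network output and the parameters of the energy.

```python
import torch

def soft_cut_energy(p, img, lam=1.0, sigma=0.1):
    """p, img: tensors shaped (B, 1, H, W); returns a scalar, differentiable cut energy."""
    def pair_term(a, b, ia, ib):
        # Contrast-sensitive weight: cutting across a strong image edge is cheap.
        w = torch.exp(-((ia - ib) ** 2) / (2 * sigma ** 2))
        return (w * (a - b).abs()).sum()
    e = pair_term(p[..., 1:, :], p[..., :-1, :], img[..., 1:, :], img[..., :-1, :])   # vertical pairs
    e = e + pair_term(p[..., 1:], p[..., :-1], img[..., 1:], img[..., :-1])           # horizontal pairs
    return lam * e
```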
In weakly supervised learning and medical image segmentation, the graph cut energy formulation is often utilized as a regularization loss function (sometimes termed a "Graph Cut Loss" or "Boundary Loss"). Instead of running a solver during inference, the network is trained to minimize a loss term that approximates the min-cut energy. This penalizes the network for predicting noisy or fuzzy boundaries, forcing the output segmentation to align with high-contrast edges in the source image without requiring an iterative solver at inference time.
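An illustrative sketch of such a relaxed cut energy used as a regularisation loss alongside a sparse supervised term; soft_cut_energy refers to the sketch above, and beta, the scribble mask and its labels are assumptions of this example rather than a specific published loss.

```python
import torch
import torch.nn.functional as F

def training_loss(logits, img, scribble_mask, scribble_labels, beta=0.1):
    """scribble_mask: boolean tensor marking annotated pixels; scribble_labels: their 0/1 values."""
    p = torch.sigmoid(logits)
    # Supervised term applied only where (weak) annotations exist.
    supervised = F.binary_cross_entropy(p[scribble_mask], scribble_labels[scribble_mask].float())
    # Cut-energy term pushes the dense prediction toward sharp, edge-aligned boundaries.
    return supervised + beta * soft_cut_energy(p, img)
```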
With the rise of multimodal large language models (MLLMs) and vision foundation models (such as the Segment Anything Model), graph cuts have found a renewed utility in the data curation pipeline.
Training these large-scale models requires massive datasets of high-quality segmentation masks, which are prohibitively expensive to generate manually pixel-by-pixel. Graph cut algorithms are employed to scale this process via weak supervision:
Pseudo-label generation: Annotators provide cheap inputs (bounding boxes or text prompts), and graph cut algorithms propagate these cues to generate dense, pixel-accurate masks (ground truth) used to train the transformer models, as sketched below.
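A sketch of box-to-mask pseudo-label generation using OpenCV's GrabCut, which performs iterated graph cuts with colour models re-estimated between cuts; the file name and the (x, y, w, h) box format are illustrative assumptions.

```python
import cv2
import numpy as np

def box_to_mask(img_bgr, box, iters=5):
    """Propagate a cheap bounding-box annotation to a dense binary pseudo-label mask."""
    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, box, bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
    # Foreground = definite or probable foreground labels returned by GrabCut.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

# Example: turn an annotator's box into a training mask for a segmentation model.
# pseudo_mask = box_to_mask(cv2.imread("photo.jpg"), box=(40, 30, 200, 160))
```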