Superiorization is an iterative method for constrained optimization. It is used to improve the efficacy of an iterative method whose convergence is resilient to certain kinds of perturbations. Such perturbations are designed to "force" the perturbed algorithm to produce more useful results for the intended application than those produced by the original iterative algorithm. The perturbed algorithm is called the superiorized version of the original unperturbed algorithm. If the original algorithm is computationally efficient and useful for the target application, and if the perturbations are inexpensive to calculate, the method may be used to steer iterates without additional computational cost.
The superiorization methodology is very general and has been used successfully in many important practical applications, such as iterative reconstruction of images from their projections,[1][2][3] single-photon emission computed tomography,[4] radiation therapy[5][6][7] and nondestructive testing,[8] to name a few. A special issue of the journal Inverse Problems[9] is devoted to superiorization, both theory[10][11][12] and applications.[3][6][7]
An important case of superiorization is when the original algorithm is "feasibility-seeking" (in the sense that it strives to find some point in a feasible region that is compatible with a family of constraints) and the perturbations that are introduced into the original iterative algorithm aim at reducing (not necessarily minimizing) a given merit function. In this case, superiorization has a unique place in optimization theory and practice.
Many constrained optimization methods are based on methods for unconstrained optimization that are adapted to deal with constraints. Such is, for example, the class of projected gradient methods, wherein the unconstrained minimization inner step "leads" the process and a projection onto the whole constraints set (the feasible region) is performed after each minimization step in order to regain feasibility. This projection onto the constraints set is in itself a non-trivial optimization problem, and the need to solve it in every iteration hinders projected gradient methods and limits their efficacy to feasible sets that are "simple to project onto". Barrier methods and penalty methods are likewise based on unconstrained optimization combined with various add-ons that guarantee that the constraints are preserved. Regularization methods embed the constraints into a "regularized" objective function and proceed with unconstrained solution methods for the new regularized objective function.
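As an illustrative sketch only, the following Python fragment shows the projected gradient pattern for a feasible set that is "simple to project onto" (here a box, whose projection is a coordinate-wise clip); the objective, step size, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def projected_gradient(grad, project, x0, step=0.1, iters=100):
    """Projected gradient method: an unconstrained descent step
    followed by a projection back onto the feasible set."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))  # descend, then regain feasibility
    return x

# Illustration: minimize ||x - c||^2 over the box [0, 1]^3,
# for which the projection is just np.clip.
c = np.array([1.5, -0.3, 0.7])
x = projected_gradient(
    grad=lambda x: 2.0 * (x - c),
    project=lambda x: np.clip(x, 0.0, 1.0),
    x0=np.zeros(3),
)
print(x)  # approaches [1.0, 0.0, 0.7]
```

For a general feasible set, the `project` step is itself a non-trivial optimization problem, which is precisely the limitation described above.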
In contrast to these approaches, the superiorization methodology can be viewed as an antipodal way of thinking. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce merit function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. Furthermore, general-purpose approaches have been developed for automatically superiorizing iterative algorithms for large classes of constraints sets and merit functions; these provide algorithms for many application tasks.
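The following schematic Python sketch conveys the basic structure; the operator T, the merit function, and all parameters here are illustrative assumptions, not the prescribed method of the cited literature. A feasibility-seeking step x ← T(x) is interleaved with perturbations along a nonascending direction of the merit function, with summable step sizes so that perturbation resilience applies.

```python
import numpy as np

def superiorize(T, merit_grad, x0, beta0=1.0, decay=0.9, iters=200):
    """Superiorized version of the feasibility-seeking iteration x <- T(x):
    before each feasibility step, perturb the iterate in a direction that
    does not increase the merit function, with summable step sizes beta_k."""
    x, beta = x0, beta0
    for _ in range(iters):
        g = merit_grad(x)
        norm = np.linalg.norm(g)
        if norm > 0.0:
            x = x - beta * (g / norm)  # merit-reducing perturbation
        beta *= decay                  # sum of beta_k is finite
        x = T(x)                       # original feasibility-seeking step
    return x

# Illustration: seek a point satisfying x1 >= 1 and x2 >= 1 while steering
# toward a small value of the merit function phi(x) = ||x||^2.
def T(x):                              # projection onto the feasible box
    return np.maximum(x, 1.0)

print(superiorize(T, merit_grad=lambda x: 2.0 * x, x0=np.array([5.0, 5.0])))
```

The unperturbed iteration alone would accept the already-feasible starting point as-is; the superiorized version drifts toward [1, 1], a feasible point with a smaller merit value, without ever solving a minimization problem over the constraints.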
The superiorization methodology and perturbation resilience of algorithms are reviewed in [13][14][15]; see also [16]. Current work on superiorization can be appreciated from a continuously updated Internet page.[17] SNARK14[18] is a software package for the reconstruction of 2D images from 1D projections that has a built-in capability of superiorizing any iterative algorithm for any merit function.
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.
Tomographic reconstruction is a type of multidimensional inverse problem where the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable example of applications is the reconstruction of computed tomography (CT) where cross-sectional images of patients are obtained in non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security.
Gabor Tamas Herman is a Hungarian-American professor of computer science. He is Emeritus Professor of Computer Science at The Graduate Center, City University of New York (CUNY), where he was Distinguished Professor until 2017. He is known for his work on computerized tomography. He is a fellow of the Institute of Electrical and Electronics Engineers (IEEE).
Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step. In recent research works, scientists have shown that extremely fast computations and massive parallelism are possible for iterative reconstruction, which makes iterative reconstruction practical for commercialization.
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving systems of linear equations. It was first discovered by the Polish mathematician Stefan Kaczmarz, and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART). ART includes the positivity constraint, making it nonlinear.
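A minimal Python sketch of one way to implement the cyclic Kaczmarz sweep follows; the system below is an arbitrary illustration, and the ART positivity constraint is indicated in a comment.

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps=100):
    """Kaczmarz method for Ax = b: cyclically project the current
    iterate onto the hyperplane of each equation <a_i, x> = b_i."""
    x = x0.astype(float)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += ((b[i] - a @ x) / (a @ a)) * a  # projection onto hyperplane i
            # ART variant: x = np.maximum(x, 0.0)  # positivity constraint
    return x

A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([5.0, 2.0])
print(kaczmarz(A, b, x0=np.zeros(2)))  # approaches the solution [9/7, 13/7]
```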
Discrete tomography focuses on the problem of reconstruction of binary images from a small number of their projections.
Penalty methods are a certain class of algorithms for solving constrained optimization problems.
Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
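A sketch, assuming diminishing step sizes alpha_k = 1/(k+1) and an arbitrary illustrative objective:

```python
import numpy as np

def subgradient_method(subgrad, x0, iters=1000):
    """Subgradient method x_{k+1} = x_k - alpha_k g_k, where g_k is any
    subgradient at x_k and alpha_k is a diminishing, non-summable step."""
    x = x0.astype(float)
    for k in range(iters):
        x = x - subgrad(x) / (k + 1.0)
    return x

# Illustration: minimize the non-differentiable f(x) = ||x - c||_1;
# np.sign(x - c) is a valid subgradient everywhere.
c = np.array([2.0, -1.0])
print(subgradient_method(lambda x: np.sign(x - c), x0=np.zeros(2)))  # near c
```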
Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the restricted isometry property, which is sufficient for sparse signals.
The Bregman method is an iterative algorithm to solve certain convex optimization problems involving regularization. The original version is due to Lev M. Bregman, who published it in 1967.
Iterated Local Search (ILS) is a term in applied mathematics and computer science for a modification of local search or hill climbing methods for solving discrete optimization problems.
Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective; the difference is that the augmented Lagrangian method adds yet another term, designed to mimic a Lagrange multiplier. The augmented Lagrangian is related to, but not identical with the method of Lagrange multipliers.
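A minimal sketch for the equality-constrained case min f(x) subject to c(x) = 0; the problem, penalty parameter, and inner solver below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, mu=10.0, outer=20):
    """Each outer iteration minimizes the augmented Lagrangian
    f(x) + lam*c(x) + (mu/2)*c(x)^2 without constraints, then updates
    the multiplier estimate lam <- lam + mu*c(x)."""
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(outer):
        aug = lambda y: f(y) + lam * c(y) + 0.5 * mu * c(y) ** 2
        x = minimize(aug, x).x   # unconstrained inner solve
        lam += mu * c(x)         # the term that mimics a Lagrange multiplier
    return x, lam

# Illustration: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
x, lam = augmented_lagrangian(
    f=lambda y: y[0] ** 2 + y[1] ** 2,
    c=lambda y: y[0] + y[1] - 1.0,
    x0=np.zeros(2),
)
print(x, lam)  # x -> [0.5, 0.5]; lam -> -1, the true multiplier
```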
Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, then exactly or inexactly minimizes over the corresponding coordinate hyperplane while fixing all other coordinates or coordinate blocks. A line search along the coordinate direction can be performed at the current iterate to determine the appropriate step size. Coordinate descent is applicable in both differentiable and derivative-free contexts.
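A minimal sketch for a strictly convex quadratic, where each coordinate minimization can be done exactly; for this problem cyclic coordinate descent coincides with the Gauss–Seidel iteration.

```python
import numpy as np

def coordinate_descent(Q, b, x0, sweeps=100):
    """Cyclic coordinate descent for f(x) = 0.5*x^T Q x - b^T x with Q
    symmetric positive definite: minimize exactly over one coordinate
    at a time, holding the others fixed."""
    x = x0.astype(float)
    for _ in range(sweeps):
        for i in range(len(x)):
            # Setting df/dx_i = Q[i] @ x - b[i] to zero gives the 1D minimizer.
            x[i] += (b[i] - Q[i] @ x) / Q[i, i]
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(coordinate_descent(Q, b, x0=np.zeros(2)))  # solves Qx = b: [1/11, 7/11]
```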
The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber, and it can now be viewed as a special case of many other more general methods.
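A minimal sketch for the linear least-squares case; the system and relaxation choice are illustrative.

```python
import numpy as np

def landweber(A, b, x0, iters=500):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
    with relaxation 0 < omega < 2/||A||^2 to ensure convergence."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral norm of A
    x = x0.astype(float)
    for _ in range(iters):
        x = x + omega * A.T @ (b - A @ x)
    return x

A = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 2.0]])  # overdetermined system
b = np.array([2.0, 2.0, 2.0])
print(landweber(A, b, x0=np.zeros(2)))  # least-squares solution [1, 1]
```

In ill-posed problems the iteration is typically stopped early, since early termination acts as a form of regularization.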
Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems.
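A minimal sketch of a well-known instance, ISTA for the lasso problem min 0.5*||Ax - b||^2 + lam*||x||_1, where the proximal operator of the l1 term is soft-thresholding; the data below are illustrative.

```python
import numpy as np

def ista(A, b, lam, x0, iters=500):
    """Proximal gradient method (ISTA): a gradient step on the smooth
    term followed by the prox of the nonsmooth l1 term."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size 1/L
    x = x0.astype(float)
    for _ in range(iters):
        z = x - t * A.T @ (A @ x - b)                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # prox step
    return x

A = np.eye(3)
b = np.array([3.0, 0.2, -1.0])
print(ista(A, b, lam=0.5, x0=np.zeros(3)))  # soft-thresholds b: [2.5, 0, -0.5]
```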
Simultaneous algebraic reconstruction technique (SART) is a computerized tomography (CT) imaging algorithm useful in cases when the projection data is limited; it was proposed by Anders Andersen and Avinash Kak in 1984. It generates a good reconstruction in just one iteration and is superior to the standard algebraic reconstruction technique (ART).
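A minimal sketch of the SART update, in which all ray residuals are back-projected simultaneously with row- and column-sum normalization; the toy system stands in for a real projection matrix.

```python
import numpy as np

def sart(A, b, x0, relax=1.0, iters=50):
    """SART: back-project all (row) residuals at once, normalizing each
    residual by its row sum and each pixel update by its column sum."""
    A = np.asarray(A, float)
    row_sums, col_sums = A.sum(axis=1), A.sum(axis=0)
    x = x0.astype(float)
    for _ in range(iters):
        residual = (b - A @ x) / row_sums          # per-ray normalization
        x = x + relax * (A.T @ residual) / col_sums
    return x

# Toy system with nonnegative weights, as in CT projection matrices.
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
b = np.array([3.0, 5.0, 4.0])
print(sart(A, b, x0=np.zeros(3)))  # approaches [1, 2, 3]
```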
Electrical capacitance volume tomography (ECVT) is a non-invasive 3D imaging technology originally developed in the UK and Poland and applied primarily to multiphase flows. It was later re-introduced by W. Warsito, Q. Marashdeh, and L.-S. Fan, inspired by the early publications of the UK and Polish teams, as an extension of conventional electrical capacitance tomography (ECT). In conventional ECT, sensor plates are distributed around a surface of interest, and the measured capacitance between plate combinations is used to reconstruct 2D images (tomograms) of the material distribution. In ECT, the fringing field from the edges of the plates is viewed as a source of distortion in the final reconstructed image and is therefore mitigated by guard electrodes. ECVT, by contrast, exploits this fringing field and expands it through 3D sensor designs that deliberately establish an electric field variation in all three dimensions. The image reconstruction algorithms are similar in nature to those of ECT; nevertheless, the reconstruction problem in ECVT is more complicated: the sensitivity matrix of an ECVT sensor is more ill-conditioned, and the overall reconstruction problem is more ill-posed than in ECT.

The ECVT approach to sensor design allows direct 3D imaging of the enclosed geometry. This differs from 3D-ECT, which relies on stacking images from individual ECT sensors. 3D-ECT can also be accomplished by stacking frames from a sequence of time intervals of ECT measurements. Because the ECT sensor plates are required to have lengths on the order of the domain cross-section, 3D-ECT does not provide the required resolution in the axial dimension. ECVT solves this problem by going directly to the image reconstruction and avoiding the stacking approach, using a sensor that is inherently three-dimensional.