Superiorization

Superiorization is an iterative method for constrained optimization. It improves the efficacy of an iterative algorithm whose convergence is resilient to certain kinds of perturbations. Such perturbations are designed to "force" the perturbed algorithm to produce results that are more useful for the intended application than those produced by the original algorithm. The perturbed algorithm is called the superiorized version of the original unperturbed algorithm. If the original algorithm is computationally efficient and useful for the target application, and if the perturbations are inexpensive to calculate, the method may be used to steer the iterates at little additional computational cost.

Areas of application

The superiorization methodology is very general and has been used successfully in many important practical applications, such as iterative reconstruction of images from their projections,[1][2][3] single-photon emission computed tomography,[4] radiation therapy,[5][6][7] and nondestructive testing,[8] to name just a few. A special issue of the journal Inverse Problems[9] is devoted to superiorization, both theory[10][11][12] and applications.[3][6][7]

Objective function reduction and relation with constrained optimization

An important case of superiorization is when the original algorithm is "feasibility-seeking" (in the sense that it strives to find some point in a feasible region that is compatible with a family of constraints) and the perturbations that are introduced into the original iterative algorithm aim at reducing (not necessarily minimizing) a given merit function. In this case, superiorization has a unique place in optimization theory and practice.

Many constrained optimization methods are based on methods for unconstrained optimization that are adapted to deal with constraints. Such is, for example, the class of projected gradient methods, wherein the unconstrained minimization inner step "leads" the process and a projection onto the whole constraints set (the feasible region) is performed after each minimization step in order to regain feasibility. This projection onto the constraints set is in itself a non-trivial optimization problem, and the need to solve it in every iteration hinders projected gradient methods and limits their efficacy to feasible sets that are "simple to project onto". Barrier methods and penalty methods are likewise based on unconstrained optimization combined with various add-ons that guarantee that the constraints are preserved. Regularization methods embed the constraints into a "regularized" objective function and proceed with unconstrained solution methods for the new regularized objective function.
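For illustration, the following minimal sketch (hypothetical code, not taken from the cited literature) shows the projected-gradient pattern just described; the feasible set is deliberately a box, one of the few sets that are cheap to project onto:

```python
import numpy as np

def projected_gradient(grad, project, x0, step=0.1, iters=200):
    """Projected gradient: an unconstrained gradient step "leads" the
    process, and a projection onto the feasible set restores
    feasibility after every step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Minimize ||x - c||^2 over the box [0, 1]^3; projection onto a box
# is just clipping, so this feasible set is "simple to project onto".
c = np.array([1.5, -0.3, 0.7])
x = projected_gradient(grad=lambda x: 2 * (x - c),
                       project=lambda x: np.clip(x, 0.0, 1.0),
                       x0=np.zeros(3))
print(x)  # approx. [1.0, 0.0, 0.7]
```

If the feasible set were instead, say, an intersection of many convex sets, the `project` call would itself be a non-trivial optimization problem, which is exactly the bottleneck noted above.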

In contrast to these approaches, the superiorization methodology can be viewed as an antipodal way of thinking: instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce merit function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. Furthermore, general-purpose approaches have been developed for automatically superiorizing iterative algorithms for large classes of constraints sets and merit functions; these provide algorithms for many application tasks.
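To make the idea concrete, here is a purely illustrative sketch (the function names are invented for this example), assuming the feasibility-seeking algorithm is a sweep of orthogonal projections onto half-spaces and the merit function is the squared norm. Before each feasibility-seeking sweep, the iterate is nudged along a nonascending direction of the merit function, with summable step sizes so the perturbations stay bounded:

```python
import numpy as np

def proj_halfspace(a, b):
    """Orthogonal projection onto the half-space {x : a.x >= b}."""
    return lambda x: x if a @ x >= b else x + (b - a @ x) / (a @ a) * a

def superiorized(steps, phi, grad_phi, x0, beta0=1.0, gamma=0.9, iters=200):
    """Superiorized feasibility-seeking: perturb along a nonascending
    direction of the merit function phi with summable step sizes
    beta0 * gamma**k, then apply the unchanged feasibility-seeking sweep."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad_phi(x)
        gnorm = np.linalg.norm(g)
        if gnorm > 0:
            x_try = x - beta0 * gamma**k * g / gnorm  # bounded perturbation
            if phi(x_try) <= phi(x):                  # keep phi non-increasing
                x = x_try
        for P in steps:                               # original algorithmic step
            x = P(x)
    return x

# Feasible set: {x : x1 >= 1} ∩ {x : x2 >= 0.5}; merit: phi(x) = ||x||^2.
steps = [proj_halfspace(np.array([1.0, 0.0]), 1.0),
         proj_halfspace(np.array([0.0, 1.0]), 0.5)]
x = superiorized(steps, phi=lambda x: x @ x, grad_phi=lambda x: 2 * x,
                 x0=np.array([5.0, 5.0]))
print(x)  # near the minimum-norm feasible point (1.0, 0.5)
```

Because the perturbation steps are summable, the perturbed iteration retains the feasibility-seeking behavior of the original sweep: plain alternating projection started at (5, 5) would simply stay there, while the superiorized run drifts toward a feasible point with a smaller merit value.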

Further sources

The superiorization methodology and perturbation resilience of algorithms are reviewed in [13][14][15]; see also [16]. Current work on superiorization can be followed on a continuously updated webpage.[17] SNARK14[18] is a software package for the reconstruction of 2D images from 1D projections that has a built-in capability to superiorize any iterative algorithm for any merit function.

Related Research Articles

<span class="mw-page-title-main">Mathematical optimization</span> Study of mathematical algorithms for optimization problems

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.

<span class="mw-page-title-main">Tomographic reconstruction</span> Estimate object properties from a finite number of projections

Tomographic reconstruction is a type of multidimensional inverse problem where the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable application is computed tomography (CT), in which cross-sectional images of patients are obtained in a non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security.

Gabor Tamas Herman is a Hungarian-American professor of computer science. He is Emeritus Professor of Computer Science at The Graduate Center, City University of New York (CUNY), where he was Distinguished Professor until 2017. He is known for his work on computerized tomography. He is a fellow of the Institute of Electrical and Electronics Engineers (IEEE).

<span class="mw-page-title-main">Iterative reconstruction</span>

Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step. Recent research has shown that extremely fast computations and massive parallelism are possible for iterative reconstruction, which makes it practical for commercialization.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving systems of linear equations. It was first discovered by the Polish mathematician Stefan Kaczmarz, and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART). ART includes the positivity constraint, making it nonlinear.
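A minimal sketch of the Kaczmarz sweep (illustrative code; the `nonneg` flag stands in for ART's positivity constraint):

```python
import numpy as np

def kaczmarz(A, b, x0=None, sweeps=50, nonneg=False):
    """Kaczmarz sweeps: project the iterate onto each hyperplane
    {x : a_i . x = b_i} in turn. With nonneg=True the iterate is
    clipped at zero after each sweep (ART's positivity constraint)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a  # projection onto row i
        if nonneg:
            x = np.maximum(x, 0.0)
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(kaczmarz(A, b))  # approaches the solution [1, 3]
```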

<span class="mw-page-title-main">Discrete tomography</span> Reconstruction of binary images from a small number of their projections

Discrete tomography focuses on the problem of reconstruction of binary images from a small number of their projections.

Penalty methods are a certain class of algorithms for solving constrained optimization problems.

Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.

Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals.

The Bregman method is an iterative algorithm to solve certain convex optimization problems involving regularization. The original version is due to Lev M. Bregman, who published it in 1967.

<span class="mw-page-title-main">Iterated local search</span>

Iterated local search (ILS) is a term in applied mathematics and computer science for a modification of local search or hill climbing methods for solving discrete optimization problems.

Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective; the difference is that the augmented Lagrangian method adds yet another term, designed to mimic a Lagrange multiplier. The augmented Lagrangian is related to, but not identical with the method of Lagrange multipliers.
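A toy sketch of the pattern just described, for a single equality constraint (hypothetical helper names; the inner unconstrained solves use scipy.optimize.minimize):

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, mu=10.0, outer=20):
    """Minimize f(x) subject to h(x) = 0 by repeatedly minimizing the
    augmented Lagrangian f + lam*h + (mu/2)*h^2 (penalty term plus a
    multiplier term) and updating the multiplier estimate."""
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer):
        obj = lambda z: f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
        x = minimize(obj, x).x      # unconstrained inner solve
        lam = lam + mu * h(x)       # Lagrange-multiplier update
    return x

# Minimize x1^2 + x2^2 subject to x1 + x2 - 2 = 0  ->  (1, 1)
f = lambda z: z @ z
h = lambda z: z[0] + z[1] - 2.0
print(augmented_lagrangian(f, h, [0.0, 0.0]))
```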

Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, then exactly or inexactly minimizes over the corresponding coordinate hyperplane while fixing all other coordinates or coordinate blocks. A line search along the coordinate direction can be performed at the current iterate to determine the appropriate step size. Coordinate descent is applicable in both differentiable and derivative-free contexts.
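A minimal sketch (illustrative, not a library routine) of exact coordinate descent on a positive-definite quadratic, where each one-dimensional minimization has a closed form:

```python
import numpy as np

def coordinate_descent(Q, b, x0=None, sweeps=50):
    """Exact coordinate descent for f(x) = 0.5 x'Qx - b'x with Q
    symmetric positive definite: minimize over one coordinate at a
    time while holding the others fixed (a Gauss-Seidel sweep)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(n):
            # closed-form minimizer of f along coordinate i
            x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
    return x

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
print(coordinate_descent(Q, b))  # approaches Q^{-1} b = [1, 2]
```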

The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber, and it can be now viewed as a special case of many other more general methods.
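A minimal sketch of the basic (linear, unconstrained) Landweber iteration, assuming a consistent least-squares setting:

```python
import numpy as np

def landweber(A, b, omega=None, iters=500):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k);
    converges for relaxation 0 < omega < 2 / sigma_max(A)^2."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / sigma_max^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * A.T @ (b - A @ x)
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, 1.0]])
b = A @ np.array([1.0, 3.0])   # consistent overdetermined system
print(landweber(A, b))         # approaches the least-squares solution [1, 3]
```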

<span class="mw-page-title-main">Proximal gradient method</span>

Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems.

Simultaneous algebraic reconstruction technique (SART) is a computerized tomography (CT) imaging algorithm useful in cases where the projection data is limited; it was proposed by Anders Andersen and Avinash Kak in 1984. It generates a good reconstruction in just one iteration and is superior to the standard algebraic reconstruction technique (ART).

Electrical capacitance volume tomography (ECVT) is a non-invasive 3D imaging technology, originally developed in the UK and Poland and applied primarily to multiphase flows. It was later re-introduced by W. Warsito, Q. Marashdeh, and L.-S. Fan, inspired by the early publications of the UK and Polish teams, as an extension of conventional electrical capacitance tomography (ECT). In conventional ECT, sensor plates are distributed around a surface of interest, and the measured capacitance between plate combinations is used to reconstruct 2D images (tomograms) of the material distribution. In ECT, the fringing field from the edges of the plates is viewed as a source of distortion in the final reconstructed image and is therefore mitigated by guard electrodes. ECVT instead exploits this fringing field, expanding it through 3D sensor designs that deliberately establish an electric field variation in all three dimensions. The image reconstruction algorithms are similar in nature to those of ECT; nevertheless, the reconstruction problem in ECVT is more complicated: the sensitivity matrix of an ECVT sensor is more ill-conditioned, and the overall reconstruction problem is more ill-posed than in ECT.

The ECVT approach to sensor design allows direct 3D imaging of the enclosed geometry. This differs from 3D-ECT, which relies on stacking images from individual ECT sensors. 3D-ECT can also be accomplished by stacking frames from a sequence of time intervals of ECT measurements. Because the ECT sensor plates are required to have lengths on the order of the domain cross-section, 3D-ECT does not provide the required resolution in the axial dimension. ECVT avoids the stacking approach altogether by using a sensor that is inherently three-dimensional.

References

  1. G.T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, Springer-Verlag, London, UK, 2nd Edition, 2009. doi: 10.1007/978-1-84628-723-7
  2. E.S. Helou, M.V.W. Zibetti and E.X. Miqueles, Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction, Inverse Problems, Vol. 33 (2017), 044010. doi: 10.1088/1361-6420/33/4/044010
  3. Q. Yang, W. Cong and G. Wang, Superiorization-based multi-energy CT image reconstruction, Inverse Problems, Vol. 33 (2017), 044014. doi: 10.1088/1361-6420/aa5e0a
  4. S. Luo and T. Zhou, Superiorization of EM algorithm and its application in single-photon emission computed tomography (SPECT), Inverse Problems and Imaging, Vol. 8, pp. 223–246, (2014). doi: 10.3934/ipi.2014.8.223
  5. R. Davidi, Y. Censor, R.W. Schulte, S. Geneser and L. Xing, Feasibility-seeking and superiorization algorithms applied to inverse treatment planning in radiation therapy, Contemporary Mathematics, Vol. 636, pp. 83–92, (2015). doi: 10.1090/conm/636/12729
  6. E. Bonacker, A. Gibali, K-H. Küfer and P. Süss, Speedup of lexicographic optimization by superiorization and its applications to cancer radiotherapy treatment, Inverse Problems, Vol. 33 (2017), 044012. doi: 10.1088/1361-6420/33/4/044012
  7. J. Zhu and S. Penfold, Total variation superiorization in dual-energy CT reconstruction for proton therapy treatment planning, Inverse Problems, Vol. 33 (2017), 044013. doi: 10.1088/1361-6420/33/4/044013
  8. M.J. Schrapp and G.T. Herman, Data fusion in X-ray computed tomography using a superiorization approach, Review of Scientific Instruments, Vol. 85, 053701 (9pp), (2014). doi: 10.1063/1.4872378
  9. Superiorization: Theory and Applications, Special Issue of the journal Inverse Problems, Volume 33, Number 4, April 2017.
  10. H. He and H-K. Xu, Perturbation resilience and superiorization methodology of averaged mappings, Inverse Problems, Vol. 33 (2017), 044007. doi: 10.1088/1361-6420/33/4/044007
  11. H-K. Xu, Bounded perturbation resilience and superiorization techniques for the projected scaled gradient method, Inverse Problems, Vol. 33 (2017), 044008. doi: 10.1088/1361-6420/33/4/044008
  12. T. Nikazad and M. Abbasi, A unified treatment of some perturbed fixed point iterative methods with an infinite pool of operators, Inverse Problems, Vol. 33 (2017), 044002. doi: 10.1088/1361-6420/33/4/044002
  13. G.T. Herman, E. Garduño, R. Davidi and Y. Censor, Superiorization: An optimization heuristic for medical physics, Medical Physics, Vol. 39, pp. 5532–5546, (2012). doi: 10.1118/1.4745566
  14. G.T. Herman, Superiorization for image analysis, in: Combinatorial Image Analysis, Lecture Notes in Computer Science Vol. 8466, Springer, 2014, pp. 1–7. doi: 10.1007/978-3-319-07148-0_1
  15. Y. Censor, Weak and strong superiorization: Between feasibility-seeking and minimization, Analele Stiintifice ale Universitatii Ovidius Constanta-Seria Matematica, Vol. 23, pp. 41–54, (2015). doi: 10.1515/auom-2015-0046
  16. Y. Censor, R. Davidi, G.T. Herman, R.W. Schulte and L. Tetruashvili, Projected subgradient minimization versus superiorization, Journal of Optimization Theory and Applications, Vol. 160, pp. 730–747, (2014). doi: 10.1007/s10957-013-0408-3
  17. "Superiorization". math.haifa.ac.il.
  18. "Snark14 – Home". turing.iimas.unam.mx.