NLPQLP

The Fortran subroutine NLPQLP, a newer version of NLPQL, solves smooth nonlinear programming problems by a sequential quadratic programming (SQP) algorithm. The new version is specifically tuned to run under distributed systems. In case of computational errors, caused for example by inaccurate function or gradient evaluations, a non-monotone line search is activated. The code is easily transformed to C by f2c and is widely used in academia and industry.
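NLPQLP's own line search is implemented inside the Fortran code; as a rough illustration of the general idea of a non-monotone line search, the Python sketch below accepts a trial step as soon as it improves on the worst of the last few merit-function values (a Grippo-style test), rather than requiring improvement over the most recent value, which makes the search tolerant of noisy function evaluations. All names and constants here are illustrative, not taken from NLPQLP.

```python
def nonmonotone_line_search(merit, x, d, slope, history, m=5,
                            beta=0.5, gamma=1e-4, max_iter=30):
    """Backtracking line search with a non-monotone acceptance test.

    A step is accepted when the merit value improves on the worst of
    the last m accepted values rather than on the most recent one only.
    `slope` is the directional derivative of the merit function along d.
    """
    reference = max(history[-m:])                 # worst recent merit value
    alpha = 1.0
    for _ in range(max_iter):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        if merit(trial) <= reference + gamma * alpha * slope:
            return alpha, trial                   # step accepted
        alpha *= beta                             # otherwise backtrack
    return alpha, trial                           # fall back to the last trial

# toy usage: minimise f(x) = x^2 along the steepest-descent direction
f = lambda x: x[0] ** 2
alpha, x_new = nonmonotone_line_search(f, [3.0], [-6.0], slope=-36.0, history=[9.0])
```

In an SQP solver the merit function would typically be an augmented Lagrangian or an exact penalty function of the constrained problem rather than the raw objective.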

Related Research Articles

Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO) and multidisciplinary design analysis and optimization (MDAO).

Hidden-line removal

Solid objects are usually modeled by polyhedra in a computer representation. A face of a polyhedron is a planar polygon bounded by straight line segments, called edges. Curved surfaces are usually approximated by a polygon mesh. Computer programs for line drawings of opaque objects must be able to decide which edges or which parts of the edges are hidden by an object itself or by other objects. This problem is known as hidden-line removal.

Coarray Fortran (CAF), formerly known as F--, started as an extension of Fortran 95/2003 for parallel processing created by Robert Numrich and John Reid in the 1990s. The Fortran 2008 standard now includes coarrays, as decided at the May 2005 meeting of the ISO Fortran Committee; the syntax in the Fortran 2008 standard is slightly different from the original CAF proposal.

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.

SNOPT, for Sparse Nonlinear OPTimizer, is a software package for solving large-scale nonlinear optimization problems written by Philip Gill, Walter Murray and Michael Saunders. SNOPT is mainly written in Fortran, but interfaces to C, C++, Python and MATLAB are available.

Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable.
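In its most common form, an SQP method obtains a search direction at the current iterate x_k by solving a quadratic programming subproblem built from local models of the objective and constraints, for example

\[
\min_{d}\ \nabla f(x_k)^{T} d + \tfrac{1}{2}\, d^{T} B_k d
\quad \text{s.t.} \quad
c_i(x_k) + \nabla c_i(x_k)^{T} d = 0 \ (i \in \mathcal{E}), \qquad
c_i(x_k) + \nabla c_i(x_k)^{T} d \ge 0 \ (i \in \mathcal{I}),
\]

where B_k approximates the Hessian of the Lagrangian (for instance by quasi-Newton updates) and the QP solution d_k serves as the search direction for a subsequent line search.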

COIN-OR

Computational Infrastructure for Operations Research (COIN-OR) is a project that aims to "create for mathematical software what the open literature is for mathematical theory." The open literature provides the operations research (OR) community with a peer-review process and an archive. Papers in operations research journals on mathematical theory often contain supporting numerical results from computational studies. The software implementations, models, and data used to produce the numerical results are typically not published. This status quo impeded researchers who needed to reproduce computational results, make fair comparisons, and extend the state of the art.

UTEXAS

UTEXAS is a slope stability analysis program written by Stephen G. Wright of the University of Texas at Austin. The program is used in the field of civil engineering to analyze levees, earth dams, natural slopes, and anywhere there is concern for mass wasting. UTEXAS finds the factor of safety for the slope and the critical failure surface. Recently the software was used to help determine the reasons behind the failure of I-walls during Hurricane Katrina.
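As general background (not specific to UTEXAS's implementation), the factor of safety in limit-equilibrium slope stability analysis is usually defined as the ratio of the available shear strength to the shear stress required for equilibrium along a candidate failure surface,

\[
F = \frac{s}{\tau} = \frac{\text{available shear strength}}{\text{shear stress required for equilibrium}},
\]

so that F = 1 marks incipient failure and F > 1 indicates a stable slope under the assumed conditions; the critical failure surface is the candidate surface with the lowest F.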

FortMP is a software package for solving large-scale optimization problems. It solves linear programming, quadratic programming and mixed integer programming problems. Its robustness has been studied in work published in the journal Mathematical Programming. FortMP is available as a standalone executable that accepts input in MPS format and as a library with interfaces in C and Fortran. It is also supported in the AMPL modeling system.

Dimitri Bertsekas

Dimitri Panteli Bertsekas is an applied mathematician, electrical engineer, and computer scientist. He is a McAfee Professor in the Department of Electrical Engineering and Computer Science within the School of Engineering at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, and also Fulton Professor of Computational Decision Making at Arizona State University in Tempe.

Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective; the difference is that the augmented Lagrangian method adds yet another term, designed to mimic a Lagrange multiplier. The augmented Lagrangian is related to, but not identical with, the method of Lagrange multipliers.
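For an equality-constrained problem min_x f(x) subject to c(x) = 0, one common form of the augmented Lagrangian is

\[
\mathcal{L}_A(x, \lambda; \mu) = f(x) - \lambda^{T} c(x) + \frac{\mu}{2}\, \lVert c(x) \rVert^{2},
\]

where the quadratic term is the penalty and the \lambda^{T} c(x) term plays the role of the Lagrange multipliers; \lambda is updated between the unconstrained minimizations (for example \lambda \leftarrow \lambda - \mu\, c(x)), so that, unlike in pure penalty methods, \mu need not be driven to infinity. Sign conventions vary between texts.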

NPSOL is a software package that performs numerical optimization. It solves nonlinear constrained problems using the sequential quadratic programming algorithm. It was written in Fortran by Philip Gill of UCSD and Walter Murray, Michael Saunders and Margaret Wright of Stanford University. The name derives from a combination of NP for nonlinear programming and SOL, the Systems Optimization Laboratory at Stanford.

For several years parallel hardware was available only for distributed computing, but recently it has become available for low-end computers as well, so it has become increasingly important for software developers to write parallel applications. Programmers tend to think sequentially, however, and are less accustomed to writing multi-threaded or parallel applications. Parallel programming also requires handling issues such as synchronization and deadlock avoidance, which demands expertise beyond the application domain itself. Programmers therefore prefer to write sequential code, which most popular programming languages support and which lets them concentrate on the application. This creates a need to convert sequential applications to parallel ones with the help of automated tools. The need is far from trivial, because a large amount of legacy code written over the past few decades has to be reused and parallelized.

In computer science, a monotone priority queue is a variant of the priority queue abstract data type in which the priorities of extracted items are required to form a monotonic sequence. That is, for a priority queue in which each successively extracted item is the one with the minimum priority, the minimum priority should be monotonically increasing. Conversely, for a max-heap, the maximum priority should be monotonically decreasing. The assumption of monotonicity arises naturally in several applications of priority queues, and can be used as a simplifying assumption to speed up certain types of priority queues.
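As a small, hypothetical illustration of the contract (not tied to any particular library), the wrapper below delegates to an ordinary binary min-heap and merely checks that the promise is kept; the point of monotone priority queues is that specialised implementations, such as the bucket queue described below, can exploit this promise for speed.

```python
import heapq
import itertools

class MonotonePriorityQueue:
    """Min-priority queue whose clients promise that extracted priorities
    never decrease and that no insertion goes below the last extraction."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()        # tie-breaker so items are never compared
        self._last_extracted = float("-inf")

    def push(self, priority, item):
        if priority < self._last_extracted:
            raise ValueError("insertion would violate monotonicity")
        heapq.heappush(self._heap, (priority, next(self._counter), item))

    def pop(self):
        priority, _, item = heapq.heappop(self._heap)
        self._last_extracted = priority          # extracted priorities never decrease
        return priority, item
```

Dijkstra's algorithm with non-negative edge weights is the classic client: the distance label of each extracted vertex never decreases, so it satisfies this contract.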

Artelys Knitro is a commercial software package for solving large-scale nonlinear mathematical optimization problems.

Bucket queue

A bucket queue is a data structure that implements the priority queue abstract data type: it maintains a dynamic collection of elements with numerical priorities and allows quick access to the element with minimum priority. In the bucket queue, the priorities must be integers, and it is particularly suited to applications in which the priorities have a small range. A bucket queue has the form of an array of buckets: an array data structure, indexed by the priorities, whose cells contain collections of items with the same priority as each other. With this data structure, insertion of elements and changes of their priority take constant time. Searching for and removing the minimum-priority element takes time proportional to the number of buckets or, by maintaining a pointer to the most recently found bucket, time proportional to the difference in priorities between successive operations.
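A minimal sketch of the structure just described, in illustrative Python and assuming a fixed priority range 0..max_priority:

```python
class BucketQueue:
    """Priority queue for integer priorities in the range 0..max_priority.

    Bucket p holds the items whose priority is p, so insertion touches a
    single bucket; extract-min scans forward from a cursor that remembers
    the last non-empty bucket it found.
    """

    def __init__(self, max_priority):
        self.buckets = [[] for _ in range(max_priority + 1)]
        self.cursor = 0          # lowest index that might hold the minimum
        self.size = 0

    def insert(self, item, priority):
        self.buckets[priority].append(item)       # constant time
        self.cursor = min(self.cursor, priority)  # the minimum may have moved down
        self.size += 1

    def extract_min(self):
        if self.size == 0:
            raise IndexError("extract_min from an empty bucket queue")
        while not self.buckets[self.cursor]:      # advance to first non-empty bucket
            self.cursor += 1
        self.size -= 1
        return self.buckets[self.cursor].pop()
```

In the monotone setting of the previous paragraph, insertions never fall below the last extracted priority, so the cursor only moves forward and the scanning cost matches the difference-in-priorities bound quoted above.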

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters are learned.
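As a small, hypothetical illustration of the distinction: in the sketch below the entries of the grid (for example a regularisation strength) are hyperparameters chosen by an outer search, while the model's own parameters are learned from data inside each call to the user-supplied training function. The names and the toy scoring function are made up.

```python
from itertools import product

def grid_search(train_and_score, grid):
    """Exhaustive (grid) hyperparameter search.

    `train_and_score(**hyperparams)` is assumed to train a model with the
    given hyperparameters and return a validation score to be maximised.
    """
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(**params)   # inner training learns the model parameters
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# toy usage with a made-up scoring function peaking at (0.1, 0.01)
best, score = grid_search(
    lambda reg_strength, learning_rate:
        -(reg_strength - 0.1) ** 2 - (learning_rate - 0.01) ** 2,
    {"reg_strength": [0.01, 0.1, 1.0], "learning_rate": [0.001, 0.01, 0.1]},
)
```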

SuanShu is a Java math library, open-source under the Apache License 2.0 and available on GitHub. It is a large collection of Java classes for basic numerical analysis, statistics, and optimization, and it implements a parallel version of the adaptive Strassen algorithm for fast matrix multiplication. SuanShu has been cited and used in a number of academic works.
